Figure 5 - uploaded by Christoph Scheepers
Example visual stimulus for the sentences "Foodtock will jump onto the sofa" (Upper Path) and "Foodtock will crawl onto the sofa" (Lower Path) in Experiment 2.

Source publication
Article
Full-text available
Motion events in language describe the movement of an entity to another location along a path. In two eye-tracking experiments we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In the first experiment, participants listened to sentences describing...

Similar publications

Article
Full-text available
Concurrent auditory stimuli have been shown to enhance detection of abstract visual targets in experimental setups with little ecological validity. We presented 11 participants, wearing an eye-tracking device, with a visual detection task in an immersive audiovisual environment replicating a real-world environment. The participants were to fixate o...
Article
Full-text available
A choice experiment was conducted concurrently with eye-tracking technology to examine consumer preferences for local and organic produce, notably effects of logo- versus text-labeling formats. We find consumers prefer local to nonlocal, but some consumers will pay a higher premium for logo-labeled produce compared with text-labeled produce. Additi...
Article
Full-text available
Purpose: We aimed at further elucidating whether aphasic patients' difficulties in understanding non-canonical sentence structures, such as Passive or Object-Verb-Subject sentences, can be attributed to impaired morphosyntactic cue recognition, and to problems in integrating competing interpretations. Methods: A sentence-picture matching task wi...
Article
Full-text available
In a concurrent eye-tracking and EEG study, we investigated the impact of salience on the monitoring and control of eye-movement behaviour and the role of visual working memory (VWM) capacity in mediating this effect. Participants made eye movements to a unique line-segment target embedded in a search display also containing a unique distractor. Ta...
Conference Paper
Full-text available
Eye-tracking technology, as an objective-recording method of user’s eye movement data, has widely been applied in human–computer interface usability test. The quantitative analysis of eye movement data is good at comparing different UI designs; however, it can hardly probe the influential factors of usability issues from a UCD perspective. This res...

Citations

... This paucity of evidence on predictive language processing in ASD is in contrast to substantial evidence on incremental processing in language comprehension among neurotypical adults (e.g., Altmann & Kamide, 1999; Altmann & Kamide, 2007; Altmann & Mirković, 2009; Kamide et al., 2003; Kamide et al., 2016; Kang et al., 2020; Kang & Ge, 2022; Knoeferle et al., 2005; Knoeferle & Crocker, 2007; Kukona et al., 2014) and children (e.g., Borovsky et al., 2012; Gambi et al., 2018; Gambi et al., 2016; Huang & Snedeker, 2011; Reuter et al., 2021; Trueswell et al., 1999). The ability to process language incrementally is also associated with individual differences in language skills and nonverbal cognitive abilities, such as vocabulary size (Borovsky et al., 2012; Lew-Williams & Fernald, 2007). ...
Article
Full-text available
The present study aims to fill the research gap by evaluating published empirical studies and answering the specific research question: Can individuals with autism spectrum disorder (ASD) predict upcoming linguistic information during real-time language comprehension? Following the PRISMA framework, an initial search via PubMed, Web of Science, SCOPUS, and Google Scholar yielded a total of 697 records. After screening the abstract and full text, 10 studies, covering 350 children and adolescents with ASD ranging from 2 to 15 years old, were included for analysis. We found that individuals with ASD may predict the upcoming linguistic information by using verb semantics but not pragmatic prosody during language comprehension. Nonetheless, 9 out of 10 studies used short spoken sentences as stimuli, which may not encompass the complexity of language comprehension. Moreover, eye-tracking in the lab setting was the primary data collection technique, which may further limit the generalizability of the research findings. Using a narrative approach to synthesize and evaluate the research findings, we found that individuals with ASD may have the ability to predict the upcoming linguistic information. However, this field of research still calls for more studies that will expand the scope of research topics, utilize more complex linguistic stimuli, and employ more diverse data collection techniques.
... Tracking mouse movements during experimental task performance allows us to access unfolding cognitive processes in their temporal and spatial dynamics (Spivey & Dale, 2006). Mouse tracking has been used to investigate the processing of action words (e.g., Kamide et al., 2016), cardinal directions (e.g., Tower-Richardi et al., 2012), emotion (e.g., Mattek et al., 2016), number (see Faulkenberry et al., 2018, for review), and other magnitude-related stimuli (e.g., pitch of musical tones; Hartmann, 2017). However, we are only aware of one study using mouse tracking to investigate space-time associations. ...
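The snippet above treats cursor trajectories as a record of unfolding cognition. Purely as an illustration (not code from any of the cited studies), the sketch below computes two measures commonly reported in mouse-tracking work, maximum deviation and area under the curve of a trajectory relative to the straight start-to-end line; the function name and sample coordinates are hypothetical.

```python
import numpy as np

def trajectory_measures(xs, ys):
    """Maximum deviation (MD) and area under the curve (AUC) of a mouse
    trajectory relative to the straight line from its first to its last point.

    xs, ys: 1-D sequences of cursor coordinates, ordered in time.
    Returns (md, auc); the sign reflects the side of the straight line
    on which the trajectory deviated.
    """
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    start = np.array([xs[0], ys[0]])
    end = np.array([xs[-1], ys[-1]])
    line = end - start
    length = np.linalg.norm(line)
    if length == 0:
        raise ValueError("start and end points coincide")
    rel = np.column_stack([xs, ys]) - start
    # Signed perpendicular distance of each sample from the straight line
    # (2-D cross product divided by the line length).
    signed_dist = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / length
    md = signed_dist[np.argmax(np.abs(signed_dist))]
    # Approximate AUC by integrating signed distance over progress along the line.
    along = rel @ line / length
    auc = np.trapz(signed_dist, along)
    return md, auc

# Hypothetical trajectory that bows away from the direct path:
xs = [0, 10, 30, 60, 100, 150, 200]
ys = [0, 25, 45, 60, 55, 30, 0]
print(trajectory_measures(xs, ys))
```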
Article
Full-text available
In many Western cultures, the processing of temporal words related to the past and to the future is associated with left and right space, respectively – a phenomenon known as the horizontal Mental Time Line (MTL). While this mapping is apparently quite ubiquitous, its regularity and consistency across different types of temporal concepts remain to be determined. Moreover, it is unclear whether such spatial mappings are an essential and early constituent of concept activation. In the present study, we used words denoting time units at different scales (hours of the day, days of the week, months of the year) associated with either left space (e.g., 9 a.m., Monday, February) or right space (e.g., 8 p.m., Saturday, November) as cues in a line bisection task. Fifty-seven healthy adults listened to temporal words and then moved a mouse cursor to the perceived midpoint of a horizontally presented line. We measured movement trajectories, initial line intersection coordinates, and final bisection response coordinates. We found movement trajectory displacements for left- vs. right-biasing hour and day cues. Initial line intersections were biased specifically by month cues, while final bisection responses were biased specifically by hour cues. Our findings offer general support to the notion of horizontal space-time associations and suggest further investigation of the exact chronometry and strength of this association across individual time units.
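As a minimal, hypothetical illustration of the kind of bias measure this abstract describes (not the authors' analysis code), the sketch below computes the signed deviation of final bisection responses from the true line midpoint for left- versus right-biasing cues; all coordinates and condition labels are made up.

```python
import numpy as np

# Hypothetical final bisection x-coordinates (pixels) per cue condition;
# negative bias = response left of the true midpoint.
true_midpoint_x = 512.0
responses = {
    "left_cue":  np.array([498.0, 505.0, 501.0, 507.0]),   # e.g. cues like "9 a.m.", "Monday"
    "right_cue": np.array([516.0, 520.0, 511.0, 518.0]),   # e.g. cues like "8 p.m.", "Saturday"
}

for condition, xs in responses.items():
    bias = xs - true_midpoint_x
    print(f"{condition}: mean bias = {bias.mean():+.1f} px (SD = {bias.std(ddof=1):.1f})")
```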
... These basic observations and behavioural differences clearly relate to language. Psychological (Bargh et al. 1996; Loftus and Palmer 1974) and psycholinguistic studies (Kamide et al. 2016; Lindsay et al. 2013; Matlock 2004; Richardson and Matlock 2007; Speed and Vigliocco 2014; Speed et al. 2017) provide evidence that speed information encoded in motion descriptions associates with the visual processing of a scene. For instance, when fast motion is expressed, participants tend to focus on the endpoints of motion more quickly than when slow motion is expressed (Lindsay et al. 2013; Speed and Vigliocco 2014). ...
Article
Full-text available
In this study, we investigate the expression of speed—one of the principal dimensions of manner—in relation to the expression of space in Estonian, a satellite-framed and morphology-rich language. Our multivariate and extensive corpus analysis is informed by asymmetries attested in languages with regard to expressing space (the goal-over-source bias) and speed (the fast-over-slow bias) where we attempt to explicitly link the two. We demonstrate moderate speed effects in the data in that fast motion verbs tend to combine with Goal, and slow motion verbs with Location and Trajectory expressions, making verbs of fast motion similar to goal verbs in their clausal behaviour. We also show that semantic congruency (i.e., expressing semantic information repeatedly in motion clauses) overrides the goal-over-source bias. That is, although verbs also occur in diverse patterns, they often combine with semantic units that mirror their meaning: goal verbs tend to combine with Goal, source verbs with Source, and manner verbs with Manner expressions. Such semantic congruency might serve as a tool for construal and, thus, is an important issue for future research.
... Previous eye-tracking work in the domain of motion events has revealed that speakers extract path of motion from similar events by predictively fixating on a goal object (Bunger et al., 2012, 2016, 2021; Papafragou et al., 2008; Trueswell & Papafragou, 2010) rather than tracing the trajectory of motion with their eyes, since people rarely fixate on empty space (see also Kamide, Lindsay, Scheepers, & Kukona, 2016). Thus, the events that involved a goal-directed path (to, into, or past) served as the target motion events. ...
Article
Full-text available
Speakers' visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers' speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
... Participants in the experiment by Lindsay, Scheepers, and Kamide (2013) on average looked more often and longer at the trail during "along the trail" following "The student will stagger" than "The student will run". By contrast, for sentences containing run-verbs (compared with slow-motion stagger-verbs), participants looked earlier to the goal as they listened to the verb (see also Spivey & Geng, 2001; Kamide, Lindsay, Scheepers, & Kukona, 2016; Speed & Vigliocco, 2014, for related evidence). Time curve graphs and analyses of looks showed that these effects emerged during and just after the verb. ...
Article
Full-text available
Abundant empirical evidence suggests that visual perception and motor responses are involved in language comprehension ('grounding'). However, when modeling the grounding of sentence comprehension on a word-by-word basis, linguistic representations and cognitive processes are rarely made fully explicit. This article reviews representational formalisms and associated (computational) models with a view to accommodating incremental and compositional grounding effects. Are different representation formats equally suitable and what mechanisms and representations do models assume to accommodate grounding effects? I argue that we must minimally specify compositional semantic representations, a set of incremental processes/mechanisms, and an explicit link from the assumed processes to measured behavior. Different representational formats can be contrasted in psycholinguistic modeling by holding the set of processes/mechanisms constant; contrasting different processes/mechanisms is possible by holding representations constant. Such psycholinguistic modeling could be applied across a wide range of experimental investigations and complement computational modeling.
... These tendencies may be counteracted by explicitly verbally cueing these picture elements in concurrent verbal explanations such as audio guides or guided tours, thereby guiding the viewers' attention to parts of a painting that may otherwise go unnoticed. In basic cognitive research, studies on the visual world paradigm (Huettig & Altmann, 2005) have demonstrated the influence of concept naming on gaze behavior and information search in pictures (Callan et al., 2013; Huettig & Altmann, 2005; Kamide et al., 2016; Lupyan & Ward, 2013; Yee & Sedivy, 2006). For example, pictures of objects presented below the threshold of conscious perception by means of continuous flash suppression were recognized more accurately when they were simultaneously and correctly named (Lupyan & Ward, 2013). ...
... Huettig and Altmann (2005) conclude that the subjects' gaze fixations are controlled by the fit between word meaning and the mental representation of the object in the visual field. Studies using sentences rather than individual object words yielded similar results (Callan et al., 2013; Kamide et al., 2016). For example, Callan et al. (2013) presented auditory sentences to their subjects in which the characters acted either in a morally good or a morally bad way. ...
... Taken together, the present results on visual attention corroborate and extend results from the visual world paradigm using brief verbal cues (Callan et al., 2013; Huettig & Altmann, 2005; Kamide et al., 2016; Lupyan & Ward, 2013; Yee & Sedivy, 2006), demonstrating that verbal cueing triggers viewers to focus attention on the respective object in a given scene. Hence, while it has been argued that works of art evoke a specific mode of processing (Alvarez et al., 2015; Wagner et al., 2014), the present findings indicate that the effects of verbal cueing apply to the conditions of aesthetic perception as well. ...
Article
Full-text available
Verbal explanations in the form of audio or personal guides are widely common in art museums; however, their influence on cognitive processing and understanding artworks has been rarely examined empirically. Based on the model of aesthetic appreciation and aesthetic judgments and the cognitive model of multimedia learning, we conducted an experimental study on the influence of audio explanations on the cognitive processing of artworks. In a 2 × 2 design with verbal cueing (verbally uncued vs. verbally cued picture elements) and saliency (low vs. high salient picture elements) as the within-subject variables, gaze coherence, fixation times on the picture elements, retention, and transfer performance were measured for processing 2 historical paintings. The results show that gaze coherence was higher at time points of verbally cueing picture elements than at time points of not verbally cueing them. Furthermore, the fixation times on verbally cued picture elements were longer than on verbally uncued picture elements. This effect was stronger for high than for low salient picture elements across the 2 paintings. Retention of the picture elements and transfer performance was better for verbally cued than verbally uncued picture elements, and prior disadvantages of particular picture elements with regard to retention and transfer could be compensated by verbally cueing them in the audio explanation. The results are discussed with regard to their theoretical contributions and practical implications.
... The current study used the visual world paradigm (VWP), a method that has been used previously to examine the online mapping between event representation and unfolding language (e.g., Altmann & Kamide, 1999; Chambers et al., 2004; Kamide et al., 2003; Kamide et al., 2016; Knoeferle et al., 2005; Kukona, Altmann, & Kamide, 2014). Converging evidence suggests that the VWP is sensitive to the dynamic mapping between language and both event and object representations. ...
... The visual world paradigm has been used extensively in psycholinguistic research to show that participants incorporate cues from syntax, semantics and world knowledge to constrain the available set of objects, and move their eyes to an appropriate visual object before it has been mentioned in the audio (e.g. Altmann & Kamide, 2007, 2009; Kamide, Lindsay, Scheepers, & Kukona, 2016). For example, it has been shown that participants are more likely to look at an empty glass of wine compared to a full glass of beer when hearing the sentence "the man has drunk all of…", and vice versa for "the man will drink all of…" (Altmann & Kamide, 2007). ...
Article
Full-text available
Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults and tested their timecourse in 2 preregistered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker's voice and message were either consistent or inconsistent (e.g., "When we go shopping, I usually look for my favorite wine," spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g., wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias toward the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker's voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group compared to the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. Overall, contrary to research that has characterized autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information and were comparably sensitive to speaker-meaning inconsistency effects. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
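As a rough sketch of how anticipatory looks of this kind are typically quantified in the visual world paradigm (not code from the study above), the example below bins fixation samples relative to the onset of the disambiguating word and computes the proportion of samples falling on a target region per bin; the function name, bin size, window, and sample data are assumptions.

```python
import numpy as np

def fixation_proportions(times_ms, rois, target_roi, bin_ms=200,
                         window=(-2400, 0)):
    """Proportion of samples on `target_roi` per time bin, time-locked to
    the disambiguating word onset (t = 0 ms).

    times_ms: sample times relative to disambiguation onset (ms).
    rois: region-of-interest label for each sample.
    Returns (bin_centres, proportions); bins without samples yield NaN.
    """
    times_ms = np.asarray(times_ms, dtype=float)
    rois = np.asarray(rois)
    edges = np.arange(window[0], window[1] + bin_ms, bin_ms)
    centres, props = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (times_ms >= lo) & (times_ms < hi)
        n = in_bin.sum()
        props.append((rois[in_bin] == target_roi).mean() if n else np.nan)
        centres.append((lo + hi) / 2)
    return np.array(centres), np.array(props)

# Hypothetical samples: looks shift from "other" to the voice-consistent
# object well before the disambiguating word is heard.
times = np.arange(-2400, 0, 50)
rois = np.where(times < -1800, "other",
                np.where(times < -900, "inconsistent", "consistent"))
print(fixation_proportions(times, rois, "consistent"))
```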