About
118 Publications
25,418 Reads
1,638 Citations
Introduction
Additional affiliations
January 2015 - October 2015
January 2009 - December 2014
January 2007 - December 2008
Publications (118)
More and more findings suggest a tight temporal coupling between (non-linguistic) socially interpreted context and language processing. Still, real-time language processing accounts remain largely elusive with respect to the influence of biological (e.g., age) and experiential (e.g., world and moral knowledge) comprehender characteristics and the i...
Language-processing accounts are beginning to accommodate different visual context effects, but they remain underspecified regarding differences between cues, both during sentence comprehension and subsequent recall. We monitored participants' eye movements to mentioned characters while they listened to transitive sentences. We varied whether speak...
The present work is a description and an assessment of a methodology designed to quantify different aspects of the interaction between language processing and the perception of the visual world. The recording of eye-gaze patterns has provided good evidence for the contribution of both the visual context and linguistic/world knowledge to language comp...
Abundant empirical evidence suggests that visual perception and motor responses are involved in language comprehension ('grounding'). However, when modeling the grounding of sentence comprehension on a word-by-word basis, linguistic representations and cognitive processes are rarely made fully explicit. This article reviews representational formali...
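To make the notion of word-by-word grounding concrete, here is a deliberately minimal Python sketch; the scene set, the word list, and the trivial referent check are illustrative assumptions, not one of the formalisms the article reviews.

scene_objects = {"gymnast", "journalist"}  # assumed depicted characters

def ground_word_by_word(words, scene_objects):
    # Process the sentence one word at a time, linking each word to a depicted
    # referent when one is available; a fuller model would also update
    # syntactic/semantic structure and shift visual attention at each step.
    interpretation = []
    for word in words:
        referent = word if word in scene_objects else None
        interpretation.append((word, referent))
    return interpretation

print(ground_word_by_word(["the", "gymnast", "punches", "the", "journalist"], scene_objects))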
In this paper, we discuss key characteristics and typical experiment designs of the visual-world paradigm and compare different methods of analysing eye-movement data. We discuss the nature of the eye-movement data from a visual-world study and provide data analysis tutorials on ANOVA, t-tests, linear mixed-effects model, growth curve analysis, clu...
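As a rough illustration of one analysis route mentioned here (a linear mixed-effects model over visual-world fixation data), the following Python sketch uses statsmodels; the file name and the column names (participant, condition, target_samples, total_samples) are assumptions, not the tutorial's materials.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant x item x condition, with counts of
# eye-tracking samples on the target object and total samples in the window.
df = pd.read_csv("fixations.csv")

# Empirical-logit transform of target-fixation proportions, a common choice
# because raw proportions are bounded and heteroscedastic.
df["elog"] = np.log((df["target_samples"] + 0.5) /
                    (df["total_samples"] - df["target_samples"] + 0.5))

# Fixed effect of condition, random intercepts by participant; a fuller analysis
# would also include by-item random effects (statsmodels fits one grouping factor).
model = smf.mixedlm("elog ~ condition", data=df, groups=df["participant"])
print(model.fit().summary())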
Events are not isolated but rather linked to one another in various dimensions. In language processing, various sources of information—including real-world knowledge, (representations of) current linguistic input and non-linguistic visual context—help establish causal connections between events. In this review, we discuss causal inference in relati...
To test effects of German on anticipation in Vietnamese, we recorded eye-movements during comprehension and manipulated i) verb constraints (different vs. similar in German and Vietnamese) and ii) classifier constraints (absent in German). In each of two experiments, participants listened to Vietnamese sentences like “Mai mặc một chiếc áo.” (‘Mai w...
In the present review paper by members of the collaborative research center “Register: Language Users' Knowledge of Situational-Functional Variation” (CRC 1412), we assess the pervasiveness of register phenomena across different time periods, languages, modalities, and cultures. We define “register” as recurring variation in language use depending...
Several studies have investigated the comprehension of decontextualized English nominal metaphors. However, not much is known about how contextualized, non-nominal, non-English metaphors are processed, and how this might inform existing theories of metaphor comprehension. In the current work, we investigate the effects of context and of sequential...
We investigated the brain responses associated with the integration of speaker facial emotion into situations in which the speaker verbally describes an emotional event. In two EEG experiments, young adult participants were primed with a happy or sad speaker face. The target consisted of an emotionally positive or negative IAPS photo accompanied by...
The Collaborative Research Center 1412 “Register: Language Users’ Knowledge of Situational-Functional Variation” (CRC 1412) investigates the role of register in language, focusing in particular on what constitutes a language user’s register knowledge and which situational-functional factors determine a user’s choices. The following paper is an extr...
In interpreting spoken sentences in event contexts, comprehenders both integrate their current interpretation of language with the recent past (e.g., events they have witnessed) and develop expectations about future event possibilities. Tense cues can disambiguate this linking but temporary ambiguity in their interpretation may lead comprehenders t...
Research findings on language comprehension suggest that many kinds of non-linguistic cues can rapidly affect language processing. Extant processing accounts of situated language comprehension model these rapid effects and are only beginning to accommodate the role of non-linguistic emotional cues. To begin with a detailed characterization of dist...
When a word is used metaphorically (for example “walrus” in the sentence “The president is a walrus”), some features of that word's meaning (“very fat,” “slow-moving”) are carried across to the metaphoric interpretation while other features (“has large tusks,” “lives near the north pole”) are not. What happens to these features that relate only to...
Age has been shown to influence language comprehension, with delays, for instance, in older adults' expectations about upcoming information. We examined to what extent expectations about upcoming event information (who-does-what-to-whom) change across the lifespan (in 4- to 5-year-old children, younger, and older adults) and as a function of differ...
Previous ERP research suggests that native language processing mechanisms for role-relation versus verb-action congruence differ. In a picture-sentence verification task, Knoeferle et al. (2014) asked participants to first inspect a clipart scene with, for instance, a gymnast punching a journalist. Subsequently, a sentence about these characters was...
We used the visual world paradigm to investigate online processing of verb and classifier constraints in L1 Vietnamese and L2 Vietnamese (L1 German) users. We tested whether L2 users process constraints that are L2 specific like L1 users. L1 users used both verb and classifier constraints to anticipate upcoming objects. Early German-Vietnamese bili...
State of the art: Overt verification (hearing piano and matching it to its referent) occurs incrementally in native language comprehension and is distinct by type (e.g., lexical verb-action relations are processed distinctly from compositional role relations). For language learning, verification might also be relevant. Learners must (i) identify wo...
Predicting variability in context effects is a timely enterprise considering that psycho- and neurolinguistic research has assessed how language processing depends on the perceived context, the body, and long-term linguistic knowledge of the language user. The current evidence suggests that some context effects may be systematically more robust tha...
When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely...
In second language (L2) learning, both language transfer and visual context have been shown to play a role. However, few empirical studies have compared L2 learning across the lifespan (e.g., 18-65 years) to assess the effect of individual differences in L2 learners (e.g., age but also cognitive abilities) on learning success. We investigated wheth...
This research was inspired by visual context effects on real-time language processing (Knoeferle et al., 2005; Tanenhaus et al., 1995) as well as on language learning (Koehne et al., 2015; MacDonald et al., 2017; Yu et al., 2011) and by language transfer in language learning (Jiang, 2002; Jiang, 2004). We investigated whether German (the first lang...
Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depi...
[This corrects the article DOI: 10.3389/fpsyg.2018.00718.].
Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as ‘similarity is closeness’ would, for instance, involve car...
We argue for the integration of socially interpreted context (including speaker information) and comprehender characteristics into real-time language processing accounts. Extant real-time processing accounts are underspecified regarding the integration of social (speaker) information and comprehender characteristics. We extend the Coordinated Inter...
This study investigates the effects of event depictions on Vietnamese phrase learning in young German adults. Adults (N=64, L1=German, no L2 < age 6, ages 18-31, no prior knowledge of Vietnamese) participated in two experiments (2x2 between-participant factorial design) set up in Presentation with 32 Vietnamese verb-noun phrases balanced in four le...
Eye tracking research on situated language comprehension has shown that participants rely more on a recent event than on a plausible future event during spoken sentence comprehension. When people saw a recent action event and then they listened to a German (NP1-Verb-Adv-NP2) past or futuric present tense sentence, they preferentially looked at the...
In two visual-world eye-tracking studies, we investigated the influence of prosody and case marking on children's and adults' thematic role assignment. We assigned an SVO/OVS-biasing (vs. neutral) prosodic contour to unambiguously case-marked German subject-verb-object (SVO) and object-verb-subject (OVS) sentences respectively. Scenes depicted ambi...
We assessed whether younger and older adults can make use of direct action-related visual information on the one hand, and contextual social information on the other hand, for real-time sentence comprehension of non-canonical German OVS sentences.
Psycholinguistic studies investigating grammatical and semantic (i.e. biological) gender knowledge effects on language comprehension have often manipulated the match between a linguistic context and words (e.g. pronouns) in a subsequent sentence (e.g. finding the word 'her' after a sentence talking about a 'policeman'; Hammer et al., 2008; Kreiner...
It is known that the comprehension of spatial prepositions involves the deployment of visual attention. For example, consider the sentence “The salt is to the left of the stove”. Researchers [29, 30] have theorized that people must shift their attention from the stove (the reference object, RO) to the salt (the located object, LO) in order to compr...
In this review we focus on the close interplay between visual contextual information and real-time language processing. Crucially, we show that not only college-aged adults but also children and older adults can profit from visual contextual information for language comprehension. Yet, given age-related biological and experiential changes, c...
Background: Prior visual-world research has demonstrated that emotional priming of spoken sentence processing is rapidly modulated by age. Older and younger participants saw two photographs of a positive and of a negative event side-by-side and listened to a spoken sentence about one of these events. Older adults’ fixations to the mentioned (positiv...
We investigated how constraints imposed by the concurrent visual context modulate the effects of prior gender and action cues as well as of stereotypical knowledge during situated language comprehension. Participants saw videos of female or male hands performing an action and then inspected a display showing the faces of two potential agents (one m...
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers’ visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of five experiments demonstrates that the presentation of spatia...
Eye movements during linguistic-visual conflicts
In this chapter, I will review recent research on visually situated language comprehension, and in doing so identify key characteristics of situated language comprehension. More specifically I will argue that both active visual context effects and the temporally coordinated interplay between visual attention and language comprehension are character...
see http://www.jbe-platform.com/content/books/9789027267481
Full text: https://edoc.hu-berlin.de/handle/18452/20649
Accounts of visually situated language processing accommodate an active role of the visual context and a tight temporal coordination between comprehension, visual attention, and visual context effects in young adults. By contrast, these psycholinguistic accounts have not yet modelled variation due to age groups (e.g., children), social (e.g., class...
For a downloadable version see http://amor.cms.hu-berlin.de/~knoeferp/Homepage__Psycholinguistics/Publications_files/knoeferle_guerra_LLC_pre-print.pdf
Recent experimental evidence suggests that spatial distance between two depicted objects in a non-referential visual context (i.e., when neither spatial distance nor the objects were mentioned) can rapidly and incrementally modulate the processing of semantic similarity between and-coordinated subject noun phrases in a sentence. The present researc...
Languages differ along various dimensions with respect to how they mark argument relations. In about 35% of the world's languages, the subject appears prior to the verb, which is followed by the object. These languages tend to have strict word order and scarce morphosyntactic marking (Dryer, 2013a; Dryer, 2013b; Iggesen, 2005). In 42% of the world's la...
Visual world eye-tracking studies have shown that when people saw a “recent” action (e.g., Fig. 1, 1-A) performed before they listened to a related sentence (e.g., German NP1-Verb-Adv-NP2), then they more often inspected a recent (vs. an alternative future, Fig. 1, 1-C) event target. This preference persists even when most events and sentences in the exp...
We assessed whether adults can make use of direct action-related visual information on the one hand, and contextual social information of varying degrees of naturalness on the other hand, for sentence comprehension of non-canonical German OVS sentences.
Visual attention can be directed by visual and linguistic information. It is not well understood how attention is directed when linguistic information conflicts with the visual scene. Knoeferle and Crocker (2006) established the coordinated interplay account model of sentence comprehension and linguistically mediated visual attention, but it did no...
Previous visual world eye-tracking studies have shown that when a sentential verb can refer (via tense information on the verb and on a following time adverb) to either a recent or a future action event performed by an actor, people inspected the target of the recent event more often than the (different) target of the future event. This 'recent ev...
Two visual world eye-tracking studies investigated the effect of emotions and actions on sentence processing. Positively emotionally valenced German non-canonical object-verb-subject (OVS) sentences were paired with a scene depicting three characters (agent-patient-distractor) as either performing the action described by the sentence, or not perfor...
Can actor gaze modulate the recent event preference during spoken sentence comprehension?
Visual-world eye-tracking studies show rapid visual-context effects on spoken comprehension. Participants prefer to inspect the target of a recently depicted event over the target of a future event when they listen to a related sentence (NP1-VERB-ADV-NP2), an...
The present chapter reviews the literature on visually situated language comprehension against the background that most theories of real-time sentence comprehension have ignored rich non-linguistic contexts. However, listeners' eye movements to objects during spoken language comprehension, as well as their event-related brain potentials (ERPs) have...
Spatial terms such as "above", "in front of", and "on the left of" are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and model...
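A toy Python sketch of the geometric intuition behind such graded spatial-term judgements follows; the canonical directions, the linear drop-off, and the example coordinates are illustrative assumptions and deliberately simpler than the attention-based models discussed here.

import math

# Canonical directions in degrees, assuming y increases upward.
CANONICAL = {"above": 90.0, "below": -90.0, "left of": 180.0, "right of": 0.0}

def acceptability(term, ro_xy, lo_xy):
    # Direction from the reference object (RO) to the located object (LO).
    dx, dy = lo_xy[0] - ro_xy[0], lo_xy[1] - ro_xy[1]
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest angular deviation from the term's canonical direction.
    deviation = abs((angle - CANONICAL[term] + 180) % 360 - 180)
    # Linear drop-off: 1 at the canonical direction, 0 at 90 degrees or more.
    return max(0.0, 1.0 - deviation / 90.0)

# "The salt is to the left of the stove": stove (RO) at (0, 0), salt (LO) at (-3, 0.5).
print(acceptability("left of", (0, 0), (-3, 0.5)))  # close to 1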
A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither ref...
In scholarly treatments of film, there is a curious separation of corporeality and textuality in reception research and theory. This becomes particularly evident in the rapidly growing body of emotion research in the audiovisual field. The bodily dimension of the medium of cinema comes into view, for instance in film phenomenology, but in...
Across three experiments, consumers’ brand search was facilitated by spatially non-informative sounds associated with the target brand. Response latencies and eye movements to the target brand were faster in the presence of congruent (vs. no) sound. This crossmodal facilitation effect held even for newly-learnt associations between brands and sonic...
Eye tracking, linking hypotheses and measures in language processing
Conditional analyses of eye movements
Improving linking hypotheses in visually situated language processing: combining eye movements and event-related brain potentials
Using eye-tracking, two studies investigated whether a dynamic vs. static emotional facial expression can influence how a listener interprets a subsequent emotionally-valenced utterance in relation to a visual context. Crucially, we assessed whether such facial priming changes with the comprehender's age (younger vs. older adults). Participants ins...
What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributi...
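To illustrate the contrast, here is a small Python sketch of a featural similarity measure (overlap between feature sets) next to a distributional one (cosine between vectors); the feature lists and vectors are made up for illustration and are not the study's measures or materials.

import math

def jaccard(features_a, features_b):
    # Featural similarity: shared features relative to all features of both concepts.
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

def cosine(u, v):
    # Distributional similarity: angle between co-occurrence-based vectors.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

towel = {"absorbs_water", "made_of_cloth", "found_in_bathroom"}
sponge = {"absorbs_water", "porous", "found_in_kitchen"}
print(jaccard(towel, sponge))                     # 1 shared feature out of 5 -> 0.2
print(cosine([0.2, 0.7, 0.1], [0.3, 0.6, 0.2]))   # high cosine similarity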
Studies on covert attention usually monitor participants' eye movements in order to prevent participants from moving their eyes away from a central fixation point. However, given our frequently dynamic attention behavior, keeping the gaze on a fixation point may be effortful and require attentional resources. If so, then trying to maintain fixation...
Recent evidence from eye tracking during reading showed that non-referential spatial distance presented in a visual context can modulate semantic interpretation of similarity relations rapidly and incrementally. In two eye-tracking reading experiments we extended these findings in two important ways; first, we examined whether other semantic domain...
An eye-tracking study compared the effects of actions (depicted as tools between on-screen characters) with those of a speaker's gaze and head shift between the same two characters. In previous research, each of these cues has rapidly influenced language comprehension on its own, but few studies have directly compared these two cues or, more genera...
D. Abashidze (dabashidze@cit-ec.uni-bielefeld.de), Pia Knoeferle (knoeferl@cit-ec.uni-bielefeld.de), Maria Nella Carminati (mcarmina@techfak.uni-bielefeld.de). Abstract: Previous eye tracking findings show that people preferentially direct their attention to the target of a recently depicted event compared with the target of a possible future event during the compre...
Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person's mood, personality, and social relationships w...
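As a purely illustrative Python sketch (not the authors' architecture), one way to express "empathy to different degrees" is to modulate a perceived emotion by mood, trait empathy, and relationship factors; all weights and input ranges below are assumptions.

def empathy_degree(perceived_intensity, mood, trait_empathy, relationship):
    # perceived_intensity, trait_empathy, relationship in [0, 1]; mood in [-1, 1].
    # The weights are arbitrary illustrative choices, not estimated parameters.
    modulation = 0.5 * trait_empathy + 0.3 * relationship + 0.2 * (mood + 1) / 2
    return max(0.0, min(1.0, perceived_intensity * modulation))

# A close friend's strong sadness, perceived while in a neutral mood:
print(empathy_degree(0.9, mood=0.0, trait_empathy=0.8, relationship=0.9))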
We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulat...
Empathy can be defined as the ability to perceive and understand others' emotional states. Neuropsychological evidence has shown that humans empathize with each other to different degrees depending on factors such as their mood, personality, and social relationships. Although artificial agents have been endowed with features such as affect, persona...
We report a series of eye-tracking studies investigating different facets of how seeing a speaker's gaze affects listeners' visual attention and comprehension. We compare the effect of speaker gaze to other cues in the linguistic and non-linguistic context, such as depicted actions and sentence structure. In addition, we discuss top-down influences...
During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects...