Judith Holler
Radboud University · Donders Institute for Brain, Cognition, and Behaviour

Ph.D.

About

88 Publications · 32,009 Reads · 1,992 Citations
Introduction
My research investigates multimodal communication, with a focus on co-speech gestures, eye gaze and spoken language.
Additional affiliations
September 2010 - present
Max-Planck-Institut für Psycholinguistik
September 2000 - August 2010
The University of Manchester

Publications (88)
Article
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a...
Article
The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of...
Preprint
Conversation is a time-pressured environment. Recognising a social action (the ‘speech act’, such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions, since a long gap may be...
Article
Full-text available
During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal...
Article
Full-text available
Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D-models from building...
Preprint
Full-text available
Repair is a core building block of human communication, allowing us to address problems of understanding in conversation. Past research has uncovered the basic mechanisms by which interactants signal and solve such problems. However, the focus has been on verbal interaction, neglecting the fact that human communication is inherently multimodal...
Article
Full-text available
In human communication, when the speech is disrupted, the visual channel (e.g. manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that significantly bears on the status of the gestural modality. We test whether gesture and speech are...
Article
Full-text available
Height‐pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that,...
Article
Full-text available
Conversational turn taking in human interaction is incredibly rapid. The timing mechanisms underpinning this behaviour have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets in telephone conversations are serially dependent, such that...
Article
Full-text available
Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors...
Article
Full-text available
In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures...
Article
Full-text available
In a conversation, recognising the speaker’s social action (e.g., a request) early may help the potential following speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have...
Article
Full-text available
During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing...
Article
Full-text available
When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker’s mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether...
Chapter
Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited with planning their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel to listening...
Article
Full-text available
It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields...
Article
Full-text available
Parkinson's disease impairs motor function and cognition, which together affect language and communication. Cospeech gestures are a form of language-related actions that provide imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement...
Article
This study investigates the effectiveness of training preschoolers in order to enhance their social cognition and pragmatic skills. Eighty-three 3–4-year-olds were divided into three groups and listened to stories enriched with mental state terms. Then, whereas the control group engaged in non-reflective activities, the two experimental groups were...
Preprint
Full-text available
It is now widely known that much animal communication is conducted over several modalities, e.g., acoustic and visual, either simultaneously or sequentially. Despite this awareness, students of synchrony and rhythm interaction in animal communication have traditionally focused on a single modality at a time in their analyses. This paper reviews our...
Preprint
Conversational turn taking in humans has been deemed an exquisite feat of timing. Speakers tend to anticipate another speaker’s turn ending so as to rapidly initiate the next turn as a response. This rapid response often takes only around 200 ms, which is considerably less time than it would take to plan and initiate a next turn in response to a turn...
Conference Paper
Full-text available
In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during...
Preprint
Full-text available
In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during...
Preprint
Full-text available
FINAL VERSION AVAILABLE (OPEN ACCESS): https://www.nature.com/articles/s41598-021-95791-0 In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise...
Article
Full-text available
Children perceive iconic gestures, along with speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children can integrate both types of information simultaneously as soon as they are available (as adults do) or whether they initially process them separately and integrate...
Article
Full-text available
Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast...
Article
Full-text available
In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection...
Article
Full-text available
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely...
Preprint
Full-text available
In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection...
Article
The natural ecology of human language is face-to-face interaction comprising the exchange of a plethora of multimodal signals. Trying to understand the psycholinguistic processing of language in its natural niche raises new issues, first and foremost the binding of multiple, temporally offset signals under tight time constraints posed by a turn-taking...
Article
Full-text available
In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the...
Data
Long listener blink. Example of a long listener blink as used in face-to-face conversation [16]. (MOV)
Data
Example of a trial (long blink). Example of a trial in the nod with long blink condition, including the avatar’s question, the avatar’s nods with long blinks during the participant’s answer, and the avatar’s response following answer completion. (MOV)
Data
Dataset underlying the findings. (CSV)
Data
Example of a trial (short blink). Example of a trial in the nod with short blink condition, including the avatar’s question, the avatar’s nods with short blinks during the participant’s answer, and the avatar’s response following answer completion. (MOV)
Article
Speakers can adapt their speech and co-speech gestures based on knowledge shared with an addressee (common ground-based recipient design). Here, we investigate whether these adaptations are modulated by the speaker's age and cognitive abilities. Younger and older participants narrated six short comic stories to a same-aged addressee. Half of each story...
Article
Full-text available
The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast: typically a mere 200 ms elapse between a current and a next speaker...
Article
Full-text available
In this article, we examine gaze direction in responses to polar questions using both quantitative and conversation analytic (CA) methods. The data come from a novel corpus of conversations in which participants wore eye-tracking glasses to obtain direct measures of their eye movements. The results show that while most preferred responses are produced...
Article
Does blinking function as a type of feedback in conversation? To address this question, we built a corpus of Dutch conversations, identified short and long addressee blinks during extended turns, and measured their occurrence relative to the end of turn constructional units (TCUs), the location where feedback typically occurs. Addressee blinks were...
Data
Supplementary Materials for Kendrick, K. H., & Holler, J. (2017). Gaze direction signals response preference in conversation. Research on Language and Social Interaction, 50, 12–32. doi:10.1080/08351813.2017.1262120
Article
Full-text available
A combination of impaired motor and cognitive function in Parkinson's disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how...
Article
Full-text available
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative...
Conference Paper
Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both...
Conference Paper
In face-to-face conversation, addressees are not passive receivers but active collaborators providing vocal and visual feedback while speakers are speaking (hm-hm, nodding; Clark, 1996). The goal of this study was to investigate blinking in conversation (cf. Cummins, 2011) as one potential type of visual addressee feedback. We built a video-corpus...
Article
Full-text available
Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such...
Article
Full-text available
One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process ongoing turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turn...
Article
Full-text available
Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information...
Article
Full-text available
One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it...
Article
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written...
Article
Full-text available
Pain is a private and subjective experience about which effective communication is vital, particularly in medical settings. Speakers often represent information about pain sensation in both speech and co-speech hand gestures simultaneously, but it is not known whether gestures merely replicate spoken information or complement it in some way...
Article
Full-text available
Gesture is an important precursor of children's early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children's comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old...
Article
Full-text available
Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain...
Article
Full-text available
The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18–31 months. The children completed two tasks: (i) a structured measure of pretend (or ‘symbolic’) play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture. Additionally,...
Article
Full-text available
In this Research Topic, we aimed to develop our understanding of cognition by considering the diverse and dynamic relationship between the language we use, our bodily perceptions, and our actions and interactions in the broader environment. We received twenty-six articles that take very different approaches to exploring the question of how our...
Conference Paper
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer's communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether...
Article
Full-text available
Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm where neither stimuli nor task use linguistic labels, which raises the possibility that time is directional only when...
Article
Full-text available
Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects...
Article
Full-text available
The Lexical Retrieval Hypothesis proposes that gestures function at the level of speech production, aiding in the retrieval of lexical items from the mental lexicon. However, empirical evidence for this account is mixed, and some critics argue that a more likely function of gestures during lexical retrieval is a communicative one. The present study...