Martijn Baart

Tilburg University | UVT · Department of Cognitive Neuropsychology

PhD

About

49 Publications
6,060 Reads
631 Citations
Additional affiliations
January 2016 - present
Tilburg University
  • Postdoc
April 2012 - present
September 2010 - November 2010
Haskins Laboratories
  • Visiting PhD student

Publications (49)
Article
Full-text available
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, a...
Article
Full-text available
Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was compara...
Article
Lipread speech suppresses and speeds up the auditory N1 and P2 peaks, but these effects are not always observed or reported. Here, the robustness of lipread-induced N1/P2 suppression and facilitation in phonetically congruent audiovisual speech was assessed by analyzing peak values that were taken from published plots and individual data. To determ...
Article
Full-text available
Perceiving linguistic input is vital for human functioning, but the process is complicated by the fact that the incoming signal is often degraded. However, humans can compensate for unimodal noise by relying on simultaneous sensory input from another modality. Here, we investigated noise-compensation for spoken and printed words in two experiments....
Article
Full-text available
When listening to distorted speech, does one become a better listener by looking at the face of the speaker or by reading subtitles that are presented along with the speech signal? We examined this question in two experiments in which we presented participants with spectrally distorted speech (4-channel noise-vocoded speech). During short training...
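As a concrete illustration of the distortion named in this abstract, the sketch below noise-vocodes a signal into four bands: band-pass the speech, extract each band's slow amplitude envelope, and use that envelope to modulate band-limited noise. This is a generic vocoder, not the study's exact stimulus pipeline; the band edges, filter orders, and 30 Hz envelope cutoff are illustrative assumptions.

```python
# Minimal 4-channel noise vocoder sketch (parameters are assumptions,
# not the study's settings). Requires fs > 2 * f_hi.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Keep only the slow per-band amplitude envelope of `speech`,
    imposed on band-limited noise (spectrally degraded speech)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        env = np.abs(hilbert(band))  # amplitude envelope via Hilbert transform
        env = sosfilt(butter(2, 30.0, btype="lowpass", fs=fs, output="sos"), env)
        # Modulate band-passed noise with the smoothed speech envelope.
        out += sosfilt(sos, noise) * np.clip(env, 0.0, None)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return out * rms(speech) / (rms(out) + 1e-12)  # match input loudness
```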
Article
Full-text available
Humans’ extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5 to 27 years) track phrasal and syllabic structures in connected speech mixed w...
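One common way to quantify this kind of tracking is cerebro-acoustic coherence between a cortical signal and the speech amplitude envelope, evaluated in phrase-rate and syllable-rate bands. The sketch below uses synthetic stand-in signals and assumed band definitions; it does not reproduce the study's actual MEG analysis.

```python
# Illustrative cortical-tracking measure: coherence between one (synthetic)
# MEG channel and the speech envelope. Rates, bands, and fs are assumptions.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 200.0                                   # resampled rate (assumption)
t = np.arange(0, 60, 1 / fs)
speech = np.random.randn(len(t))             # stand-in for the audio waveform
envelope = np.abs(hilbert(speech))           # broadband amplitude envelope
meg = envelope + np.random.randn(len(t))     # stand-in for one MEG channel

f, coh = coherence(meg, envelope, fs=fs, nperseg=int(4 * fs))
syllabic = coh[(f >= 4) & (f <= 8)].mean()   # syllable-rate tracking (4-8 Hz)
phrasal = coh[(f > 0) & (f <= 2)].mean()     # phrase-rate tracking (<2 Hz)
print(f"syllabic coherence: {syllabic:.3f}, phrasal: {phrasal:.3f}")
```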
Preprint
Full-text available
Humans’ extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5 to 27 years) track phrasal and syllabic structures in connected speech mixed w...
Article
We investigated how aging modulates lexico-semantic processes in the visual (seeing written items), auditory (hearing spoken items) and audiovisual (seeing written items while hearing congruent spoken items) modalities. Participants were young and older adults who performed a delayed lexical decision task (LDT) presented in blocks of visual, audito...
Poster
Full-text available
This poster reports on the effect of lip-reading on speech perception; we found that this effect is not related to reading skill.
Article
Full-text available
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, w...
Article
Full-text available
Background: One potentially relevant neurophysiological marker of internalizing problems (anxiety/depressive symptoms) is the late positive potential (LPP), as it is related to processing of emotional stimuli. For the first time, to our knowledge, we investigated the value of the LPP as a neurophysiological marker for internalizing problems and spec...
Article
The current study investigates how second language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status), and semantic relate...
Article
Full-text available
Humans quickly adapt to variations in the speech signal. Adaptation may surface as recalibration, a learning effect driven by error-minimisation between a visual face and an ambiguous auditory speech signal, or as selective adaptation, a contrastive aftereffect driven by the acoustic clarity of the sound. Here, we examined whether these aftereffect...
Article
In two experiments, we investigated the relationship between lexical access processes, and processes that are specifically related to making lexical decisions. In Experiment 1, participants performed a standard lexical decision task in which they had to respond as quickly and as accurately as possible to visual (written), auditory (spoken) and audi...
Article
Full-text available
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from—silent—visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated...
Article
Full-text available
Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified auditory percept (/da/). Recent studies have indicated that perception of the incongruent speech stimuli...
Article
Full-text available
Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory “self” advantages. We assessed whether there is a “self” advantage for phonetic recalibration (a lip-read driven cross-modal learning effec...
Article
Full-text available
Hyperscanning refers to obtaining simultaneous neural recordings from more than one person (Montague et al., 2002), which can be used to study interactive situations. In particular, hyperscanning with electroencephalography (EEG) is becoming increasingly popular since it allows researchers to explore the interactive brain with a high temporal resolut...
Preprint
Full-text available
Lip-reading is crucial to understand speech in challenging conditions. Neuroimaging investigations have revealed that lip-reading activates auditory cortices in individuals covertly repeating absent but known speech. However, in real-life, one usually has no detailed information about the content of upcoming speech. Here we show that during silent...
Article
Full-text available
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants’ sensitivity to the relationship between the auditory and visual components of audiovisual speech in their n...
Conference Paper
Full-text available
Audiovisual speech integration is reflected in the electrophysiological N1/P2 complex. In this study, we analyzed recordings of electroencephalographic brain activity from 28 subjects who were presented with combinations of auditory, visual, and audiovisual stimuli, using single trial analysis based on an independent component analysis procedure. W...
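A minimal sketch of this kind of ICA-based single-trial ERP pipeline, using MNE-Python. The file name, event codes, filter settings, and excluded component indices are placeholders chosen for illustration; the study's actual preprocessing is not specified in the abstract.

```python
# Hedged sketch: ICA cleanup followed by epoching around stimulus onsets
# to examine the auditory N1/P2 complex. All file/parameter choices are
# illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("sub01_av_speech_raw.fif", preload=True)  # hypothetical file
raw.filter(1.0, 40.0)  # band-pass before ICA (common practice, assumed here)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]          # blink/eye components; indices chosen by inspection
clean = ica.apply(raw.copy())

# Epoch around auditory/visual/audiovisual stimulus onsets.
events = mne.find_events(clean)
epochs = mne.Epochs(clean, events, tmin=-0.2, tmax=0.5, baseline=(None, 0))
evoked = epochs.average()     # N1/P2 visible around 100 ms / 200 ms post-onset
```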
Article
Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order...
Article
Auditory phoneme categories are less well-defined in developmental dyslexic readers than in fluent readers. Here, we examined whether poor recalibration of phonetic boundaries might be associated with this deficit. 22 adult dyslexic readers were compared with 22 fluent readers on a phoneme identification task and a task that measured phonetic recal...
Chapter
Full-text available
In 2003, we (Bertelson et al. 2003) reported that phonetic recalibration induced by McGurk-like stimuli can indeed be observed. We termed the phenomenon “recalibration” in analogy with the much better known “spatial recalibration,” as we considered it a readjustment or a fine-tuning of an already existing phonetic representation. In the same year,...
Article
Full-text available
Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration, Bertelson et al. 2003). Here, we examined phonetic recalibration while listeners were engaged in a visuospatial or verbal working memory task under different memory load conditions. Phonetic recalibration was--like selecti...
Article
Full-text available
It is well known that visual information derived from mouth movements (i.e., lipreading) can have profound effects on auditory speech identification (e.g., the McGurk effect). Here we examined the reverse phenomenon, namely whether auditory speech affects lipreading. We report that speech sounds dubbed onto lipread speech affect immediate identifica...
Article
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds ar...
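Sine-wave speech replaces the natural signal with a few sinusoids that follow the formant tracks, which is why it can be heard either as noise or as speech. The rough sketch below estimates per-frame formants with LPC and resynthesizes them as pure tones; the frame size, LPC order, and amplitude handling are assumptions (real SWS stimuli are typically built from hand-checked formant tracks).

```python
# Rough sine-wave speech sketch: per-frame LPC formant estimates replayed
# as sinusoids. Parameters are illustrative assumptions.
import numpy as np
import librosa

def sine_wave_speech(y, fs, frame=0.02, order=12, n_formants=3):
    """Resynthesize `y` as a sum of sinusoids following LPC-estimated formants."""
    hop = int(frame * fs)
    phases = np.zeros(n_formants)
    out = np.zeros(len(y))
    t = np.arange(hop) / fs
    for start in range(0, len(y) - hop, hop):
        seg = y[start:start + hop] * np.hanning(hop)
        if not np.any(seg):
            continue                                   # skip silent frames
        roots = np.roots(librosa.lpc(seg, order=order))
        roots = roots[np.imag(roots) > 0]              # one of each conjugate pair
        freqs = np.sort(np.angle(roots)) * fs / (2 * np.pi)
        freqs = freqs[freqs > 90][:n_formants]         # drop near-DC poles (heuristic)
        amp = np.sqrt(np.mean(seg ** 2))               # one shared amplitude per frame
        for k, f in enumerate(freqs):
            out[start:start + hop] += amp * np.sin(2 * np.pi * f * t + phases[k])
            phases[k] += 2 * np.pi * f * hop / fs      # rough phase continuity
    return out
```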
Article
Full-text available
Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance with lipread information telling what the phoneme should be (recalibration). Here, we tested the stability of lipread-induced recalibration over time. Listeners were exposed to an ambiguous sound halfway between /t/ and /p/ that was dubbed onto a fac...
Conference Paper
Full-text available
Lipreading can evoke an immediate bias on auditory phoneme perception (e.g. 6) and it can produce an aftereffect reflecting a shift in the phoneme boundary caused by exposure to an auditory ambiguous stimulus that is combined with non-ambiguous lipread speech (recalibration, (1)). Here, we tested the stability of lipread-induced recalibration over...
