Alexis Hervais-Adelman
University of Zurich | UZH · Psychologisches Institut

PhD

About

57 Publications
12,598 Reads
1,639 Citations
Introduction
My interests encompass a broad variety of topics in the field of neurolinguistics. They include the neuroscience of simultaneous interpretation, the relationship between executive functions and multilingual control, and the mechanisms of degraded speech comprehension.
Additional affiliations
March 2018 - present
University of Zurich
Position
  • Professor (Assistant)
Description
  • Head of the Neurolinguistics group.
April 2016 - March 2018
Max Planck Institute for Psycholinguistics
Position
  • Research Associate
February 2014 - December 2020
University of Geneva
Position
  • Lecturer

Publications (57)
Article
Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated...
Article
Full-text available
We used functional magnetic resonance imaging (fMRI) to examine the neural basis of extreme multilingual language control in a group of 50 multilingual participants. Comparing brain responses arising during simultaneous interpretation (SI) with those arising during simultaneous repetition revealed activation of regions known to be involved in speec...
Article
Full-text available
Simultaneous interpreting is a complex cognitive task that requires the concurrent execution of multiple processes: listening, comprehension, conversion of a message from one language to another, speech production, and self-monitoring. This requires the deployment of an array of linguistic and cognitive control mechanisms that must coordinate the v...
Article
Full-text available
Speech is the most important signal in our auditory environment, and the processing of speech is highly dependent on context. However, it is unknown how contextual demands influence the neural encoding of speech. Here, we examine the context dependence of auditory cortical mechanisms for speech encoding at the level of the representation of fundame...
Preprint
Full-text available
In the absence of a task, the brain at rest spontaneously displays activity that reflects features of the underlying neural substrate. Examination of inter-areal coupling of resting state oscillatory activity reveals that the brain's resting activity is composed of functionally connected networks, which differ depending upon oscillatory frequency,...
Article
Full-text available
Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus represent...
Article
Full-text available
Resting brain (rs) activity has been shown to be a reliable predictor of the level of foreign language (L2) proficiency younger adults can achieve in a given time-period. Since rs properties change over the lifespan, we investigated whether L2 attainment in older adults (aged 64–74 years) is also predicted by individual differences in rs activity,...
Article
Full-text available
There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in the neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question whether individual differences in the induced ele...
Article
A new study using electroencephalography and functional magnetic resonance imaging suggests that dogs and humans may segment speech in similar ways.
Chapter
Conference interpreting demands the coordination of multiple cognitive processes required to attend to a source message, process that source message, convert it to the target language and ultimately produce the target utterance. This chapter focuses on simultaneous (as opposed to consecutive) interpreting, which has the particular demand of requiri...
Preprint
Full-text available
What processes lead to categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. We used a binaur...
Article
Full-text available
Linguistic labels exert a particularly strong top-down influence on perception. The potency of this influence has been ascribed to their ability to evoke category-diagnostic features of concepts. In doing this, they facilitate the formation of a perceptual template concordant with those features, effectively biasing perceptual activation towards th...
Preprint
Full-text available
Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing and enhances brain responses to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds o...
Article
Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhe...
Article
Purpose: An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)-processed speech. The aim of this experiment was to provide...
Article
The resting human brain exhibits spontaneous patterns of activity that reflect features of the underlying neural substrate. Examination of interareal coupling of resting-state oscillatory activity has revealed that the brain's resting activity is composed of functional networks, whose topographies differ depending on oscillatory frequency, suggesti...
Article
Full-text available
In our study we analyse the online processing of visual-verbal input during simultaneous interpreting with text. To that end, we compared 15 professional interpreters’ eye movements during simultaneous interpreting with text (SIMTXT) to a baseline collected during reading while listening (RWL). We found that interpreters have a preference for a vis...
Article
The role of neuronal oscillations in the processing of speech has recently come to prominence. Since resting-state (RS) brain activity has been shown to predict both task-related brain activation and behavioural performance, we set out to establish whether inter-individual differences in spectrally-resolved RS-MEG power are associated with variatio...
Article
Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integratio...
Article
Full-text available
An amendment to this paper has been published and can be accessed via a link at the top of the paper. The original and corrected figures are shown in the accompanying Publisher Correction.
Article
Full-text available
Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnet...
Preprint
The role of neuronal oscillations in the processing of speech has recently come to prominence. Since resting-state (RS) brain activity has been shown to predict both task-related brain activation and behavioural performance, we set out to establish whether inter-individual differences in spectrally-resolved RS-MEG power are associated with variatio...
Article
In the absence of a task, the brain at rest spontaneously displays activity that reflects features of the underlying neural substrate. Examination of inter-areal coupling of resting state oscillatory activity reveals that the brain’s resting activity is composed of functionally connected networks, which differ depending upon oscillatory frequency,...
Article
Full-text available
The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, a...
Preprint
Full-text available
The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, a...
Article
Full-text available
Visual selective attention operates through top-down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top-down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity chang...
Preprint
Visual selective attention operates through top-down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top-down signals on local processing in primary visual cortex (V1) remain poorly understood. In the present work, we characterize the interplay between large-scale interactions and local activit...
Article
Simultaneous interpretation is a complex cognitive task that not only demands multilingual language processing, but also requires application of extreme levels of domain-general cognitive control. We used MRI to longitudinally measure cortical thickness in simultaneous interpretation trainees before and after a Master's program in conference interp...
Conference Paper
Full-text available
We investigated how dynamic, directed interactions among cortical areas orchestrate task-specific processing of visual stimuli. We recorded separate fMRI and high-density EEG while participants saw oriented gratings under two task conditions. In the “attended” condition participants discriminated the orientation of the grating, in the “ignored” con...
Article
Full-text available
We studied mutual influences between native and non-native vowel production during learning, i.e., before and after short-term visual articulatory feedback training with non-native sounds. Monolingual French speakers were trained to produce two non-native vowels: the Danish /ɔ/, which is similar to the French /o/, and the Russian /ɨ/, which is diss...
Conference Paper
Full-text available
Introduction: Visual perception evokes activity in a distributed set of brain regions, from low-level sensory areas to high-level processing units. Throughout this broad perceptual network, neural signals can be strongly modulated by top-down attentional influences: attending to specific features, such as color or motion, selectively increases act...
Article
Full-text available
Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way,...
Article
Full-text available
Fast and automatic behavioral responses are required to avoid collision with an approaching stimulus. Accordingly, looming stimuli have been found to be highly salient and efficient attractors of attention due to the implication of potential collision and potential threat. Here, we address the question of whether looming motion is processed in the...
Article
Full-text available
Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the...
Article
Full-text available
Figure 5 of the article by Becker et al. (2013) contained a minor error, which we hereby rectify. In the original figure, at the bottom left of panel C, the indication of the sagittal section used for display of the inverse solution is incorrect. We therefore re-submit Figure 5 with ...
Article
Full-text available
Non-conscious visual processing of different object categories was investigated in a rare patient with bilateral destruction of the visual cortex (V1) and clinical blindness over the entire visual field. Images of biological and non-biological object categories were presented consisting of human bodies, faces, butterflies, cars, and scrambles. Beha...
Article
Full-text available
The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they...
Article
Full-text available
Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be pr...
Article
Native listeners make use of higher-level, context-driven semantic and linguistic information during the perception of speech-in-noise. In a recent behavioural study, using a new paradigm that isolated the semantic level of speech by using words, we showed that this native-language benefit is at least partly driven by semantic context (Golestani et...
Article
We investigated localization performance of simple targets in patient TN, who suffered bilateral damage of his primary visual cortex and shows complete cortical blindness. Using a two-alternative forced-choice paradigm, TN was asked to guess the position of left-right targets with goal-directed and discrete manual responses. The results indicate a...
Article
We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naïve listeners to comprehend, but can be readily learned with appro...
Article
Full-text available
In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multi...
Article
Full-text available
Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor,...
Article
Behavioral evidence supports the idea that perception is guided by mechanisms that compute an input's most probable interpretation. For example, four-band noise-vocoded speech, which is largely unintelligible to naive listeners, becomes perceptually clear when listeners possess prior knowledge of the signal content - a phenomenon we call "pop-out"....
Article
Functional imaging and TMS studies show that motor and premotor cortex responds to heard speech though the functional significance of this response is unclear. Three recent fMRI studies, showing modulation of motor responses to heard speech in the absence of overt spoken or manual responses, may shed light on how regions typically associated with s...
Article
A recent fMRI study on speech-sound processing by Uppenkamp et al. [Neuroimage, 31(3), 1284-1296 (2006)] revealed that regions of the left and right superior temporal gyri (STG) and anterior superior temporal sulci (STS) respond preferentially to speech-like stimuli. Hervais-Adelman et al. [BSA London (2007)] extended this research to investigate...
Article
Full-text available
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report...
Article
Full-text available
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was...
Article
Full-text available
This region may be an initial stage of speech processing, but it is unclear whether it is specific to vowels or general to all speech. In the present study, we extend this investigation by studying cortical responses to consonant-vowel (CV) and vowel-consonant (VC) pairs. By comparing activations elicited by CV and VC stimuli with those produced by...

Projects (2)
Project
The lab has a strong and longstanding interest in multilingualism research. We aim to understand the brain functional and structural differences underlying bi- and multilingualism, at different levels of language processing (i.e., phonological, lexico-semantic, syntactic) and also with respect to the executive control of language, not only in polyglots but also in language experts (e.g., simultaneous interpreters). In recent years we have worked on the functional and structural plasticity associated with training to become a simultaneous interpreter, i.e., an expert in ‘extreme language control’. We are planning a longitudinal study to explore the cognitive and neural changes associated with L2 learning, beyond the purely linguistic domain. In our work, we characterize bi-/multilingual language experience as a continuum, considering not only the amount of experience, age of acquisition, and usage of each language as continuous measures, but also the typological distance between the languages spoken, with respect to phonology, lexico-semantics, and syntax. We strive towards replication of findings across independent datasets and, together with international collaborators, are seeking to bring a larger body of data together with a view to a larger-scale meta-analysis effort.
Project
Quantifying attentional phenomena during simultaneous interpreting with text