
Jyrki Tuomainen
- PhD
- University College London
About
96 Publications
11,011 Reads
2,633 Citations
Additional affiliations
August 2008 - September 2010
Publications (96)
In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L...
Introduction:
Abnormal facial growth is a recognized outcome in cleft lip and palate (CLP), resulting in a concave profile and a class III occlusal status. Maxillary osteotomy (MO) is undertaken to correct this facial deformity, and the surgery can impact speech articulation, although the evidence remains limited and ill-defined for the CLP popula...
The effects of threatening stimuli, including threatening language, on trait anxiety have been widely studied. However, whether anxiety levels have a direct effect on language processing has not been so consistently explored. The present study focuses on event‐related potential (ERP) patterns resulting from electroencephalographic (EEG) measurement...
Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG for highly proficient bilinguals when watching naturalistic materials. For each w...
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments as dominant paradigms focus on linguistic processing only. In two studies we presented video-clips of an actress producing naturalistic passages to pa...
Background:
The status of the velopharyngeal mechanism can be inferred from perceptual ratings of specified speech parameters. Several studies have proposed the measure of an overall velopharyngeal composite score based on these perceptual ratings and have reported good validity. The Cleft Audit Protocol for Speech-Augmented (CAPS-A) is a validate...
This study examined the effect of increasing visual perceptual load on auditory awareness for social and non-social stimuli in adolescents with autism spectrum disorder (ASD, n = 63) and typically developing (TD, n = 62) adolescents. Using an inattentional deafness paradigm, a socially meaningful (‘Hi’) or a non-social (neutral tone) critical stimu...
The present study attempts to identify how trait anxiety, measured as worry level, affects the processing of threatening speech. Two experiments using dichotic listening tasks were implemented, in which participants had to identify sentences that convey threat through three different information channels: prosody-only, semantic-only and both semantic...
Objective
To investigate the effect of maxillary osteotomy on velopharyngeal function in cleft lip and palate (CLP) using instrumental measures.
Design
A prospective study.
Participants
A consecutive series of 20 patients with CLP undergoing maxillary osteotomy by a single surgeon were seen at 0 to 3 months presurgery (T1), 3 months (T2), and 12...
Background:
Maxillary osteotomy is typically undertaken to correct abnormal facial growth in cleft lip and palate. The surgery can cause velopharyngeal insufficiency resulting in hypernasality. This study aims to identify valid predictors of acquired velopharyngeal insufficiency following maxillary osteotomy by using a range of perceptual and inst...
The present study attempts to identify how trait anxiety, measured as worry level, affects the processing of threatening speech. Two experiments using dichotic listening tasks were implemented, in which participants had to identify sentences that convey threat through three different information channels: prosody-only, semantic-only and both semantic a...
Communication naturally occurs in dynamic face-to-face environments where spoken words are embedded in linguistic discourse and accompanied by multimodal cues. Existing research supports predictive brain models where prior discourse but also prosody, mouth movements, and hand-gestures individually contribute to comprehension. In electroencephalogra...
Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale ( n = 334) replication study, by investigati...
Onomatopoeic words are widespread across the world’s languages. They represent a relatively simple iconic mapping: the phonological/phonetic properties of the word evoke acoustically related features of referents. Here, we explore the EEG correlates of processing onomatopoeia in English. Participants were presented with a written cue-word (e.g., leash) and...
Neuroscience findings have recently received critique on the lack of replications. To examine the reproducibility of brain indices of speech sound discrimination and their role in dyslexia, a specific reading difficulty, brain event-related potentials using EEG were measured using the same cross-linguistic passive oddball paradigm in about 200 dysl...
Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale ( N = 334) replication study, by investigati...
Music and speech both communicate emotional meanings in addition to their domain-specific contents. But it is not clear whether and how the two kinds of emotional meanings are linked. The present study is focused on exploring the emotional connotations of musical timbre of isolated instrument sounds through the perspective of emotional speech proso...
Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probabilit...
p-values for the articles for each laboratory and each channel.
r-values for the articles for each channel, computed across laboratories.
r-values for the nouns for each channel, computed across laboratories.
p-values for the nouns for each channel, computed across laboratories.
This file contains Supplementary Tables 1-3.
Supplementary Table 1 contains the sentence materials with cloze probabilities (0-100%) of articles and nouns, along with post-noun sentence endings, comprehension questions and expected answer. Of note, because expectedness of the noun is here determined by the cloze value of the preceding article, the...
r-values for the nouns for each laboratory and each channel.
r-values for the articles for each laboratory and each channel.
p-values for the articles for each channel, computed across laboratories.
Speech-in-noise (SPIN) perception involves neural encoding of temporal acoustic cues. Cues include temporal fine structure (TFS) and envelopes that modulate at syllable (Slow-rate ENV) and fundamental frequency (F0-rate ENV) rates. Here the relationship between speech-evoked neural responses to these cues and SPIN perception was investigated in old...
In current theories of language comprehension, people routinely and implicitly predict upcoming words by pre-activating their meaning, morpho-syntactic features and even their specific phonological form. To date the strongest evidence for this latter form of linguistic prediction comes from a 2005 Nature Neuroscience landmark publication by DeLong,...
Speech segments are delivered in an acoustically complex signal. This paper examines the role of acoustic complexity in speech-specific processing in the brain. Mismatch Negativity (MMN) is a deviance detection event-related response elicited by both speech and non-speech stimuli. Speech-specific MMN can further be sensitive to segment-driven categ...
Background:
The Dysarthria-in-Interaction Profile's potential contribution to the clinical assessment of dysarthria-in-conversation has been outlined in the literature, but its consistency of use across different users has yet to be reported.
Aims:
To establish the level of consistency across raters on four different interaction categories. That...
Facilitation of general cognitive capacities such as executive functions through training has stirred considerable research interest during the last decade. Recently we demonstrated that training of auditory attention with forced attention dichotic listening not only facilitated that performance but also generalized to an untrained attentional task...
Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically...
Recent work on visual selective attention has shown that individuals with Autism Spectrum Disorder (ASD) demonstrate an increased perceptual capacity. The current study examined whether increasing visual perceptual load also has less of an effect on auditory awareness in children with ASD. Participants performed either a high- or low load version o...
Background
Abnormal facial growth is a well-known sequela of cleft lip and palate (CLP) resulting in maxillary retrusion and a class III malocclusion. In 10–50% of cases, surgical correction involving advancement of the maxilla, typically by osteotomy methods, is required and is normally undertaken in adolescence when facial growth is complete. Current...
Swallowing and speech difficulties after treatment for advanced oral and oropharyngeal tumours are highly prevalent and often require a protracted period of rehabilitation.(1) Changes in function continue to occur beyond 12 months post treatment.(2) The disruption of normal anatomy caused by surgical excision and/or fibrotic changes following chemo...
The literature on the effect of age on saliva production, which has implications for health, quality of life, differential diagnosis, and case management, remains inconclusive. Physiological changes, motor and sensory, are frequently reported with increasing age. It was hypothesized that there would be a change in saliva production with older age....
Speech contains a variety of acoustic cues to auditory and phonetic contrasts that are exploited by the listener in decoding the acoustic signal. In three experiments, we tried to elucidate whether listeners rely on formant peak frequencies or whole spectrum attributes in vowel discrimination. We created two vowel continua in which the acoustic dis...
We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the...
Objective
To undertake a critical and systematic review of the literature on the impact of maxillary advancement on speech outcomes in order to identify current best evidence.
Design and Main Outcome Measures
The following databases were searched: PubMed, CINAHL, and The Cochrane Controlled Trials Register. In addition, reference lists were hand s...
Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonet...
Listeners identified 46 variants of synthesized Finnish /e:/, /i:/, /y:/, and /ø:/ vowels in the presence of pink noise at −9, −6, −3, and 0 dB SNR, and without noise. They further evaluated the goodness of the variants in the non-masking condition on a scale of 1–7. The identification of the highest scoring variants (prototypes) was compared with the lowe...
A prototype is the most typical representative of a given conceptual category. In speech perception, vowel prototypes are thought to play a role in the recognition of speech sounds [1]. Prototypes are learned automatically through sensory exposure. Learning to produce the native language takes place through babbling, as the infant practices speech by trying to produce...
Progress report presented at the Russian-Finnish Seminar in Phonetics, 16–17 November 2009, St. Petersburg State University, Department of Phonetics.
Melodic Intonation Therapy (MIT) has proved to be an effective treatment in severe Broca's aphasia. It has been suggested that due to its musical/prosodic content, it activates the intact right hemisphere which may have latent linguistic capabilities. We studied the effects of MIT vs. ordinary repetition performance on hemispheric brain perfusion a...
Speech contains a variety of acoustic cues to phonetic contrasts that are exploited by the listener in decoding the acoustic signal. In three experiments, we tried to elucidate whether listeners rely on formant peak frequencies, whole spectrum representations or phonological features when they discriminate vowels. Experiment I investigated discrimi...
Allergic rhinitis and asthma are common among university students. Inhalant allergies have been considered to be a risk factor contributing to voice disorders. The purpose of this pilot study was to determine if students with confirmed respiratory allergies have frequently occurring vocal symptoms. A questionnaire concerning the prevalence of vocal...
Lexical information can bias categorization of an ambiguous phoneme and subsequently evoke a shift in the phonetic boundary. Here, we explored the extent to which this phenomenon is perceptual in nature. Listeners were asked to ignore auditory stimuli presented in a typical oddball sequence in which the standard was an ambiguous sound halfway betwe...
The temporal dynamics of processing morphologically complex words was investigated by recording event-related brain potentials (ERPs) when native Finnish-speakers performed a visual lexical decision task. Behaviorally, there is evidence that recognition of inflected nouns elicits a processing cost (i.e., longer reaction times and higher error rates...
The left superior temporal cortex shows greater responsiveness to speech than to non-speech sounds according to previous neuroimaging studies, suggesting that this brain region has a special role in speech processing. However, since speech sounds differ acoustically from the non-speech sounds, it is possible that this region is not involved in spee...
Previous studies of students studying to be teachers have indicated that these students commonly have voice disorders. Ideally, voice disorders should be treated before students start their work as teachers, but the resources for this treatment are often limited. This study examines whether group voice therapy is effective for teacher students. Acc...
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visu...
We report on enhanced processing of speech sounds in congenitally and early blind individuals compared with normally seeing individuals. Two different consonant-vowel (CV) syllables were presented via headphones on each presentation. We used a dichotic listening (DL) procedure with pairwise presentations of CV syllables. The typical finding in this...
The development of a new vowel category was studied by measuring both automatic mismatch negativity and conscious behavioural target discrimination. Three groups, naïve Finns, advanced Finnish students of English, and native speakers of English, were presented with one pair of Finnish and three pairs of English synthetic vowels. The aim was to det...
Two experiments were conducted to determine whether vowel familiarity affects automatic and conscious vowel discrimination. Familiar (Finnish) and unfamiliar (Komi) vowels were presented to Finnish subjects. The good representatives of Finnish and Komi mid vowels were grouped into three pairs: front /e–ɛ/, central /ø–œ/, and back /o–ɔ/. Th...
We tested whether listener's knowledge about the nature of the auditory stimuli had an effect on audio-visual (AV) integration of speech. First, subjects were taught to categorize two sine-wave (sw) replicas of the real speech tokens /omso/ and /onso/ into two arbitrary nonspeech categories without knowledge of the speech-like nature of the sounds....
The purpose of the study was to evaluate the effects of the lingual nerve impairment on phonetic quality of speech by analyzing the main acoustic features of vowel sounds when the normal lingual nerve function was partly distorted by local anesthesia.
The study group consisted of 7 men, whose right side lingual nerve was anesthetized with 0.8 mL of...
Two experiments were conducted to find out whether automatic and conscious vowel discrimination differ as a function of their familiarity. In the first experiment listeners discriminated vowel pairs consisting of familiar (Finnish) and unfamiliar (Komi) vowels. Finnish and Komi both belong to the Finno-Ugric language family. Finnish has 8 vowels (f...
The study of speech perception utilizes various phonetic tests such as categorization, goodness rating, and discrimination experiments. These experiments can be supplemented by studies of the plastic changes in the phoneme representations of the brain. The same methods can be applied when the aim is to see how a foreign language is acquired. Moreov...
Judgement of the emotional tone of a spoken utterance is influenced by a simultaneously presented face expression. The time course of this integration was investigated by measuring the mismatch negativity (MMN). In one condition, the standard stimulus was an angry voice fragment combined with a (congruous) angry face expression. In the deviant pair...
The effect of word level prominence on detection speed of word boundaries in Finnish was investigated in two word spotting experiments. The results showed that the perceived stress was not a function of the fundamental frequency (F0) difference between the preceding syllable and the first syllable of the target word. Given the fast response times,...
Three experiments investigated the role of word stress and vowel harmony in speech segmentation. Finnish has fixed word stress on the initial syllable, and vowels from a front or back harmony set cannot co-occur within a word. In Experiment 1, we replicated the results of Suomi, McQueen, and Cutler (1997) showing that Finns use a mismatch in vowel...
The time course of lexical activation was investigated in two event-related potential (ERP) experiments in which subjects were performing a generalized phoneme monitoring task. Reaction time (RT) to a target phoneme in medial or final position of trisyllabic real word carriers were significantly faster than RTs to targets in corresponding positions...
An auditory event-related brain potential called mismatch negativity (MMN) was measured to study the perception of vowel pitch and formant frequency. In the MMN paradigm, deviant vowels differed from the standards either in F0 or F2 with equal relative steps. Pure tones of corresponding frequencies were used as control stimuli. The results indicate...
Event-related potentials were recorded from four aphasic subjects in order to study if discrimination of synthetic vowels is impaired by left posterior brain damage. A component called the mismatch negativity (MMN) which is assumed to reflect basic discriminatory processes of auditory stimuli was measured. In accordance with the hypothesis, two pat...
We have adapted two widely used aphasia tests, the Boston Diagnostic Aphasia Examination (BDAE) and the Boston Naming Test (BNT), to the Finnish language. The BDAE is the first extensive standardized aphasia test battery that will be published in Finnish. The test adaptations as well as reliability and validity information of the Finnish BDAE versi...
Three pure alexic patients were given reading practice with the multiple oral re-reading (MOR) technique (Moyer 1979). All patients read single words relatively fast, but differed from each other in the reading speed of texts. In addition, two of the patients (HT and TT) had no significant memory or visuospatial deficit whereas one patient (PA) exh...
This study examines the hypothesis that audio-visual integration of speech requires both expectation to perceive speech and sufficient attentional resources to allow multimodal integration. Audio-visual integration was measured by recording susceptibility to the McGurk effect whilst participants simultaneously performed a primary visual task under...
Seeing the talker's articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual i...
Dissertation, Katholieke Universiteit Brabant.