Isabel Schiller
RWTH Aachen University · Institute for Psychology
Researcher in the area of Auditory Cognition, with a special interest in voice perception
I am a postdoctoral researcher in the field of Psychology, specifically Auditory Cognition, with a particular interest in voice production and perception. Using a combination of behavioral and subjective methods as well as acoustic analyses, I investigate the effects of acoustic interference, such as noise or a talker's poor voice quality, on human perception and cognition. Recently, I have started to use immersive virtual reality to explore speech (in noise) perception from an audio-visual angle.
January 2017 - December 2020
The voice resynthesis system VQ-Synth is designed to increase the subjective breathiness (a facet of hoarseness) of speech recordings, thereby offering an approach to researching and treating functional dysphonia. The aim of this work was to evaluate VQ-Synth auditorily for the first time in a laboratory study. To this end, a listening experiment (N = 31) examined the influ...
From a cognitive-psychological perspective, listening to running speech is a demanding task, especially under adverse listening conditions. Although relevant for various communicative settings, it remains unclear how speech produced by a dysphonic (hoarse) talker might affect the listener’s cognitive performance. The aim of this study is to investi...
Classroom listening conditions are often characterized by high levels of background noise. Beyond that, every second teacher develops voice disorders during their career, meaning that many pupils are taught in impaired (e.g., hoarse or breathy) voices. This systematic review and meta-analysis aims at understanding how background noise and a speaker...
Purpose: Background noise and voice problems among teachers can degrade listening conditions in classrooms. The aim of this literature review is to understand how these acoustic degradations affect spoken language processing in 6- to 18-year-old children. Method: In a narrative report and meta-analysis, we systematically review studies that examin...
Immersion can be described as the "sense of being there" in a represented environment or situation. The phenomenon of immersion has, for example, been studied in the context of virtual reality, film, and literature. This interdisciplinary study aims at investigating text-related immersion in combination with varying auditory backgrounds. More speci...
Purpose: The aim of this study was to investigate children’s processing of dysphonic speech in a realistic classroom setting, under the influence of added classroom noise. Method: Normally developing 6-year-old primary-school children performed two listening tasks in their regular classrooms: a phoneme discrimination task to assess speech percepti...
Purpose: The aim of this study was to assess the suitability of imitated dysphonic voice samples for their application in listening tasks investigating the impact of speakers' voice quality on spoken language processing. Methods: A female voice expert recorded speech samples (sustained vowels and connected speech) in her normal voice and while i...
Purpose: Our aim was to investigate isolated and combined effects of speech-shaped noise (SSN) and a speaker’s impaired voice quality on spoken language processing in first-grade children. Method: In individual examinations, 53 typically developing children aged 5 to 6 years performed a speech perception task (phoneme discrimination) and a listenin...
At school, children often face challenging listening conditions due to high noise levels or because they are exposed to dysphonic speakers. To date, no comprehensive review has evaluated how this might affect spoken language processing (SLP). Our aim was to systematically review the literature on the effects of noise and/or impaired voice quality o...
This protocol describes the methods used for a systematic review on the effects of noise and speaker’s impaired voice on spoken language processing in school-aged children. Registration number: CRD42019137275 Prospero link: https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=137275
This study investigated the effect of degraded listening conditions and speech rate on children's answer accuracy and response time in a speech perception task. Fifty-three normally-developing children (aged 5-6 years) listened to 72 pseudo-word pairs presented at two different speech rates (normal and fast) and four different listening conditions...
This database contains audio files that were recorded and evaluated in the context of the project "Imitating dysphonic voice: a suitable technique to create speech stimuli for spoken language processing tasks?". Sample 1, Sample 2, Sample 3, Sample 4: Dysphonic and normophonic recordings that were perceptually evaluated by five independent SLPs (i...
Background – At school, acoustic conditions should allow children to listen effectively. But reality is often different. Many teachers suffer from voice problems, which may reduce their speech quality when lecturing. In addition, children typically face high classroom noise levels. Past studies indicate that listening to either impaired voice or ag...
Objectives: This study aimed (1) to investigate music theory teachers' professional and extra-professional vocal loading and background noise exposure, (2) to determine the correlation between vocal loading and background noise, and (3) to determine the correlation between vocal loading an...
Due to the high vocal demands associated with their profession, teachers face an increased risk of developing voice disorders. Research suggests that up to 50 % of experienced teachers are affected.1–3 Even student teachers, whose vocal load is still relatively low, report voice problems at a rate of 20 %.4 Little is known about the prevale...
INTRODUCTION: Music theory teachers, who teach rhythm, singing and other music-related skills, depend greatly on a well-functioning voice. Unlike other schoolteachers, who primarily use their voice as a pedagogic tool, music theory teachers also use it as an instrument. Furthermore, they often engage in vocally demanding free-time activities requir...
I would like to know if there are studies that investigated how long primary-school children are able to concentrate on a listening task. Are there official recommendations for a maximum task length?
I am searching for studies that investigate speaker normalization in children. For example, I wonder whether children around the age of six can already normalize acoustic differences between speakers as well as adults do. Any suggestions for literature on this topic?
Looking forward to reading your suggestions.
--- QUESTION SOLVED (thanks for your help!)---
I am interested in the effect of different exposures on participants' performance (% correct) in two tasks. Specifically, I would like to compare the effects of the different exposures as a function of task. To do so, I built the following model (simplified):
- performance ~ exposure (4 levels) × task (2 levels)
However, the results are difficult to interpret because the performance distributions of tasks 1 and 2 differ from one another. As the attached exemplary graphs show, participants generally performed better in task 1 than in task 2.
Does anyone have a suggestion for how I could reasonably investigate whether there is an interaction between exposure and task? For example, I am hoping to reach a conclusion like: "compared to the control (exposure A), exposure B had a significantly stronger effect on performance in task 2 than in task 1." Z-transformation could be the key, but I don't quite know how to proceed.
Thanks in advance for your suggestions.
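One way to sketch the z-transformation idea: standardize performance within each task so the two score distributions sit on a common scale, then compare exposure effects against the control on the standardized scores. A minimal Python illustration with simulated (entirely hypothetical) data; the exposure labels, sample sizes, and effect sizes are made up for demonstration, and the actual interaction test would still need a suitable model (e.g. a mixed model on the z-scores):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical data: 40 participants per cell, 4 exposures x 2 tasks,
# with task2 generally harder (lower % correct) than task1.
rows = []
for exp_i, exposure in enumerate(["A", "B", "C", "D"]):
    for task, base in [("task1", 85.0), ("task2", 65.0)]:
        for score in base - 3.0 * exp_i + rng.normal(0, 5, size=40):
            rows.append({"exposure": exposure, "task": task, "performance": score})
df = pd.DataFrame(rows)

# Z-transform performance WITHIN each task, so that "better than average
# in task 1" and "better than average in task 2" are directly comparable.
df["z_perf"] = df.groupby("task")["performance"].transform(
    lambda x: (x - x.mean()) / x.std()
)

# Exposure effects on the common scale: mean z-score per cell,
# expressed as a difference from the control exposure A, per task.
cell_means = df.groupby(["task", "exposure"])["z_perf"].mean().unstack()
effect_vs_control = cell_means.sub(cell_means["A"], axis=0)
print(effect_vs_control.round(2))
```

An exposure × task interaction would then show up as the B/C/D columns differing between the task1 and task2 rows; whether that difference is significant would be tested by fitting the original interaction model to z_perf instead of raw performance.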
I am interested in creating voice-impaired speech samples for a speech perception task. It seems that, to date, there is no speech synthesizer that can create natural-sounding speech with typical dysphonic characteristics (e.g., high jitter or shimmer values). But I might be wrong, since I am new to the field of speech synthesis. If you know of specific software or can recommend related publications, I'd appreciate your help.
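To illustrate what "high jitter" means at the signal level: jitter is cycle-to-cycle variation in the fundamental period, so a crude way to simulate it is to concatenate glottal-like cycles whose lengths are randomly perturbed. The NumPy sketch below is my own toy illustration under stated assumptions (a sawtooth-like cycle shape, a 3 % period perturbation), not an existing synthesizer, and it would not sound like natural dysphonic speech on its own:

```python
import numpy as np

fs = 16000          # sample rate in Hz (assumed)
f0 = 120.0          # target fundamental frequency in Hz (assumed)
jitter_pct = 3.0    # relative period perturbation; healthy voices are typically ~1 % or less
duration = 1.0      # seconds of signal to generate

rng = np.random.default_rng(1)
cycles = []
total = 0
while total < fs * duration:
    # Perturb each fundamental period by Gaussian noise -> jitter.
    period = (fs / f0) * (1 + rng.normal(0, jitter_pct / 100))
    n = max(2, int(round(period)))
    t = np.arange(n) / n
    cycles.append(1 - 2 * t)  # simple sawtooth-like "glottal" cycle
    total += n
signal = np.concatenate(cycles)[: int(fs * duration)]

# Measured local jitter: mean absolute difference of consecutive periods,
# relative to the mean period.
periods = np.array([len(c) for c in cycles])
local_jitter = np.mean(np.abs(np.diff(periods))) / periods.mean()
print(f"local jitter ~ {100 * local_jitter:.2f} %")
```

In practice, tools such as Praat are commonly used to measure jitter and shimmer from recordings; this sketch only shows the underlying idea of period perturbation.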
Can anybody tell me the frequency range of human speech sounds (vowels AND consonants)? I have read somewhere that it is between 80 and 20,000 Hz, but I need a reliable source to cite.
Thank you very much in advance,