Amanda Seidl

Purdue University · Department of Speech, Language and Hearing Sciences

PhD

About

100 Publications
22,967 Reads
2,563 Citations
Additional affiliations
August 2003 - present
Purdue University
Position
  • Professor (Full)

Publications (100)
Article
Full-text available
Behavioral differences in responding to tactile and auditory stimuli are widely reported in individuals with autism spectrum disorder (ASD). However, the neural mechanisms underlying distinct tactile and auditory reactivity patterns in ASD remain unclear with theories implicating differences in both perceptual and attentional processes. The current...
Article
Purpose: Recording young children's vocalizations through wearables is a promising method to assess language development. However, accurately and rapidly annotating these files remains challenging. Online crowdsourcing with the collaboration of citizen scientists could be a feasible solution. In this article, we assess the extent to which citizen sc...
Article
Full-text available
The alerting network, a subcomponent of attention, enables humans to respond to novel information. Children with ASD have shown equivalent alerting in response to visual and/or auditory stimuli compared to typically developing (TD) children. However, it is unclear whether children with ASD and TD show equivalent alerting to tactile stimuli. We exam...
Article
Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to co...
Article
This study evaluates whether early vocalizations develop in similar ways in children across diverse cultural contexts. We analyze data from daylong audio recordings of 49 children (1‐36 months) from five different language/cultural backgrounds. Citizen scientists annotated these recordings to determine if child vocalizations contained canonical tra...
Article
Full-text available
Infants form object categories in the first months of life. By 3 months and throughout the first year, successful categorization varies as a function of the acoustic information presented in conjunction with category members. Here we ask whether tactile information, delivered in conjunction with category members, also promotes categorization. Six-...
Preprint
Recording young children's vocalizations through wearables is a promising method. However, accurately and rapidly annotating these files remains challenging. Online crowdsourcing with the collaboration of citizen scientists could be a feasible solution. In this paper, we assess the extent to which citizen scientists' annotations align with those ga...
Preprint
Recent developments allow the collection of audio data from lightweight wearable devices, potentially enabling us to study language use from everyday life samples. However, extracting useful information from these data is currently impossible with automatized routines, and overly expensive with trained human annotators. We explore a strategy fit to...
Article
Full-text available
Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant...
Article
We examined full-term and preterm infants’ perception of frequent and infrequent phonotactic pairings involving sibilants and liquids. Infants were tested on their preference for syllables with onsets involving /s/ or /ʃ/ followed by /l/ or /r/ using the Headturn Preference Procedure. Full-term infants preferred the frequent to the infrequent phono...
Preprint
This study evaluates if babbling emerges similarly in children across diverse cultural contexts. Fifty-two children (1-36 months), exposed to five languages, were recorded over the course of one day. Citizen scientists annotated short clips from these recordings to determine if child vocalizations were canonical or not (“ba” versus “a”). Canonical...
Article
Multimodal communication may facilitate attention in infants. This study examined the presentation of caregiver touch-only and touch + speech input to 12-month-olds at high (HRA) and low risk for ASD. Findings indicated that, although both groups received a greater number of touch + speech bouts compared to touch-only bouts, the duration of overall...
Article
Fragile X syndrome (FXS) is a neurogenetic syndrome characterized by cognitive impairments and high rates of autism spectrum disorder (ASD). FXS is often highlighted as a model for exploring pathways of symptom expression in ASD due to the high prevalence of ASD symptoms in this population and the known single‐gene cause of FXS. Early vocalization...
Article
Full-text available
Atypical response to tactile input is associated with greater socio-communicative impairments in individuals with autism spectrum disorder (ASD). The current study examined overt orienting to caregiver-initiated touch in 12-month-olds at high risk for ASD (HRA) with (HRA+) and without (HRA−) a later diagnosis of ASD compared to low-risk comparison...
Article
Full-text available
Purpose: Caregivers may show greater use of nonauditory signals in interactions with children who are deaf or hard of hearing (DHH). This study explored the frequency of maternal touch and the temporal alignment of touch with speech in the input to children who are DHH and age-matched peers with normal hearing. Method: We gathered audio and video re...
Article
Touch cues might facilitate infants’ early word comprehension and explain the early understanding of body part words. Parents were instructed to teach their infants, 4- to 5-month-olds or 10- to 11-month-olds, nonce words for body parts and a contrast object. Importantly, they were given no instructions about the use of touch. Parents spontaneously...
Article
From early on, young children are sensitive to talker-specific attributes present in the speech signal. For example, infants attend selectively and learn better from their mother’s voice than another female voice. Since age influences vocal quality, we asked whether toddlers show similar selective attention and learning from talkers of specific age...
Article
The cover image is based on the paper "What Do North American Babies Hear? A large‐scale cross‐corpus analysis" by Elika Bergelson et al., DOI 10.1111/desc.12724
Article
A range of demographic variables influences how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day‐long recordings from 6...
Article
Full-text available
Purpose: One promising early marker for autism and other communicative and language disorders is early infant speech production. Here we used daylong recordings of high- and low-risk infant-mother dyads to examine whether acoustic-prosodic alignment as well as two automated measures of infant vocalization are related to developmental risk status i...
Article
This project explored whether disruption of articulation during listening impacts subsequent speech production in 4-yr-olds with and without speech sound disorder (SSD). During novel word learning, typically-developing children showed effects of articulatory disruption as revealed by larger differences between two acoustic cues to a sound contrast,...
Article
Full-text available
Infants' experiences are defined by the presence of concurrent streams of perceptual information in social environments. Touch from caregivers is an especially pervasive feature of early development. Using three lab experiments and a corpus of naturalistic caregiver-infant interactions, we examined the relevance of touch in supporting infants' lear...
Conference Paper
Full-text available
Human children outperform artificial learners because the former quickly acquire a multimodal, syntactically informed, and ever-growing lexicon with little evidence. Most of this lexicon is unlabelled and processed with unsupervised mechanisms, leading to robust and generalizable knowledge. In this paper, we summarize results related to 4-month-old...
Preprint
The field of psychology has become increasingly concerned with issues related to methodology and replicability. Infancy researchers face specific challenges related to replicability: high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations, amongst other facto...
Poster
Throughout their development, infants are exposed to varying speaking rates. Thus, it is important to determine whether they are able to adapt to speech at varying rates and recognize target words from continuous speech despite speaking rate differences. To address this question, a series of four experiments were conducted to test whether infants c...
Chapter
Full-text available
Infant-directed speech is characterized by marked differences from adult-directed speech in both prosodic and segmental properties. This review examines three acoustic cues (pitch, duration, and vowel space), in speech directed to infants with the aim of addressing two important questions. First, do infant- and adult-directed speech differ in simil...
Article
A long line of research investigates how infants learn the sounds and words in their ambient language over the first year of life, through behavioral tasks involving discrimination and recognition. More recently, individual performance in such tasks has been used to predict later language development. Does this mean that dependent measures in such...
Article
Full-text available
Both touch and speech independently have been shown to play an important role in infant development. However, little is known about how they may be combined in the input to the child. We examined the use of touch and speech together by having mothers read their 5-month-olds books about body parts and animals. Results suggest that speech+touch multi...
Article
Recent work has shown that children have detailed phonological representations of consonants at both word-initial and word-final edges. Nonetheless, it remains unclear whether onsets and codas are equally represented by young learners, since word edges are isomorphic with syllable edges in this work. The current study sought to explore toddlers' sen...
Article
Full-text available
This paper investigates the interaction between two people, namely, a caregiver and an infant. A particular type of action in human interaction known as “touch” is described. We propose a method to detect “touch event” that uses color and motion features to track the hand positions of the caregiver. Our approach addresses the problem of hand occlus...
Article
Full-text available
Entrainment of prosody in the interaction of mothers with their young children – ERRATUM. Volume 43, Issue 4. Eon-Suk Ko, Amanda Seidl, Alejandrina Cristia, Melissa Reimchen, Melanie Soderstrom
Article
Full-text available
Touch screens are increasingly prevalent, and anecdotal evidence suggests that young children are very drawn towards them. Yet there is little data regarding how young children use them. A brief online questionnaire queried over 450 French parents of infants between the ages of 5 and 40 months on their young child's use of touch-screen technology....
Article
Full-text available
Caregiver speech is not a static collection of utterances, but occurs in conversational exchanges, in which caregiver and child dynamically influence each other's speech. We investigate (a) whether children and caregivers modulate the prosody of their speech as a function of their interlocutor's speech, and (b) the influence of the initiator of th...
Conference Paper
Full-text available
Infant-directed speech (IDS) is thought to play a key role in determining infant language acquisition. It is thus important to describe how computational models of infant language acquisition behave when given an input of IDS, as compared to adult-directed speech (ADS). In this paper, we explore how an acoustic motif discovery algorithm fares when...
Article
Cross-linguistically, languages allow a wider variety of phonotactic patterns in onsets than in codas. However, the variability of phonotactic patterns in coda position in different languages suggests these patterns must, at least in part, be learned. Two experiments were conducted to explore whether there is an asymmetry in English-learning infant...
Article
Previous work reveals that toddlers can accommodate a novel accent after hearing it for only a brief period of time. A common assumption is that children, like adults, cope with nonstandard pronunciations by relying on words they know (e.g. ‘this person pronounces sock as sack, therefore by black she meant block’). In this paper, we assess whether...
Article
A growing research line documents significant bivariate correlations between individual measures of speech perception gathered in infancy and concurrent or later vocabulary size. One interpretation of this correlation is that it reflects language specificity: Both speech perception tasks and the development of the vocabulary recruit the same lingui...
Article
Full-text available
Previous studies have shown that infant-directed speech (IDS) differs from adult-directed speech (ADS) on a variety of dimensions. The aim of the current study was to investigate whether acoustic differences between IDS and ADS in English are modulated by prosodic structure. We compared vowels across the two registers (IDS, ADS) in both stressed an...
Article
Allophones are diverse phonetic instantiations of a single underlying sound category. As such, they pose a peculiar problem for infant language learners: These variants occur in the ambient language, but they are not used to encode lexical contrasts. Infants' sensitivity to sounds varying along allophonic dimensions declines by 11 months of age, su...
Article
The lexicon of 6-month-olds comprises names and body part words. Unlike names, body part words do not often occur in isolation in the input. This presents a puzzle: How have infants been able to pull out these words from the continuous stream of speech at such a young age? We hypothesize that caregivers' interactions directed at and on the in...
Conference Paper
This talk will examine whether speech processing style on a variety of speech processing tasks (both segmental and suprasegmental) interacts with diagnosis of autism, the presence/absence of specific autistic traits, as well as verbal and non-verbal IQ. Preliminary results indicate clear interactions between diagnosis and task performance style for...
Article
Full-text available
Does the acoustic input for bilingual infants equal the conjunction of the input heard by monolinguals of each separate language? The present letter tackles this question, focusing on maternal speech addressed to 11-month-old infants, on the cusp of perceptual attunement. The acoustic characteristics of the point vowels /a,i,u/ were measured in the...
Article
Full-text available
Past research has shown that English learners begin segmenting words from speech by 7.5 months of age. However, more recent research has begun to show that, in some situations, infants may exhibit rudimentary segmentation capabilities at an earlier age. Here, we report on four perceptual experiments and a corpus analysis further investigating the i...
Article
Full-text available
Languages vary not only in terms of their sound inventory, but also in the phonological status certain sound distinctions are assigned. For example, while vowel nasality is lexically contrastive (phonemic) in Quebecois French, it is largely determined by the context (allophonic) in American English; the reverse is true for vowel tenseness. If phone...
Article
There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate theoretical models of language development and contribut...
Article
We investigated how talker variability impacts novel phonological pattern learning in 4- and 11-month-olds. Both age groups were better able to discriminate between legal and illegal phonotactic strings after exposure to multiple talkers than a single talker. It is argued that these data may be best accounted for by hybrid models that include lingu...
Article
Full-text available
Typically, the point vowels [i,ɑ,u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same und...
Article
Full-text available
In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic pr...
Article
Full-text available
There is a substantial literature describing how infants become more sensitive to differences between native phonemes (sounds that are both present and meaningful in the input) and less sensitive to differences between non-native phonemes (sounds that are neither present nor meaningful in the input) over the course of development. Here, we review a...
Article
Both subjective impressions and previous research with monolingual listeners suggest that a foreign accent interferes with word recognition in infants, young children, and adults. However, because being exposed to multiple accents is likely to be an everyday occurrence in many societies, it is unexpected that such non-standard pronunciations would...
Article
Full-text available
Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between...
Article
A current theoretical view proposes that infants converge on the speech categories of their native language by attending to frequency distributions that occur in the acoustic input. To date, the only empirical support for this statistical learning hypothesis comes from studies where a single, salient dimension was manipulated. Additional evidence i...
Article
Foreign accents incur processing costs for monolingual listeners, compromising speech perception for both adults [Munro and Derwing (1995)] and infants [Schmale and Seidl (2009)]. However, some adult work suggests that this disadvantage is modulated by accent experience [Bradlow and Bent (2008)]. It remains unknown whether exposure to foreign‐accen...
Article
Full-text available
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested us...
Article
By their second birthday, children are beginning to map meaning to form with relative ease. One challenge for these developing abilities is separating information relevant to word identity (i.e. phonemic information) from irrelevant information (e.g. voice and foreign accent). Nevertheless, little is known about toddlers' abilities to ignore irrele...
Chapter
Full-text available
Features serve two main purposes in the phonology of languages: First, they delimit sets of sounds that participate in phonological processes and patterns (the classificatory function); and, second, they encode the distinction between pairs of contrastive phonemes (the distinctive function). In this chapter, we summarize evidence from a variety of...
Article
Full-text available
Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptu...
Article
Toward the end of their first year of life, infants’ overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid...
Article
This study investigated the use of MFCC and SVM for automatic detection and comparison of vowel nasalization. The standard 39 MFCC coefficients were extracted at the center of the vowel, and an SVM classifier was built to discriminate between oral and nasalized vowels in a vowel-independent manner. When trained on the TIMIT training set and tested...
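The pipeline described in this abstract — 39 MFCC coefficients taken at the vowel center, fed to an SVM that separates oral from nasalized vowels — can be sketched as follows. This is a minimal illustration, not the study's code: synthetic feature vectors stand in for MFCCs extracted from TIMIT audio, and the class separation is artificial.

```python
# Hedged sketch of an oral/nasal vowel classifier in the style described
# above. Assumption: synthetic 39-dim vectors stand in for real MFCC
# features (13 MFCCs + 13 deltas + 13 delta-deltas at the vowel midpoint).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_per_class = 100
# Artificially separated classes; real features would come from audio.
oral = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 39))
nasal = rng.normal(loc=1.0, scale=1.0, size=(n_per_class, 39))

X = np.vstack([oral, nasal])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = oral, 1 = nasalized

# Vowel-independent classifier: one SVM over all vowel tokens.
clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
```

On real data one would of course report held-out (e.g., TIMIT test-set) accuracy rather than training accuracy as here.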
Article
In six experiments with English-learning infants, we examined the effects of variability in voice and foreign accent on word recognition. We found that 9-month-old infants successfully recognized words when two native English talkers with dissimilar voices produced test and familiarization items (Experiment 1). When the domain of variability was sh...
Article
English-learning 7.5- but not 6-month-olds extract word forms from fluent speech (Jusczyk et al., 1999). Thus, English learners are thought to begin segmenting words from speech by 7.5 months. However, recent research has shown that when target words are flanked by a frequent and emotionally salient word (e.g., the infant's name), even 6-month-olds...
Article
English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decrea...
Article
Previous work suggests that learning to perceive speech categories in infancy may be influenced by the distributions of acoustic cues that underlie them. However, studies on this topic have focused on distributions of single cues, especially voice onset time (VOT), whereas natural phonetic categories are typically defined according to multiple cues...
Article
Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants' attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically well-formed clauses and phrases may also help infants begin to extract information important for learning the...
Article
Previous research has shown that the weighting of, or attention to, acoustic cues at the level of the segment changes over the course of development (Nittrouer & Miller, 1997; Nittrouer, Manning & Meyer, 1993). In this paper we examined changes over the course of development in weighting of acoustic cues at the suprasegmental level. Specifically, w...
Article
Full-text available
Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Hal...