Sophie Scott
  • University College London

About

203
Publications
56,314
Reads
18,491
Citations
Current institution
University College London

Publications (203)
Article
Full-text available
Human interaction is immersed in laughter; though genuine and posed laughter are acoustically distinct, they are both crucial socio-emotional signals. In this novel study, autistic and non-autistic adults explicitly rated the affective properties of genuine and posed laughter. Additionally, we explored whether their self-reported everyday experienc...
Article
Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter, due to their mentalizing difficul...
Article
Full-text available
Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others’ expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects...
Preprint
Full-text available
Background: While most research on the non-verbal communication challenges encountered by autistic people centres on visual stimuli, non-verbal vocalizations remain overlooked. Laughter serves as a socio-emotional signal for affiliative bonding in interactions. Autistic people seem to experience and produce laughter differently to non-autistic peop...
Article
Full-text available
Sound is processed in primate brains along anatomically and functionally distinct streams: this pattern can be seen in both human and non-human primates. We have previously proposed a general auditory processing framework in which these different perceptual profiles are associated with different computational characteristics. In this paper we consi...
Article
Full-text available
Robert Provine made several critically important contributions to science, and in this paper, we will elaborate some of his research into laughter and behavioural contagion. To do this, we will employ Provine's observational methods and use a recorded example of naturalistic laughter to frame our discussion of Provine's work. The laughter is from a...
Article
Full-text available
The effect of non-speech sounds, such as breathing noise, on the perception of speech timing is currently unclear. In this paper we report the results of three studies investigating participants' ability to detect a silent gap located adjacent to breath sounds during naturalistic speech. Experiment 1 (n = 24, in-person) asked whether participants c...
Article
Full-text available
The ability to learn and reproduce sequences is fundamental to everyday life, and deficits in sequential learning are associated with developmental disorders such as specific language impairment. Individual differences in sequential learning are usually investigated using the serial reaction time task (SRTT), wherein a participant responds to a ser...
Poster
Full-text available
How sensitive are listeners to experimental perturbations of the speech breathing time series? In a series of in-person and online experiments, we investigate the effects of duration and placement of a silent, artificially added gap inserted somewhere between speech breathing and speech sounds.
Article
Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested...
Article
Full-text available
The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper, there...
Article
Full-text available
Auditory verbal hallucinations (AVHs) – or hearing voices – occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects (i) a sensitivity to...
Article
Full-text available
Deciding whether others’ emotions are genuine is essential for successful communication and social relationships. While previous fMRI studies suggested that differentiation between authentic and acted emotional expressions involves higher-order brain areas, the time course of authenticity discrimination is still unknown. To address this gap, we tes...
Article
Full-text available
Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these lau...
Article
Full-text available
The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general...
Article
Full-text available
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they...
Article
Full-text available
Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laugh...
Article
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains un...
Preprint
Full-text available
Auditory verbal hallucinations (AVH) – or hearing voices – occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects i) a sensitivity t...
Preprint
Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners’ ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these lau...
Article
Full-text available
The ability to infer the authenticity of others’ emotional expressions is a social cognitive process taking place in all human interactions. Although the neurocognitive correlates of authenticity recognition have been probed, its potential recruitment of the peripheral autonomic nervous system is not known. In this work, we asked participants to ra...
Preprint
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they...
Article
Full-text available
Evidence for perceptual processing in models of speech production is often drawn from investigations in which the sound of a talker's voice is altered in real time to induce “errors.” Methods of acoustic manipulation vary but are assumed to engage the same neural network and psychological processes. This paper aims to review fMRI and PET studies of...
Preprint
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains un...
Preprint
Full-text available
Background: Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective process...
Article
Laughter is a positive vocal emotional expression: most laughter is found in social interactions [1]. We are overwhelmingly more likely to laugh when we are with other people [1], and laughter can play a very important communicative role [2]. We do of course also laugh at humor — but can laughter influence how funny we actually perceive the humorou...
Article
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perceptio...
Article
Full-text available
Nonverbal vocalisations such as laughter pervade social interactions, and the ability to accurately interpret them is an important skill. Previous research has probed the general mechanisms supporting vocal emotional processing, but the factors that determine individual differences in this ability remain poorly understood. Here, we ask whether the...
Article
Full-text available
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instru...
Article
Full-text available
Human voices are extremely variable: The same person can sound very different depending on whether they are speaking, laughing, shouting or whispering. In order to successfully recognise someone from their voice, a listener needs to be able to generalize across these different vocal signals ('telling people together'). However, in most studies of v...
Article
Full-text available
The ability to perceive the emotions of others is crucial for everyday social interactions. Important aspects of visual socioemotional processing, such as the recognition of facial expressions, are known to depend on largely automatic mechanisms. However, whether and how properties of automaticity extend to the auditory domain remains poorly unders...
Article
Full-text available
Previous studies have established a role for premotor cortex in the processing of auditory emotional vocalizations. Inhibitory continuous theta burst transcranial magnetic stimulation (cTBS) applied to right premotor cortex selectively increases the reaction time to a same-different task, implying a causal role for right ventral premotor cortex (PM...
Article
Full-text available
Altering reafferent sensory information can have a profound effect on motor output. Introducing a short delay [delayed auditory feedback (DAF)] during speech production results in modulations of voice and loudness, and produces a range of speech dysfluencies. The ability of speakers to resist the effects of delayed feedback is variable yet it is un...
Data
Data S1. Summary Data for Each Participant (Information on Group Membership, Parameter Estimates for Regions of Interest, and Behavioral Ratings), Related to STAR Methods. Data are provided for each participant for (1) group membership (i.e., typically developing, disruptive/high callous-unemotional traits, disruptive/low callous-unemotional traits...
Article
Full-text available
Humans are intrinsically social animals, forming enduring affiliative bonds [1]. However, a striking minority with psychopathic traits, who present with violent and antisocial behaviors, tend to value other people only insofar as they contribute to their own advancement [2, 3]. Extant research has addressed the neurocognitive processes associated w...
Preprint
Human voices are extremely variable: The same person can sound very different depending on whether they are speaking, laughing, shouting or whispering. In order to successfully recognise someone from their voice, a listener needs to be able to generalise across these different vocal signals ('telling people together'). However, in most studies of v...
Article
Full-text available
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes’ responses to intelligible auditory speech signals (McGettigan & Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Pre...
Preprint
Vocalizations are the production of sounds by the coordinated activity of up to eighty respiratory and laryngeal muscles. Whilst voiced acts, modified by the upper vocal tract (tongue, jaw, lip and palate) are central to the production of human speech, they are also central to the production of emotional vocalizations such as sounds of disgust, ang...
Article
Full-text available
Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically approached regarding its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing,...
Article
Full-text available
In 2 behavioral experiments, we explored how the extraction of identity-related information from familiar and unfamiliar voices is affected by naturally occurring vocal flexibility and variability, introduced by different types of vocalizations and levels of volitional control during production. In a first experiment, participants performed a speak...
Article
Full-text available
Hearing and imagining sounds – including speech, vocalizations, and music – can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions. Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery ta...
Article
Full-text available
Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted spontaneous, authentic laughter with volitional, fake laughter,...
Article
Full-text available
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or it...
Article
Full-text available
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However...
Article
Full-text available
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI...
Article
Faces and voices, in isolation, prompt consistent social evaluations. However, most human interactions involve both seeing and talking with another person. Our main goal was to investigate how facial and vocal information are combined to reach an integrated person impression. In Study 1, we asked participants to rate faces and voices separately for...
Article
Full-text available
We recently demonstrated that drowsiness, indexed using EEG, was associated with left-inattention in a group of 26 healthy right-handers. This has been linked to alertness-related modulation of spatial bias in left neglect patients and the greater persistence of left, compared with right, neglect following injury. Despite handedness being among the...
Article
Full-text available
Frontotemporal dementia is an important neurodegenerative disorder of younger life led by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) in relation to healthy age-matched individuals (n = 11). In a passive-listening pa...
Article
Full-text available
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational m...
Article
Full-text available
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such 'quantized' views by employing measures of recall preci...
Article
Full-text available
What is rhythm? An immediate answer to this question appears simple and might be linked with everyday examples of rhythmic events or behaviours like dancing, listening to a heartbeat or rocking a baby to sleep. However, a unified scientific definition of rhythm remains elusive. For decades, research programmes concerning rhythm and rhythmic organiz...
Article
Full-text available
Speech perception problems lead to many different forms of communication difficulties, and remediation for these problems remains of critical interest. A recent study by Kraus et al. (2014b), published in the Journal of Neuroscience, used a randomized controlled trial (RCT) approach to identify how low intensity community-based musical enrichment...
Article
Full-text available
Laughter is often considered to be the product of humour. However, laughter is a social emotion, occurring most often in interactions, where it is associated with bonding, agreement, affection, and emotional regulation. Laughter is underpinned by complex neural systems, allowing it to be used flexibly. In humans and chimpanzees, social (voluntary)...
Article
When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of funct...
Article
Full-text available
It is well established that categorising the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad...
Article
Full-text available
Unilateral brain damage can lead to a striking deficit in awareness of stimuli on one side of space called Spatial Neglect. Patient studies show that neglect of the left is markedly more persistent than of the right and that its severity increases under states of low alertness. There have been suggestions that this alertness-spatial awareness link...
Article
Full-text available
This study focuses on the neural processing of English sentences containing unergative, unaccusative and transitive verbs. We demonstrate common responses in bilateral superior temporal gyri in response to listening to sentences containing unaccusative and transitive verbs compared to unergative verbs; we did not detect any activation that was spec...
Article
Full-text available
The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extractio...
Article
Full-text available
Noise-vocoding is a transformation which, when applied to speech, severely reduces spectral resolution and eliminates periodicity, yielding a stimulus that sounds "like a harsh whisper" (Scott, Blank et al. 2000). This process simulates a cochlear implant, where the activity of many thousand hair cells in the inner ear is replaced by direct stimula...
Article
Full-text available
It is well established that emotion recognition of facial expressions declines with age, but evidence for age-related differences in vocal emotions is more limited. This is especially true for nonverbal vocalizations such as laughter, sobs, or sighs. In this study, 43 younger adults (M = 22 years) and 43 older ones (M = 61.4 years) provided multipl...
Article
Full-text available
It is not unusual to find it stated as a fact that the left hemisphere is specialized for the processing of rapid, or temporal aspects of sound, and that the dominance of the left hemisphere in the perception of speech can be a consequence of this specialization. In this review we explore the history of this claim and assess the weight of this assu...
Article
Full-text available
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magn...
Article
Full-text available
Humans express laughter differently depending on the context: polite titters of agreement are very different from explosions of mirth. Using functional MRI, we explored the neural responses during passive listening to authentic amusement laughter and controlled, voluntary laughter. We found greater activity in anterior medial prefrontal cortex (amP...
Article
Full-text available
Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129–135, 2004]. Our voices are highly flexible and dynamic; talkers...
Article
Full-text available
Spoken language is rarely heard in silence, and a great deal of interest in psychoacoustics has focused on the ways that the perception of speech is affected by properties of masking noise. In this review we first briefly outline the neuroanatomy of speech perception. We then summarise the neurobiological aspects of the perception of masked speech,...
Article
Native listeners make use of higher-level, context-driven semantic and linguistic information during the perception of speech-in-noise. In a recent behavioural study, using a new paradigm that isolated the semantic level of speech by using words, we showed that this native-language benefit is at least partly driven by semantic context (Golestani et...
Article
Full-text available
An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior te...
Chapter
Prominent scholars consider the cognitive and neural similarities between birdsong and human speech and language. Scholars have long been captivated by the parallels between birdsong and human speech and language. In this book, leading scholars draw on the latest research to explore what birdsong can tell us about the biology of human speech and la...
Article
Full-text available
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptu...
Article
This article presents a review of the effects of adverse conditions (ACs) on the perceptual, linguistic, cognitive, and neurophysiological mechanisms underlying speech recognition. The review starts with a classification of ACs based on their origin: Degradation at the source (production of a noncanonical signal), degradation during signal transmis...
Article
Full-text available
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The...
Article
Our understanding of the neurobiological basis for human speech production and perception has benefited from insights from psychology, neuropsychology and neurology. In this overview, I outline some of the ways that functional imaging has added to this knowledge and argue that, as a neuroanatomical tool, functional imaging has led to some significa...
Article
Full-text available
Over the past 30 years hemispheric asymmetries in speech perception have been construed within a domain-general framework, according to which preferential processing of speech is due to left-lateralized, non-linguistic acoustic sensitivities. A prominent version of this argument holds that the left temporal lobe selectively processes rapid/temporal...
Article
Full-text available
Functional imaging studies of speech perception have revealed extensive involvement of the dorsolateral temporal lobes in aspects of speech and voice processing. In the current study, fMRI was used as a functional imaging method to address the perception of speech in different masking conditions. Throughout the scanning experiment, subjects were di...
Article
Full-text available
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact...
Article
Full-text available
The question of hemispheric lateralization of neural processes is one that is pertinent to a range of subdisciplines of cognitive neuroscience. Language is often assumed to be left-lateralized in the human brain, but there has been a long running debate about the underlying reasons for this. We addressed this problem with fMRI by identifying the ne...
Data
Representative spectrograms and sounds for the various stimulus conditions. For each row of the figure, time is on the x-axis, frequency on the y, with the darkness of the trace indicating the amount of energy present at each particular time/frequency co-ordinate. Each row gives a single example from a particular condition. Conditions are named usi...