About
196 Publications · 43,773 Reads
16,258 Citations

Publications (196)
Sound is processed in primate brains along anatomically and functionally distinct streams: this pattern can be seen in both human and non-human primates. We have previously proposed a general auditory processing framework in which these different perceptual profiles are associated with different computational characteristics. In this paper we consi...
Robert Provine made several critically important contributions to science, and in this paper, we will elaborate some of his research into laughter and behavioural contagion. To do this, we will employ Provine's observational methods and use a recorded example of naturalistic laughter to frame our discussion of Provine's work. The laughter is from a...
The effect of non-speech sounds, such as breathing noise, on the perception of speech timing is currently unclear. In this paper we report the results of three studies investigating participants' ability to detect a silent gap located adjacent to breath sounds during naturalistic speech. Experiment 1 (n = 24, in-person) asked whether participants c...
The ability to learn and reproduce sequences is fundamental to everyday life, and deficits in sequential learning are associated with developmental disorders such as specific language impairment. Individual differences in sequential learning are usually investigated using the serial reaction time task (SRTT), wherein a participant responds to a ser...
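A minimal console sketch of the kind of trial loop an SRTT involves, for illustration only (the studies described here would use precise-timing experiment software; the key mapping, sequence, and block lengths below are arbitrary): stimulus positions follow a repeating sequence in some blocks and are random in others, and sequence learning is indexed by the reaction-time difference between the two.

```python
# Illustrative serial reaction time task (SRTT) skeleton; not the task
# software used in this research. Timing via input() is coarse.
import random
import time

SEQUENCE = [1, 3, 2, 4, 3, 1, 4, 2]      # arbitrary repeating 8-item sequence
KEYS = {"1": 1, "2": 2, "3": 3, "4": 4}  # keypress -> screen position

def run_block(trial_positions):
    """Show each target position and record the time to a correct keypress."""
    rts = []
    for pos in trial_positions:
        print(f"Target position: {pos}  (press 1-4)")
        t0 = time.perf_counter()
        while True:
            resp = input("> ").strip()
            if KEYS.get(resp) == pos:
                rts.append(time.perf_counter() - t0)
                break
    return rts

# Sequenced block (fixed sequence repeated) vs. random block (shuffled positions).
sequenced_rts = run_block(SEQUENCE * 2)
random_rts = run_block([random.choice([1, 2, 3, 4]) for _ in range(16)])

# Larger positive values indicate more sequence-specific speeding.
learning_index = sum(random_rts) / len(random_rts) - sum(sequenced_rts) / len(sequenced_rts)
print(f"Sequence learning index (random - sequenced mean RT): {learning_index:.3f} s")
```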
How sensitive are listeners to experimental perturbations of the speech breathing time series? In a series of in-person and online experiments, we investigate the effects of duration and placement of a silent, artificially added gap inserted somewhere between speech breathing and speech sounds.
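As an illustration of the perturbation described in these gap-detection studies, the sketch below inserts a silent gap of a chosen duration at a chosen time point in a mono signal. It assumes NumPy and synthetic audio; it is not the authors' stimulus-generation code, and the sample rate, gap duration, and insertion point are placeholders.

```python
# Minimal sketch: insert a silent gap into a mono audio signal, e.g. between
# a breath sound and the following speech. Parameters are illustrative.
import numpy as np

def insert_silent_gap(signal, sr, gap_ms, insert_s):
    """Return a copy of `signal` with `gap_ms` of silence inserted at `insert_s` seconds."""
    gap = np.zeros(int(sr * gap_ms / 1000.0), dtype=signal.dtype)
    i = int(sr * insert_s)
    return np.concatenate([signal[:i], gap, signal[i:]])

# Stand-in "recording": 1 s of a 220 Hz tone at 44.1 kHz, with a 200 ms gap at 0.5 s.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
audio = (0.1 * np.sin(2 * np.pi * 220 * t)).astype(np.float32)
perturbed = insert_silent_gap(audio, sr, gap_ms=200, insert_s=0.5)
```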
Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested...
The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper, there...
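For readers unfamiliar with the feature, one common way to estimate an amplitude envelope (not necessarily the pipeline used in this paper) is to take the magnitude of the analytic signal and low-pass it. A minimal sketch assuming NumPy/SciPy and a synthetic amplitude-modulated tone:

```python
# One common broadband envelope estimate: |Hilbert analytic signal|, smoothed
# with a low-pass filter. Cutoff and filter order here are illustrative.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(signal, sr, cutoff_hz=10.0):
    """Magnitude of the analytic signal, low-pass filtered at `cutoff_hz`."""
    env = np.abs(hilbert(signal))
    b, a = butter(4, cutoff_hz / (sr / 2.0), btype="low")
    return filtfilt(b, a, env)

# Synthetic test signal: a 150 Hz carrier modulated at ~4 Hz (roughly syllabic rate).
sr = 16000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
signal = np.sin(2 * np.pi * 150 * t) * 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
envelope = amplitude_envelope(signal, sr)
```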
Auditory verbal hallucinations (AVHs) – or hearing voices – occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects (i) a sensitivity to...
Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these lau...
The networks of cortical and subcortical fields that contribute to speech production have benefitted from many years of detailed study, and have been used as a framework for human volitional vocal production more generally. In this article, I will argue that we need to consider speech production as an expression of the human voice in a more general...
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they...
Deciding whether others’ emotions are genuine is essential for successful communication and social relationships. While previous fMRI studies suggested that differentiation between authentic and acted emotional expressions involves higher-order brain areas, the time course of authenticity discrimination is still unknown. To address this gap, we tes...
The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper theref...
The ability to learn and reproduce sequences is fundamental to everyday life, and deficits in sequential learning are associated with developmental disorders such as specific language impairment. Individual differences in sequential learning are usually investigated using the serial reaction time task (SRTT), wherein a participant responds to a ser...
Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laugh...
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains un...
Auditory verbal hallucinations (AVH) – or hearing voices – occur in clinical and non-clinical populations, but their mechanisms remain unclear. Predictive processing models of psychosis have proposed that hallucinations arise from an over-weighting of prior expectations in perception. It is unknown, however, whether this reflects i) a sensitivity t...
Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners’ ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these lau...
The ability to infer the authenticity of others' emotional expressions is a social cognitive process taking place in all human interactions. Although the neurocognitive correlates of authenticity recognition have been probed, its potential recruitment of the peripheral autonomic nervous system is not known. In this work, we asked participants to ra...
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they...
Evidence for perceptual processing in models of speech production is often drawn from investigations in which the sound of a talker's voice is altered in real time to induce “errors.” Methods of acoustic manipulation vary but are assumed to engage the same neural network and psychological processes. This article aims to review fMRI and PET studies...
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains un...
Background: Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective process...
Laughter is a positive vocal emotional expression: most laughter is found in social interactions [1]. We are overwhelmingly more likely to laugh when we are with other people [1], and laughter can play a very important communicative role [2]. We do of course also laugh at humor — but can laughter influence how funny we actually perceive the humorou...
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perceptio...
Nonverbal vocalisations such as laughter pervade social interactions, and the ability to accurately interpret them is an important skill. Previous research has probed the general mechanisms supporting vocal emotional processing, but the factors that determine individual differences in this ability remain poorly understood. Here, we ask whether the...
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instru...
Human voices are extremely variable: The same person can sound very different depending on whether they are speaking, laughing, shouting or whispering. In order to successfully recognise someone from their voice, a listener needs to be able to generalize across these different vocal signals ('telling people together'). However, in most studies of v...
The ability to perceive the emotions of others is crucial for everyday social interactions. Important aspects of visual socioemotional processing, such as the recognition of facial expressions, are known to depend on largely automatic mechanisms. However, whether and how properties of automaticity extend to the auditory domain remains poorly unders...
Previous studies have established a role for premotor cortex in the processing of auditory emotional vocalizations. Inhibitory continuous theta burst transcranial magnetic stimulation (cTBS) applied to right premotor cortex selectively increases the reaction time to a same-different task, implying a causal role for right ventral premotor cortex (PM...
Altering reafferent sensory information can have a profound effect on motor output. Introducing a short delay [delayed auditory feedback (DAF)] during speech production results in modulations of voice and loudness, and produces a range of speech dysfluencies. The ability of speakers to resist the effects of delayed feedback is variable, yet it is un...
Data S1. Summary Data for Each Participant (Information on Group Membership, Parameter Estimates for Regions of Interest, and Behavioral Ratings), Related to STAR Methods
Data are provided for each participant for (1) group membership (i.e., typically developing, disruptive/high callous-unemotional traits, disruptive/low callous-unemotional traits...
Humans are intrinsically social animals, forming enduring affiliative bonds [1]. However, a striking minority with psychopathic traits, who present with violent and antisocial behaviors, tend to value other people only insofar as they contribute to their own advancement [2, 3]. Extant research has addressed the neurocognitive processes associated w...
Supplementary material
Human voices are extremely variable: The same person can sound very different depending on whether they are speaking, laughing, shouting or whispering. In order to successfully recognise someone from their voice, a listener needs to be able to generalise across these different vocal signals ('telling people together'). However, in most studies of v...
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes’ responses to intelligible auditory speech signals (McGettigan & Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Pre...
Vocalizations are the production of sounds by the coordinated activity of up to eighty respiratory and laryngeal muscles. Whilst voiced acts, modified by the upper vocal tract (tongue, jaw, lip and palate), are central to the production of human speech, they are also central to the production of emotional vocalizations such as sounds of disgust, ang...
Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically approached regarding its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing,...
In 2 behavioral experiments, we explored how the extraction of identity-related information from familiar and unfamiliar voices is affected by naturally occurring vocal flexibility and variability, introduced by different types of vocalizations and levels of volitional control during production. In a first experiment, participants performed a speak...
Trends
Hearing and imagining sounds – including speech, vocalizations, and music – can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions.
Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery ta...
Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted spontaneous, authentic laughter with volitional, fake laughter,...
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or it...
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However...
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI...
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI...
Faces and voices, in isolation, prompt consistent social evaluations. However, most human interactions involve both seeing and talking with another person. Our main goal was to investigate how facial and vocal information are combined to reach an integrated person impression. In Study 1, we asked participants to rate faces and voices separately for...
We recently demonstrated that drowsiness, indexed using EEG, was associated with left-inattention in a group of 26 healthy right-handers. This has been linked to alertness-related modulation of spatial bias in left neglect patients and the greater persistence of left, compared with right, neglect following injury. Despite handedness being among the...
Frontotemporal dementia is an important neurodegenerative disorder of younger life led by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) in relation to healthy age-matched individuals (n = 11). In a passive-listening pa...
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational m...
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such 'quantized' views by employing measures of recall preci...
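For context on what a continuous measure of recall precision can look like, the sketch below computes one common index from the visual working-memory literature (the reciprocal of the circular standard deviation of recall errors); the analysis used in this paper may well differ, and the simulated data are purely illustrative.

```python
# Precision of continuous recall responses on a circular feature space:
# 1 / circular SD of the response errors. Simulated data for illustration.
import numpy as np

def recall_precision(targets_rad, responses_rad):
    """Precision = 1 / circular standard deviation of recall errors (radians)."""
    errors = np.angle(np.exp(1j * (responses_rad - targets_rad)))  # wrap to (-pi, pi]
    resultant = np.abs(np.mean(np.exp(1j * errors)))               # mean resultant length
    circ_sd = np.sqrt(-2.0 * np.log(resultant))
    return 1.0 / circ_sd

rng = np.random.default_rng(0)
targets = rng.uniform(-np.pi, np.pi, 100)
responses = targets + rng.normal(0.0, 0.3, 100)   # noisy recall of each target
print(f"Estimated precision: {recall_precision(targets, responses):.2f} rad^-1")
```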
What is rhythm? An immediate answer to this question appears simple and might be linked with everyday examples of rhythmic events or behaviours like dancing, listening to a heartbeat or rocking a baby to sleep. However, a unified scientific definition of rhythm remains elusive. For decades, research programmes concerning rhythm and rhythmic organiz...
Speech perception problems lead to many different forms of communication difficulties, and remediation for these problems remains of critical interest. A recent study by Kraus et al. (2014b), published in the Journal of Neuroscience, used a randomized controlled trial (RCT) approach to identify how low intensity community-based musical enrichment...
Laughter is often considered to be the product of humour. However, laughter is a social emotion, occurring most often in interactions, where it is associated with bonding, agreement, affection, and emotional regulation. Laughter is underpinned by complex neural systems, allowing it to be used flexibly. In humans and chimpanzees, social (voluntary)...
It is well established that emotion recognition of facial expressions declines with age, but evidence for age-related differences in vocal emotions is more limited. This is especially true for nonverbal vocalizations such as laughter, sobs, or sighs. In this study, 43 younger adults (M = 22 years) and 43 older ones (M = 61.4 years) provided multipl...
When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of funct...
It is well established that categorising the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad...
Unilateral brain damage can lead to a striking deficit in awareness of stimuli on one side of space, known as spatial neglect. Patient studies show that neglect of the left is markedly more persistent than neglect of the right, and that its severity increases under states of low alertness. There have been suggestions that this alertness-spatial awareness link...
This study focuses on the neural processing of English sentences containing unergative, unaccusative and transitive verbs. We demonstrate common responses in bilateral superior temporal gyri in response to listening to sentences containing unaccusative and transitive verbs compared to unergative verbs; we did not detect any activation that was spec...