About
176 Publications
48,746 Reads
6,986 Citations
Introduction
Our research focuses on how humans use their voice to communicate emotion and social intentions in spoken language, with the aim of advancing knowledge of the cognitive and affective processes that underlie human communication and social behaviour.
Current institution
Additional affiliations
April 2022 - April 2025
June 2002 - May 2020
Education
September 1993 - February 1997
Publications (176)
Our decision to believe what another person says can be influenced by vocally expressed confidence in speech and by whether the speaker and listener are members of the same social group. The dynamic effects of these two information sources on neurocognitive processes that promote believability impressions from vocal cues are unclear. Here, English Cana...
Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in whic...
Emotional voices attract considerable attention. A search on any browser using “emotional prosody” as a key phrase leads to more than a million entries. Such interest is evident in the scientific literature as well; readers are reminded in the introductory paragraphs of countless articles of the great importance of prosody and that listeners easily...
Evaluative statements are routine in interpersonal communication but may evoke different responses depending on the speaker’s identity. Here, thirty participants listened to direct compliments and criticisms spoken in native or foreign accents and rated the speaker's friendliness as their electroencephalogram was recorded. Event-related potentials...
Background
Professional voice users often experience stigma associated with voice disorders and are reluctant to seek medical help. This study deployed empirical and computational tools to (1) quantify the experience of vocal stigma and help-seeking behaviors in performers; and (2) predict their modulations with peer influences in social networks....
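The abstract above mentions computational tools for predicting how help-seeking behaviours are modulated by peer influences in social networks, without detailing the model. As a purely illustrative sketch (not the authors' model), a minimal linear-threshold-style simulation over a hypothetical peer network could look like the following; all function names, parameters, and thresholds here are assumptions for the example.

```python
# Illustrative sketch only: a minimal peer-influence (linear-threshold) simulation
# of help-seeking spreading through a performer network. NOT the authors' model;
# all parameters and names are hypothetical.
import random
import networkx as nx

def simulate_help_seeking(n_performers=100, edge_prob=0.05,
                          initial_seekers=5, threshold=0.3, steps=10, seed=0):
    """Return the proportion of performers seeking help after peer influence."""
    random.seed(seed)
    graph = nx.erdos_renyi_graph(n_performers, edge_prob, seed=seed)
    seeking = set(random.sample(sorted(graph.nodes), initial_seekers))
    for _ in range(steps):
        newly_convinced = set()
        for node in graph.nodes:
            if node in seeking:
                continue
            peers = list(graph.neighbors(node))
            if not peers:
                continue
            # Adopt help-seeking once enough peers have already done so.
            if sum(p in seeking for p in peers) / len(peers) >= threshold:
                newly_convinced.add(node)
        if not newly_convinced:
            break
        seeking |= newly_convinced
    return len(seeking) / n_performers

if __name__ == "__main__":
    print(f"Proportion seeking help after simulation: {simulate_help_seeking():.2f}")
```

A threshold model is only one of several plausible choices for peer influence; the study itself may use a different formalism.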
Is social support communicated through the subtle, yet powerful, acoustic variations in speech? This study attempted to answer this question by testing whether acoustic parameters vary when expressing social support. Participants underwent an experiment in which they watched video testimonials of a woman describing either a neutral subject or a se...
In the extensive neuroimaging literature on empathy for pain, few studies have investigated how this phenomenon may relate to everyday social situations such as spoken interactions. The present study used fMRI to assess how complaints, as vocal expressions of pain, are empathically processed by listeners and how these empathic responses may vary ba...
In social interactions, it is increasingly common for at least one party to communicate in their second language (L2). Research using event-related potentials (ERPs) has shown that a foreign accent in speech modulates components related to prelexical processing and components associated with sentence integration and reanalysis in the listener....
When complaining, speakers can use their voice to convey a feeling of pain, even when describing innocuous events. Rapid detection of emotive and identity features of the voice may constrain how the semantic content of complaints is processed, as indexed by N400 and P600 effects evoked by the final, pain-related word. Twenty-six participants listen...
Cultural context shapes the way that emotions are expressed and socially interpreted. Building on previous research looking at cultural differences in judgements of facial expressions, we examined how listeners recognize speech-embedded emotional expressions and make inferences about a speaker’s feelings in relation to their vocal display. Canadian...
Emerging sociolinguistic studies show that when listeners rate utterances varying in prosodic impressions of politeness (e.g., level of sincerity, friendliness), foreign-accented speech tends to be assessed as carrying less emotive meaning than native speech. These perceptual findings suggest that hearing a foreign accent alters the listener’s abil...
Introduction
Parkinson’s Disease (PD) commonly affects cognition and communicative functions, including the ability to perceive socially meaningful cues from nonverbal behavior and spoken language (e.g., a speaker’s tone of voice). However, we know little about how people with PD use social information to make decisions in daily interactions (e.g.,...
During vocal emotion communication, it is important to continuously monitor, analyze, and update information from multiple sources (e.g., verbal and vocal channels) to build up social impressions and utterance representations. To examine the temporal dynamics and the underlying neurocognitive process associated with vocal emotional processing, elec...
Interpersonal communication often involves sharing our feelings with others; complaining, for example, aims to elicit empathy in listeners by vocally expressing a speaker's suffering. Despite the growing neuroscientific interest in the phenomenon of empathy, few have investigated how it is elicited in real time by vocal signals (prosody), and how t...
When we hear an emotional voice, does this alter how the brain perceives and evaluates a subsequent face? Here, we tested this question by comparing event-related potentials evoked by angry, sad, and happy faces following vocal expressions which varied in form (speech-embedded emotions, non-linguistic vocalizations) and emotional relationship (cong...
The current study explored the judgment of communicative appropriateness while processing a dialogue between two individuals. All stimuli were presented as audio-visual as well as audio-only vignettes and 24 young adults reported their social impression (appropriateness) of literal, blunt, sarcastic, and teasing statements. On average, teasing stat...
The perception of accented speech gives rise to a number of biases, but experimental evidence concerning their nature and the factors involved in these processes is scarce. The present study focused on French-speaking populations in Montreal, assessing implicit and explicit accent-based attitudes between Québécois groups (c...
Parkinson’s disease (PD) is a neurodegenerative illness that leads to motor difficulties, cognitive impairments and impairments in social communication abilities that impact negatively on the psychosocial well-being of those living with the disease. Communication difficulties can arise as secondary consequences of motor symptoms (e.g. slurring of s...
Information in the tone of voice alters social impressions of a speaker and underlying brain activity as listeners evaluate the interpersonal relevance of an utterance. Here, we presented basic requests that expressed politeness distinctions through the speaker’s voice (polite/rude) and the use of explicit linguistic markers (half of the requests b...
Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual’s cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced participants’ gazes at emotional faces in a modified visual search task. We...
Emotive speech is a social act in which a speaker displays emotional signals with a specific intention; in the case of third-party complaints, this intention is to elicit empathy in the listener. The present study assessed how the emotivity of complaints was perceived in various conditions. Participants listened to short statements describing painf...
In social interactions, speakers often use their tone of voice (“prosody”) to communicate their interpersonal stance to pragmatically mark an ironic intention (e.g., sarcasm). The neurocognitive effects of prosody as listeners process ironic statements in real time are still poorly understood. In this study, 30 participants judged the friendliness...
People often evaluate speakers with nonstandard accents as being less competent or trustworthy, which is often attributed to in-group favoritism. However, speakers can also modulate social impressions in the listener through their vocal expression (e.g., by speaking in a confident vs. a doubtful tone of voice). Here, we addressed how both accents a...
Speakers modulate their voice (prosody) to communicate non-literal meanings, such as sexual innuendo (She inspected his package this morning, where “package” could refer to a man’s penis). Here, we analyzed event-related potentials to illuminate how listeners use prosody to interpret sexual innuendo and what neurocognitive processes are involved. P...
The way that speakers communicate their stance towards the listener is often vital for understanding the interpersonal relevance of speech acts, such as basic requests. To establish how interpersonal dimensions of an utterance affect neurocognitive processing, we compared event-related potentials elicited by requests that linguistically varied in h...
To investigate the impact of culture on emotion processing, we conducted a study comparing intensity ratings of the external expression (the intensity level of the speaker’s vocal expression) and the speaker’s internal feeling (the intensity level that participants believe the speaker is experiencing). Specifically, a group of Canadian and Chinese participants categorize...
This study investigated how explicit cues of group-membership modulate Anglophone and Francophone Montrealers’ decisions to trust accented speakers. Accented recordings were paired with labels specifying the speakers’ native language or provenance (city of origin). In the native language condition, factors other than group-membership (e.g. general...
Social decision-making in everyday life involves attending to subtle social cues and using them to guide decisions in ambiguous situations, a process that depends on intact cognitive functioning. In the present study, we investigated the relationship between the perception of subtle social meanings conveyed through vocal-linguistic cues (the to...
Complaining is a form of emotive speech, in which a speaker attempts to gain empathy from their interlocutor by intentionally displaying increased affect in their voice. Except for a general sense of expressivity, little is known about the acoustic and perceptual attributes of complaints and how they relate to genuine emotions. To investigate on th...
Evidence suggests that observers can accurately perceive a speaker's static confidence level, related to their personality and social status, by only assessing their visual cues. However, less is known about the visual cues that speakers produce to signal their transient confidence level in the content of their speech. Moreover, it is unclear what...
In daily life, humans often tell lies to make another person feel better about themselves, or to be polite or socially appropriate in situations where telling the blunt truth would be perceived as inappropriate. Prosocial lies are a form of non-literal communication used cross-culturally, but how they are evaluated depends on socio-moral values, an...
Having a non-standard accent can be a disadvantage in social settings. Negative effects can be observed in domains such as professional opportunities, access to services, and interpersonal impressions. It has been proposed that humans' high sensitivity to detecting accents reflects an evolved mechanism that allows identifying group-membership and f...
The way we speak allows us to communicate subtle pragmatic distinctions and social meanings. We know that listeners are sensitive to this information and tend to form broader social impressions of individuals depending on the way they speak. However, the characteristics of the listeners (such as individual differences in social anxiety) can affec...
In spoken discourse, understanding irony requires the apprehension of subtle cues, such as the speaker’s tone of voice (prosody), which often reveal the speaker’s affective stance toward the listener in the context of the utterance. To shed light on the interplay of linguistic content and prosody on impressions of spoken criticisms and compliments...
New research is exploring ways that prosody fulfils different social-pragmatic functions in spoken language by revealing the mental or affective state of the speaker, thereby contributing to an understanding of speaker’s meaning. Prosody is often pivotal in signaling speaker attitudes or stance in the interpersonal context of the speaker-hearer; in...
The indirect nature of sarcasm renders it challenging to interpret: the actual speaker’s intent can only be retrieved when the incongruence between the content and pragmatic cues, such as context or tone of voice, is recognized. The cognitive processes underlying the interpretation of irony and sarcasm, in particular, the effects of contextual inco...
Until recently, research on im/politeness has primarily focused on the role of linguistic strategies while neglecting the contributions of prosody and acoustic cues for communicating politeness. Here, we analyzed a large set of recordings — verbal requests spoken in a direct manner (Lend me a nickel), preceded by the word “Please” or in a conventio...
Humans have an innate set of emotions recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty nati...
The importance of prosodic variations in social interaction contexts has been highlighted but their effects on the regulation of specific behaviors are rarely addressed. One of the most widely researched prosodic distinctions in psychology is emotional prosody. In perceptual studies, the capacity for identifying emotions through prosodic variations h...
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been p...
Although linguistic politeness has been studied and theorized about extensively, the role of prosody in the perception of im/polite attitudes has been somewhat neglected. In the present study, we used experimental methods to investigate the interaction of linguistic form, imposition and prosody in the perception of im/polite requests. A written tas...
Extending affective speech communication research in the context of authentic, spontaneous utterances, the present study investigates two signals of affect defined by extreme levels of physiological arousal—Passion and Indifference. Exemplars were mined from podcasts conducted in informal, unstructured contexts to examine communication at extreme l...
In social life, humans do not always communicate their sincere feelings, and speakers often tell ‘prosocial lies’ to prevent others from being hurt by negative truths. Data illuminating how a speaker's voice carries sincere or insincere attitudes in speech, and how social context shapes the expression and perception of (in)sincere utterances, are s...
Introduction: Recognizing emotions in others is a pivotal part of socioemotional functioning and plays a central role in social interactions. It has been shown that individuals suffering from Parkinson’s disease (PD) are less accurate at identifying basic emotions such as fear, sadness, and happiness; however, previous studies have predominantly as...
Our voice provides salient cues about how confident we sound, which promotes inferences about how believable we are. However, the neural mechanisms involved in these social inferences are largely unknown. Employing functional magnetic resonance imaging, we examined the brain networks and individual differences underlying the evaluation of speaker b...
Feeling of knowing (or expressed confidence) reflects a speaker's certainty or commitment to a statement and can be associated with one's trustworthiness or persuasiveness in social interaction. We investigated the perceptual-acoustic correlates of expressed confidence and doubt in spoken language, with a focus on both linguistic and vocal speech c...
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop t...
Introduction:
Patients with Parkinson's disease (PD) are perceived more negatively than their healthy peers, yet it remains unclear what factors contribute to this negative social perception.
Method:
Based on a cohort of 17 PD patients and 20 healthy controls, we assessed how naïve raters judge the emotion and emotional intensity displayed in dy...
Listeners often encounter conflicting verbal and vocal cues about the speaker's feeling of knowing; these "mixed messages" can reflect online shifts in one's mental state as they utter a statement, or serve different social-pragmatic goals of the speaker. Using a cross-splicing paradigm, we investigated how conflicting cues about a speaker's feelin...
Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce, but highly war...
Next to linguistic content, the human voice carries speaker identity information (e.g. female/male, young/old) and can also carry emotional information. Although various studies have started to specify the brain regions that underlie the different functions of human voice processing, few studies have aimed to specify the time course underlying t...
Indirect forms of speech, such as sarcasm, jocularity (joking), and ‘white lies’ told to spare another’s feelings, occur frequently in daily life and are a problem for many clinical populations. During social interactions, information about the literal or nonliteral meaning of a speaker unfolds simultaneously in several communication channels (e.g....
Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an...
Using a gating paradigm, this study investigated the nature of the in-group advantage in vocal emotion recognition by comparing 2 distinct cultures. Pseudoutterances conveying 4 basic emotions, expressed in English and Hindi, were presented to English and Hindi listeners. In addition to hearing full utterances, each stimulus was gated from its onse...
This study uses behavioral measurements (ratings of attractiveness and age), as well as event-related potentials, to test whether speech-induced feelings of disgust and happiness can cross-modally influence a person’s judgment of another person’s physical attractiveness. Furthermore, we investigated the type of information driving the effect; namel...
Previous fMRI studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogra...
Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is con...
Objective:
Our study assessed how nondemented patients with Parkinson's disease (PD) interpret the affective and mental states of others from spoken language (adopt a "theory of mind") in ecologically valid social contexts. A secondary goal was to examine the relationship between emotion processing, mentalizing, and executive functions in PD durin...
The beneficial effect of auditory cueing on gait performance in Parkinson's disease (PD) has been widely documented. Nevertheless, little is known about the neural underpinnings of this effect and the consequences of auditory cueing beyond improved gait kinematics. The therapy relies on processing the temporal regularity in an auditory signal to wh...
This study aims to investigate the perceptual-acoustic correlates of vocal confidence. Statements with different communicative functions (e.g., stating facts, making judgments) were spoken in confident, close-to-confident, unconfident and neutral voices. Statements with preceding linguistic cues (e.g. I'm positive, Most likely, Maybe, etc.) or no l...
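The study above examines perceptual-acoustic correlates of vocal confidence but the listing does not show the analysis pipeline. As a minimal, hedged sketch (not the authors' method), a few commonly examined acoustic measures such as mean fundamental frequency (F0), F0 variability, and mean intensity can be extracted as follows; the file name and the pyin frequency range are assumptions for the example.

```python
# Illustrative sketch only: extract a few acoustic measures often examined as
# correlates of vocal confidence (mean F0, F0 variability, mean RMS energy).
# File names and parameter values are hypothetical.
import numpy as np
import librosa

def basic_acoustic_profile(wav_path):
    y, sr = librosa.load(wav_path, sr=None)           # keep native sample rate
    f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0_voiced = f0[~np.isnan(f0)]                      # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    return {
        "mean_f0_hz": float(np.mean(f0_voiced)) if f0_voiced.size else float("nan"),
        "f0_sd_hz": float(np.std(f0_voiced)) if f0_voiced.size else float("nan"),
        "mean_rms": float(np.mean(rms)),
        "duration_s": len(y) / sr,
    }

# Example (hypothetical file): basic_acoustic_profile("statement_confident_01.wav")
```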
To understand how emotional prosody is processed in Mandarin Chinese and whether it differentiates from that of other languages, we conducted a perceptual-acoustic study on a set of Chinese vocal emotional stimuli and examined how they were perceived and acoustically characterized, in comparison with four other languages, English, Arabic, German, a...
The goal of the present research was to determine whether certain speaker intentions conveyed through prosody in an unfamiliar language can be accurately recognized. English and Cantonese utterances expressing sarcasm, sincerity, humorous irony, or neutrality through prosody were presented to English and Cantonese listeners unfamiliar with the othe...
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and K...
Background: In this study, we investigated the influence of two types of emotional auditory primes - vocalizations and pseudoutterances - on the ability to judge a subsequently presented emotional facial expression in an event-related potential (ERP) study using the facial-affect decision task. We hypothesized that accuracy would be greater for con...
The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the r...
Parkinson's disease (PD) has been related to impaired processing of emotional speech intonation (emotional prosody). One distinctive feature of idiopathic PD is motor symptom asymmetry, with striatal dysfunction being strongest in the hemisphere contralateral to the most affected body side. It is still unclear whether this asymmetry may affect voca...
To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These e...
Everyday communication involves processing nonverbal emotional cues from auditory and visual stimuli. To characterize whether emotional meanings are processed with category-specificity from speech prosody and facial expressions, we employed a cross-modal priming task (the Facial Affect Decision Task; Pell, 2005a) using emotional stimuli with the sa...
This study investigated cross-modal effects of emotional voice tone (prosody) on face processing during instructed visual search. Specifically, we evaluated whether emotional prosodic cues in speech have a rapid, mandatory influence on eye movements to an emotionally-related face, and whether these effects persist as semantic information unfolds. P...
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to f...
This study investigated whether the recognition of emotions from speech prosody occurs in a similar manner and has a similar time course when adults listen to their native language versus a foreign language. Native English listeners were presented emotionally-inflected pseudo-utterances produced in English or Hindi which had been gated to different...
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral...
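The auditory gating paradigm described above presents listeners with successively longer fragments of an utterance measured from its onset. As a purely illustrative sketch of that general logic (not the stimulus-preparation procedure used in the study), gated versions of a recording could be generated like this; the 250-ms increment and file names are assumptions.

```python
# Illustrative sketch only: build "gated" stimuli, each keeping an additional
# fixed increment of the utterance from its onset, plus the full utterance.
# Increment size and file names are hypothetical.
import soundfile as sf

def make_gates(input_wav, increment_ms=250):
    audio, sr = sf.read(input_wav)
    samples_per_gate = int(sr * increment_ms / 1000)
    gates = []
    end = samples_per_gate
    while end < len(audio):
        gates.append(audio[:end])
        end += samples_per_gate
    gates.append(audio)                     # final gate = full utterance
    for i, gate in enumerate(gates, start=1):
        sf.write(f"gate_{i:02d}.wav", gate, sr)
    return len(gates)

# Example (hypothetical file): make_gates("pseudoutterance_anger_01.wav")
```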
Patients with behavioural variant frontotemporal dementia demonstrate abnormalities in behaviour and social cognition, including deficits in emotion recognition. Recent studies suggest that the neuropeptide oxytocin is an important mediator of social behaviour, enhancing prosocial behaviours and some aspects of emotion recognition across species. T...
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness...
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli b...
This paper reviews the major findings and hypotheses to emerge in the literature concerned with speech prosody. Both production and perception of prosody are considered. Evidence from studies of patients with lateralized left or right hemisphere damage are presented, as well as relevant data from anatomical and functional imaging studies.
The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally relat...
Perception of emotion in voice is impaired following traumatic brain injury (TBI). This study examined whether an inability to concurrently process semantic information (the "what") and emotional prosody (the "how") of spoken speech contributes to impaired recognition of emotional prosody and whether impairment is ameliorated when little or no sema...
Parkinson's disease (PD) is linked to impairments for recognizing emotional expressions, although the extent and nature of these communication deficits are uncertain. Here, we compared how adults with and without PD recognize dynamic expressions of emotion in three channels, involving lexical-semantic, prosody, and/or facial cues (each channel was...
The present study examined the relative contributions of prosody and semantic context in the implicit processing of emotions from spoken language. In three separate tasks, we compared the degree to which happy and sad emotional prosody alone, emotional semantic context alone, and combined emotional prosody and semantic information would prime subse...
To determine the neural mechanisms involved in vocal emotion processing, the current study employed functional magnetic resonance imaging (fMRI) to investigate the neural structures engaged in processing acoustic cues to infer emotional meaning. Two critical acoustic cues – pitch and speech rate – were systematically manipulated and presented in a...
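The fMRI study above systematically manipulates two acoustic cues, pitch and speech rate. As a hedged illustration of how such stimulus variants can in principle be created (not the resynthesis procedure used in the study), standard pitch-shifting and time-stretching utilities can be applied to a recording; the shift sizes, rates, and file names below are assumptions.

```python
# Illustrative sketch only: create stimulus variants with shifted pitch and
# altered speech rate. Shift sizes, rates, and file names are hypothetical.
import librosa
import soundfile as sf

def make_variants(wav_path, semitone_shifts=(-2, 2), rates=(0.8, 1.2)):
    y, sr = librosa.load(wav_path, sr=None)
    for n_steps in semitone_shifts:
        shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
        sf.write(f"pitch_{n_steps:+d}_semitones.wav", shifted, sr)
    for rate in rates:
        stretched = librosa.effects.time_stretch(y, rate=rate)
        sf.write(f"rate_x{rate:.1f}.wav", stretched, sr)

# Example (hypothetical file): make_variants("neutral_sentence_01.wav")
```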