Sophie K. Scott's research while affiliated with University College London and other places
What is this page?
This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.
It was automatically generated by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (19)
What’s the point of public engagement? Why can’t we just be neuroscience researchers? In this Comment I will argue that communicating our science is a key aspect of being a neuroscientist and that our science can be enriched by this.
Functional near-infrared spectroscopy and behavioural methods were used to examine the neural basis of the behavioural contagion and authenticity of laughter. We demonstrate that the processing of laughter sounds recruits networks previously shown to be related to empathy and auditory-motor mirror networks. Additionally, we found that the differenc...
There are anatomical and functional links between auditory and somatosensory processing. We suggest that these links form the basis for the popular internet phenomenon where people enjoy a sense of touch from auditory (and often audiovisual) stimuli.
This study tested the idea that stuttering is caused by over-reliance on auditory feedback. The theory is motivated by the observation that many fluency-inducing situations, such as synchronised speech and masked speech, alter or obscure the talker’s feedback. Typical speakers show ‘speaking-induced suppression’ of neural activation in superior tem...
Human speech perception is a paradigm example of the complexity of human linguistic processing; however, speech is also the dominant way of expressing vocal identity and is critically important for social interactions. Here, I review the ways that the speech, the talker, and the social nature of speech interact and how this may be computed in the human...
In two experiments, we explore how speaker sex recognition is affected by vocal flexibility, introduced by volitional and spontaneous vocalizations. In Experiment 1, participants judged speaker sex from two spontaneous vocalizations, laughter and crying, and volitionally produced vowels. Striking effects of speaker sex emerged: For male vocalizatio...
Speech rhythm can be described as the temporal patterning by which sequences of vocalic and gestural actions unfold, within and between interlocutors. Despite efforts to quantify and model speech rhythm across languages, it remains a scientifically enigmatic aspect of prosody. For example, the existence and/or form of a basic speech rhythmic unit i...
The ability to discriminate between emotion in vocal signals is highly adaptive in social species. It may also be adaptive for domestic species to distinguish such signals in humans. Here we present a playback study investigating whether horses spontaneously respond in a functionally relevant way towards positive and negative emotion in human nonve...
Speech is a complex acoustic stimulus. According to the earliest observations of Wernicke, deficits in the perceptual processing of speech were associated with damage to the left superior temporal gyrus. Wernicke’s observations were acute, since the dorsolateral temporal lobes (including the superior temporal gyri) contain primary auditory cortex a...
Auditory verbal hallucinations (hearing voices) are typically associated with psychosis, but a minority of the general population also experience them frequently and without distress. Such 'non-clinical' experiences offer a rare and unique opportunity to study hallucinations apart from confounding clinical factors, thus allowing for the identificat...
Speech and language are considered to be uniquely human abilities. Animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, language must have emerged from neural mechanisms at least partially available in animals. In this chapter, we demonstrate ho...
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceiv...
When talkers speak in masking sounds, their speech undergoes a variety of acoustic and phonetic changes. These changes are known collectively as the Lombard effect. Most behavioural research and neuroimaging research in this area has concentrated on the effect of energetic maskers such as white noise on Lombard speech. Previous fMRI studies have ar...
Dialogues and collaborations between scientists and nonscientists are now widely understood as important elements of scientific research and public engagement with science. In recognition of this, the authors, a neuroscientist and a poet, use a dialogical approach to extend questions and ideas first shared during a lab-based poetry residency. They...
We argue that a comprehensive model of human vocal behaviour must address both voluntary and involuntary aspects of articulate speech and non-verbal vocalizations. Within this, plasticity of vocal output should be acknowledged and explained as part of the mature speech production system.
Good Foundations, Poor Access
Dyslexia makes reading and spelling difficult. Boets et al. (p. 1251) analyzed whether for adult readers with dyslexia the internal references for word sounds are poorly constructed or whether accessing those references is abnormally difficult. Brain imaging during phonetic discrimination tasks suggested that the inte...
Citations
... These data are coherent with similar results obtained from adjacent regions of the motor system, showing that temporal impairments of the motor system result in a variety of deficits in the understanding, recognition, and prediction of others' hand actions (Avenanti et al., 2013; Michael et al., 2014). In addition, a recent study comparing the brain activity following the perception of spontaneous and contagious vs. volitional noncontagious laughter showed that the latter, but not the former, activates the voluntary motor system (Billing et al., 2021). ...
... Existing reviews on ASMR are brief, and typically serve as broad overviews of the topic for medical practitioners or academic professionals who are unaware of its online popularity and pleasurable effects (see Niven & Scott, 2021; Reddy & Mohabatt, 2020). These are helpful for providing information that is concise and scrutable but fail to capture the depth of ASMR research. ...
... Language and social cognition are two fundamental abilities of the human species. They are deeply interrelated with each other in cognitive development (de Villiers, 2007; Richardson et al., 2020), daily communication (Scott, 2019), and evolution (Dunbar, 2004). At the brain level, overlaps of regions underlying language and social cognition have been found in the left ventral temporoparietal junction (vTPJ; consisting of the ventral portion of the angular gyrus and its adjacent temporal cortex) and lateral anterior temporal lobe (lATL) (Bzdok et al., 2016; Mar, 2011; Mellem et al., 2016). ...
... For example, when two spoken utterances are presented simultaneously, increasing the fundamental frequency difference (ΔF0) between the voices of the target and background talkers increases intelligibility of the target (Assmann & Summerfield, 1989, 1990; Brokx & Nooteboom, 1982). Because male and female voices typically have fairly large ΔF0s (Lavan et al., 2019; Poon & Ng, 2015; Whiteside, 1998), the ΔF0 can act as a strong cue for the segregation of male and female voices in a multitalker context, and contribute to a release from speech-on-speech masking when the target and background talkers are of different sex (Brungart, 2001). ...
... For example, horses, dogs, cats and goats react to the emotional facial expressions of humans [5][6][7][8][9]. Horses, dogs and cats also perceive human emotions in vocalizations [10][11][12][13]. Moreover, cross-modal experiments have shown that horses, dogs and cats can integrate visual and vocal stimuli of humans expressing anger and joy, indicating that these species have multimodal mental representations of these emotions (i.e., they have mental representations of human emotions that combine visual and vocal features [12][13][14][15]). ...
... understanding the meaning of words and sentences); and 3) expression (i.e. using speech, writing, or gesture to express ideas) (Coleman et al., 2007; Friederici, 2002; Price, 2012; Tzourio, Crivello, Mellet, Nkanga-Ngila, & Mazoyer, 1998). This canonical language network is anchored by bilateral nodes in the superior temporal gyrus (STG) and inferior frontal gyrus (IFG) (Demonet et al., 1992; Scott, 2000; Wise et al., 1991). However, the role of the STG, IFG, and other cortical regions in recovery of language function after severe TBI has not been studied longitudinally. ...
... a. Mediums and medium-like new age practitioners (interviewed at Yale University, Durham University, and King's College London). [7][8][9][10][11][12] Mediums experience themselves as talking with the dead, and with other immaterial beings, to receive information which is then communicated to other humans. b. ...
... Furthermore, TMS cannot adequately target subcortical structures, which are known to play an essential role in smile processing [17]. Second, most neuroimaging studies on the perception of emotion authenticity have employed static images of emotional expressions [18,19] or non-visual stimuli [9,20]. This begs the question of how the visual presentation of dynamic smiles influences the neural correlates of the emotion authenticity judgments reported in previous research. ...
... Additionally, artists create links between instruments that rarely interact with each other. While there is a body of research about art and neuroscience (Supper 2013, Wilkes and Scott 2016, King 2018), there are no precedents at the time of writing this article about an artistic project that brings together neuroscience and radio astronomy. ...
... The effects of background noise during continuous scanning could have implications for the interpretation of fMRI results, as one may observe neural processes related to this adaptation in addition to any effects of interest. For example, a small-scale fMRI study by Meekings et al. (2016) showed changes in the activity of the superior temporal gyrus (STG) related to speaking under different noise masking conditions. If discrepancies in patterns of STG activation are found between continuous fMRI paradigms and PET or sparsely sampled fMRI paradigms, it will be worth considering whether the differences are related to the engagement of the Lombard effect. ...