Ruth Campbell

University College London · Deafness Cognition and Language Research Centre

BA, PhD

About

228 Publications · 84,812 Reads · 14,014 Citations
Additional affiliations
January 1996 – September 2008 · University College London · Professor
September 1981 – August 1984 · University College London · Research Associate
September 2017 – August 2020 · University of Bristol · Senior Researcher

Publications (228)
Chapter
Full-text available
Congenital deafness impacts on the cognitive development of the child in many and various ways. The aim of this chapter is to show how the life events and experiences of a deaf child launch her on a different developmental and neurocognitive trajectory from that of a typical hearing child. Congenital deafness can bring about some reconfiguration of...
Chapter
Full-text available
This article demonstrates that watching speech as well as listening to it is very much part of what a person does when looking at other people, and that observing someone talking can relate to other aspects of face processing. The perception of speaking faces can offer insights into how human communication skills develop, and the role of the face a...
Article
Full-text available
In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these...
Article
Simple negation in natural languages represents a complex interrelationship of syntax, prosody, semantics and pragmatics, and may be realised in various ways: lexically, morphologically and prosodically. In almost all spoken languages, the first two of these are the primary realisations of syntactic negation. In contrast, in many signed languages n...
Article
Full-text available
A key characteristic of children with autism concerns their understanding of other people and their intentions. Children with autism often have difficulty interpreting and producing facial expressions – of emotion, intention and communication. The sign languages of deaf people require fluent use of non-manual markers (NM...
Chapter
Sign languages of Deaf people communicate meanings. They go beyond the gestural systems that people use to communicate without speech, since they are fully fledged languages in their own right. But a neurosemiotics of language, including sign language (SL), asks: what are the cortical mechanisms that extract meanings from such gestural body actions?...
Chapter
Full-text available
The study of childhood deafness offers researchers many interesting insights into the role of experience and sensory inputs for the development of language and cognition. This volume provides a state of the art look at these questions and how they are being applied in the areas of clinical and educational settings. It also marks the career and cont...
Chapter
The study of deafness and sign language has provided a means of dissociating modality specificity from higher level abstract processes in the brain. Differentiating these is fundamental for establishing the relationship between sensorimotor representations and functional specialisation in the brain. Early deafness in humans provides a unique insigh...
Article
Full-text available
This study examined facial expressions produced during a British Sign Language (BSL) narrative task (Herman et al., International Journal of Language and Communication Disorders 49(3):343–353, 2014) by typically developing deaf children and deaf children with autism spectrum disorder. The children produced BSL versions of a video story in which two...
Article
Full-text available
In this study we followed Greek children with and without dyslexia for eighteen months, assessing them twice on a battery of phonological, reading and spelling tasks, aiming to document the relative progress achieved and to uncover any specific effects of dyslexia in the development of reading and spelling beyond the longitudinal associations among...
Article
Full-text available
We examined the manifestation of dyslexia in a cross-linguistic study contrasting English and Greek children with dyslexia compared to chronological age and reading-level control groups on reading accuracy and fluency, phonological awareness, short-term memory, rapid naming, orthographic choice, and spelling. Materials were carefully matched across...
Article
In contrast with two widely held and contradictory views – that sign languages of deaf people are “just gestures,” or that sign languages are “just like spoken languages” – the view from sign linguistics and developmental research in cognition presented by Goldin-Meadow & Brentari (G-M&B) indicates a more complex picture. We propose that neuroscien...
Article
Full-text available
One way in which we figure out how people are feeling is by looking at their faces. Being able to do this allows us to react in the right way in social situations. But, are young children good at recognizing facial expressions showing emotion? And how does this ability develop throughout childhood and the teenage years? Children are able to recogni...
Article
Full-text available
Background: Vocabulary knowledge and speechreading are important for deaf children's reading development but it is unknown whether they are independent predictors of reading ability. Aims: This study investigated the relationships between reading, speechreading and vocabulary in a large cohort of deaf and hearing children aged 5 to 14 years. Me...
Research
Full-text available
Inaugural lecture, Goldsmiths' College, 1992: a case of Optic Aphasia (AG), investigated jointly with Liliane Manning, and what it tells us about how naming images may be instantiated in the brain.
Article
Full-text available
Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescen...
Article
Full-text available
Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such...
Article
Full-text available
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ab...
Article
Full-text available
We investigated the spelling of derivational and inflectional suffixes by 10–13-year-old Greek children. Twenty children with dyslexia (DYS), 20 spelling-level-matched (SA) and 20 age-matched (CA) children spelled adjectives, nouns, and verbs in dictated word pairs and sentences. Children spelled nouns and verbs more accurately than adjectives and...
Article
Full-text available
Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies it has been proposed that the integrative functions of auditory cortex...
Data
Full-text available
These guidance notes are based on input from the Speechreading science forum (London, November 25th 2009) and the Expert Speechreaders' forum (London, December 8th, 2009), organised by Laraine Callow (Deafworks) and Ruth Campbell (UCL). The evidence-base for the opinions expressed here can be found in an accompanying report, 'Speechreading for informati...
Article
Full-text available
Purpose: In this article, the authors describe the development of a new instrument, the Test of Child Speechreading (ToCS), which was specifically designed for use with deaf and hearing children. Speechreading is a skill that is required for deaf children to access the language of the hearing community. ToCS is a deaf-friendly, computer-based test t...
Chapter
Full-text available
Introduction: This volume confirms that the ability to extract linguistic information from viewing the talker’s face is a fundamental aspect of speech processing. In this chapter we explore the cortical substrates that support these processes. There are a number of reasons why the identification of these visual speech circuits in the brain is import...
Article
In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a very clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, while face-re...
Article
Nineteen English nine-year-olds from a single mixed-ability school class were given two reading-related tasks: a task which required spelling nonsense words to dictation (phonic spelling) and a written sentence classification task. Visual laterality (unilateral letter naming) was also tested. Significant intercorrelations emerged between performanc...
Article
Full-text available
In recent years there has been a growing interest in the role of attention in the processing of social stimuli in individuals with autism spectrum disorders (ASD). Research has demonstrated that, for typical adults, faces have a special status in attention and are processed in an automatic and mandatory fashion even when participants attempt to ign...
Article
Full-text available
Preferential attention to biological motion can be seen in typically developing infants in the first few days of life and is thought to be an important precursor in the development of social communication. We examined whether children with autism spectrum disorder (ASD) aged 3-7 years preferentially attend to point-light displays depicting biologic...
Article
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read i...
Article
Recent findings suggest that children with autism may be impaired in the perception of biological motion from moving point-light displays. Some children with autism also have abnormally high motion coherence thresholds. In the current study we tested a group of children with autism and a group of typically developing children aged 5 to 12 years of...
Article
Full-text available
Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowled...
Article
Full-text available
Recent findings suggest that children with autism may be impaired in the perception of biological motion from moving point-light displays. There have also been reports that some children with autism have abnormally high motion coherence thresholds. In the current study we tested a group of children with autism and a group of typically developing ch...
Article
It has been suggested that the locus of selective attention (early vs. late in processing) is dependent on the perceptual load of the task. When perceptual load is low, irrelevant distractors are processed (late selection), whereas when perceptual load is high, distractor interference disappears (early selection). Attentional abnormalities have lon...
Conference Paper
Full-text available
This paper describes the development of a new Test of Child Speechreading (ToCS) that was specifically designed to be suitable for use with deaf children. Speechreading is a skill which is required for deaf children to access the language of the hearing community. ToCS is a child-friendly, computer-based, speechreading test that measures speechread...
Conference Paper
Full-text available
Background: Children with autism tend to look less at others’ faces (Klin, Jones et al., 2002; Dawson et al., 2004, 2005) and show deficits on a range of face processing tasks compared to their peers (Schultz, 2005). Such impairments might have specific consequences for deaf children with autism who use sign language, as the face plays an important...
Article
Full-text available
In a single study, silent speechreading and signed language processing were investigated using fMRI. Deaf native signers of British sign language (BSL) who were also proficient speechreaders of English were the focus of the research. Separate analyses contrasted different aspects of the data. In the first place, we found that the left superior temp...
Article
How does speechreading (lipreading) work at the cognitive and neurobiological level? In this review I summarise work over the last fifteen years that has used everyday speechreading in hearing participants to illuminate issues of cerebral localisation and cognitive function. The implications of this work for deepening our understanding of speechrea...
Article
Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies indicate th...
Article
Full-text available
Linguists have suggested that non-manual and manual markers are used in sign languages to indicate prosodic and syntactic boundaries. However, little is known about how native signers interpret non-manual and manual cues with respect to sentence boundaries. Six native signers of British Sign Language (BSL) were asked to mark sentence boundaries in...
Article
Full-text available
Spoken languages use one set of articulators (the vocal tract), whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a...
Conference Paper
Full-text available
Background: A number of studies have reported that individuals with autism show reduced orienting to faces, voices and other social cues. Recent studies have reported data suggesting that 2 year old children with autism viewing a split screen, with an upright point-light display of biological motion on one side and an inverted point-light display o...
Article
Full-text available
This fMRI study explored the functional neural organisation of seen speech in congenitally deaf native signers and hearing non-signers. Both groups showed extensive activation in perisylvian regions for speechreading words compared to viewing the model at rest. In contrast to earlier findings, activation in left middle and posterior portions of sup...
Article
A commonly used test of non-verbal memory, the Recognition Memory for Faces (RMF) test (Warrington, 1984), measures recognition of unfamiliar face pictures. The task has been widely used in adults in relation to neurological impairment of face recognition. We examined the relationship of RMF scores to age in 500 young people...
Chapter
Introduction · No Sound · Cued Speech: the Brussels Experience · Newport's Hypothesis: When Cognition Outstrips Language Development · Second Language Learning, Speechreading and Literacy · Gesture, Sign and Speech in Development · Some Things the People with Hearing Impairment (may) do Better · Sign Neuropsychology and Space in Sign · Coming Round · Conclusion · Acknowledgeme...
Article
Full-text available
How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing espec...
Article
Full-text available
Many users of signed languages also have access to a spoken language. They are bilingual in two modalities: spoken language and signed language. Here we consider some fMRI findings relevant to bimodal bilingualism. We explored comprehension of signs and of seen spoken words in bimodal bilinguals – native signers of British Sign Language (BSL) w...
Article
Full-text available
In this selective review, I outline a number of ways in which seeing the talker affects auditory perception of speech, including, but not confined to, the McGurk effect. To date, studies suggest that all linguistic levels are susceptible to visual influence, and that two main modes of processing can be described: a complementary mode, whereby visio...
Article
http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0017.1991.tb00180.x/abstract
Article
Full-text available
In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native...
Article
Full-text available
We hypothesized that women with Turner syndrome (45,X) with a single X-chromosome inherited from their mother may show mentalizing deficits compared to women of normal karyotype with two X-chromosomes (46,XX). Simple geometrical animation events (two triangles moving with apparent intention in relation to each other) which usually elicit mental-stat...
Article
Ruth Campbell is an experimental psychologist with long-standing interests in the psychological and neural bases of visual speech. She completed a Ph.D. with Max Coltheart at Birkbeck College in London in 1979 on hemispheric asymmetries in processing faces and then took research and faculty posts in Canada (University of Toronto at Mississauga), Lo...
Article
Full-text available
Reading and speechreading are both visual skills based on speech and language processing. Here we explore individual differences in speechreading in profoundly prelingually deaf adults, hearing adults with a history of dyslexia, and hearing adults with no history of a literacy disorder. Speechreading skill distinguished the three groups: the deaf g...
Article
Full-text available
Children with autistic spectrum disorder and controls performed tasks of coherent motion and form detection, and motor control. Additionally, the ratio of the 2nd and 4th digits of these children, which is thought to be an indicator of foetal testosterone, was measured. Children in the experimental group were impaired at tasks of motor control, and...
Article
Full-text available
Aspects of face processing, on the one hand, and theory of mind (ToM) tasks, on the other hand, show specific impairment in autism. We aimed to discover whether a correlation between tasks tapping these abilities was evident in typically developing children at two developmental stages. One hundred fifty-four normal children (6-8 years and 16-18 yea...
Article
Studies of spoken and written language suggest that the perception of sentences engages the left anterior and posterior temporal cortex and the left inferior frontal gyrus to a greater extent than non-sententially structured material, such as word lists. This study sought to determine whether the same is true when the language is gestural and perce...
Article
Full-text available
One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for...
Article
Full-text available
Recent evidence has indicated that some children with autistic spectrum disorder (ASD) show reduced ability to detect visual motion. The data suggest that this impairment is present in children with a range of autistic spectrum diagnoses, but not present in all children diagnosed with ASD. The occurrence of abnormal motion perception in children wi...
Article
Full-text available
Individual speechreading abilities have been linked with a range of cognitive and language-processing factors. The role of specifically visual abilities in relation to the processing of visible speech is less studied. Here we report that the detection of coherent visible motion in random-dot kinematogram displays is related to speechreading skill i...