Publications

  • ABSTRACT: Previous research indicates that motion-sensitive brain regions are engaged when comprehending motion semantics expressed by words or sentences. Using fMRI, we investigated whether such neural modulation can occur when the linguistic signal itself is visually dynamic and motion semantics is expressed by movements of the hands. Deaf and hearing users of American Sign Language (ASL) were presented with signed sentences that conveyed motion semantics ("The deer walked along the hillside.") or were static, conveying little or no motion ("The deer slept along the hillside."); sentences were matched for the amount of visual motion. Motion-sensitive visual areas (MT+) were localized individually in each participant. As a control, the Fusiform Face Area (FFA) was also localized for the deaf participants. The whole-brain analysis revealed that static (locative) sentences engaged regions in left parietal cortex more than motion sentences, replicating previous results implicating these regions in comprehending spatial language for sign languages. Greater activation was observed in the functionally defined MT+ ROI for motion than static sentences for both deaf and hearing signers. No modulation of neural activity by sentence type was observed in the FFA. Deafness did not affect modulation of MT+ by motion semantics, but hearing signers exhibited stronger neural activity in MT+ for both sentence types, perhaps due to differences in exposure and/or use of ASL. We conclude that top-down modulation of motion-sensitive cortex by linguistic semantics is not disrupted by the visual motion that is present in sign language sentences.
    NeuroImage 06/2012; 63(1):111-8. · 6.25 Impact Factor
  • ABSTRACT: Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of the extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the 'uncanny valley' phenomenon.
    Social Cognitive and Affective Neuroscience 04/2011; 7(4):413-22. · 5.04 Impact Factor
  • Robert Leech, Ayse Pinar Saygin
    ABSTRACT: Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds in a substantial extent of temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.
    Brain and Language 02/2011; 116(2):83-90. · 3.39 Impact Factor
  • Ayse Pinar Saygin, Jennifer Cook, Sarah-Jayne Blakemore
    ABSTRACT: Perception of biological motion is linked to the action perception system in the human brain, abnormalities within which have been suggested to underlie impairments in social domains observed in autism spectrum conditions (ASC). However, the literature on biological motion perception in ASC is heterogeneous and it is unclear whether deficits are specific to biological motion, or might generalize to form-from-motion perception. We compared psychophysical thresholds for both biological and non-biological form-from-motion perception in adults with ASC and controls. Participants viewed point-light displays depicting a walking person (Biological Motion), a translating rectangle (Structured Object) or a translating unfamiliar shape (Unstructured Object). The figures were embedded in noise dots that moved similarly and the task was to determine direction of movement. The number of noise dots varied on each trial and perceptual thresholds were estimated adaptively. We found no evidence for an impairment in biological or non-biological object motion perception in individuals with ASC. Perceptual thresholds in the three conditions were almost identical between the ASC and control groups. Impairments in biological motion and non-biological form-from-motion perception are not across the board in ASC, and are only found for some stimuli and tasks. We discuss our results in relation to other findings in the literature, the heterogeneity of which likely relates to the different tasks performed. It appears that individuals with ASC are unaffected in perceptual processing of form-from-motion, but may exhibit impairments in higher order judgments such as emotion processing. It is important to identify more specifically which processes of motion perception are impacted in ASC before a link can be made between perceptual deficits and the higher-level features of the disorder.
    PLoS ONE 01/2010; 5(10):e13491. · 3.53 Impact Factor
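The adaptive threshold estimation described in the study above is commonly implemented as a staircase procedure. Below is a minimal sketch of a 2-down/1-up staircase run against a simulated observer; the logistic psychometric function, its parameters, and the step size are illustrative assumptions, not the study's actual procedure:

```python
import math
import random

def simulated_observer(noise_dots, threshold=60.0, slope=10.0):
    """Return True for a correct direction judgment. Accuracy falls from ~1.0
    toward chance (0.5) as noise dots increase -- an assumed logistic
    psychometric function, not data from the study."""
    p_correct = 0.5 + 0.5 / (1.0 + math.exp((noise_dots - threshold) / slope))
    return random.random() < p_correct

def run_staircase(n_trials=400, start=10, step=4, seed=1):
    """2-down/1-up rule: two consecutive correct responses add noise dots
    (harder); each error removes them (easier). The level converges near the
    70.7%-correct point of the psychometric function."""
    random.seed(seed)
    level, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if simulated_observer(level):
            streak += 1
            if streak == 2:
                streak = 0
                if last_dir == -1:          # direction change: log a reversal
                    reversals.append(level)
                level, last_dir = level + step, +1
        else:
            streak = 0
            if last_dir == +1:
                reversals.append(level)
            level, last_dir = max(0, level - step), -1
    # Threshold estimate: mean of the last eight reversal levels.
    tail = reversals[-8:]
    return sum(tail) / len(tail)
```

With these assumed parameters the estimate settles near the observer's 70.7%-correct level, a few dots above the assumed 60-dot midpoint.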
  • ABSTRACT: Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? How about if a motion verb was used figuratively, not literally? We used fMRI to investigate whether semantic content can "penetrate" and modulate neural populations that are selective to specific visual properties during natural language comprehension. Participants were presented audiovisually with three kinds of sentences: motion sentences ("The wild horse crossed the barren field."), static sentences, ("The black horse stood in the barren field."), and fictive motion sentences ("The hiking trail crossed the barren field."). Motion-sensitive visual areas (MT+) were localized individually in each participant as well as face-selective visual regions (fusiform face area; FFA). MT+ was activated significantly more for motion sentences than the other sentence types. Fictive motion sentences also activated MT+ more than the static sentences. Importantly, no modulation of neural responses was found in FFA. Our findings suggest that the neural substrates of linguistic semantics include early visual areas specifically related to the represented semantics and that figurative uses of motion verbs also engage these neural systems, but to a lesser extent. These data are consistent with a view of language comprehension as an embodied process, with neural substrates as far reaching as early sensory brain areas that are specifically related to the represented semantics.
    Journal of Cognitive Neuroscience 11/2009; 22(11):2480-90. · 4.49 Impact Factor
  • Ayse Pinar Saygin, Robert Leech, Frederic Dick
    ABSTRACT: We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.
    Neuropsychologia 09/2009; 48(1):107-13. · 3.48 Impact Factor
  • ABSTRACT: We compared psychophysical thresholds for biological and non-biological motion detection in adults with autism spectrum conditions (ASCs) and controls. Participants watched animations of a biological stimulus (a moving hand) or a non-biological stimulus (a falling tennis ball). The velocity profile of the movement was varied between 100% natural motion (minimum-jerk (MJ) for the hand; gravitational (G) for the ball) and 100% constant velocity (CV). Participants were asked to judge which animation was 'less natural' in a two-interval forced-choice paradigm and thresholds were estimated adaptively. There was a significant interaction between group and condition. Thresholds in the MJ condition were lower than in the G condition for the control (NC) group, whereas there was no difference between the thresholds in the two conditions for the ASC group. Thus, unlike the controls, the ASC group did not show an increased sensitivity for perturbation to biological over non-biological velocity profiles.
    Neuropsychologia 08/2009; 47(14):3275-8. · 3.48 Impact Factor
  • ABSTRACT: To examine how young children recognize the association between two different types of meaningful sounds and their visual referents, we compared 15-, 20-, and 25-month-old infants' looking time responses to familiar naturalistic environmental sounds (e.g., the sound of a dog barking) and their empirically matched verbal descriptions (e.g., "Dog barking") in an intermodal preferential looking paradigm. Across all three age groups, performance was indistinguishable over the two domains. Infants with the largest vocabularies were more accurate in processing the verbal phrases than the environmental sounds. However, after taking into account each child's verbal comprehension/production and the onomatopoetic test items, all cross-domain differences disappeared. Correlational analyses revealed that the processing of environmental sounds was tied to chronological age, while the processing of speech was linked to verbal proficiency. Overall, while infants' ability to recognize the two types of sounds did not differ behaviorally, the underlying processes may differ depending on the type of auditory input.
    Language Learning and Development 07/2009; 5(3):172-190.
  • Ayse Pinar Saygin, Jon Driver, Virginia R de Sa
    ABSTRACT: Observers judged whether a periodically moving visual display (point-light walker) had the same temporal frequency as a series of auditory beeps that in some cases coincided with the apparent footsteps of the walker. Performance in this multisensory judgment was consistently better for upright point-light walkers than for inverted point-light walkers or scrambled control stimuli, even though the temporal information was the same in the three types of stimuli. The advantage with upright walkers disappeared when the visual "footsteps" were not phase-locked with the auditory events (and instead offset by 50% of the gait cycle). This finding indicates there was some specificity to the naturally experienced multisensory relation, and that temporal perception was not simply better for upright walkers per se. These experiments indicate that the gestalt of visual stimuli can substantially affect multisensory judgments, even in the context of a temporal task (for which audition is often considered dominant). This effect appears to be constrained by the ecological validity of the particular pairings.
    Psychological Science 06/2008; 19(5):469-75. · 4.43 Impact Factor
  • ABSTRACT: To clarify how different the processing of verbal information is from the processing of meaningful non-verbal information, the present study characterized the developmental changes in neural responses to words and environmental sounds from pre-adolescence (7-9 years) through adolescence (12-14 years) to adulthood (18-25 years). Children's and adults' behavioral and electrophysiological responses (the N400 effect of event-related potentials) were compared during the processing of words and environmental sounds presented in semantically matching and mismatching picture contexts. Behavioral accuracy of picture-sound matching improved until adulthood, while reaction time measures leveled out by age 12. No major electrophysiological changes in the N400 effect were observed between pre-adolescence and adolescence. When compared to adults, children demonstrated significant maturational changes including longer latencies and larger amplitudes of the N400 effect. Interestingly, these developmental differences were driven by stimulus type: the Environmental Sound N400 effect decreased in latency from adolescence to adulthood, while no age effects were observed in response to Words. Thus, while the semantic processing of single words is well established by 7 years of age, the processing of environmental sounds continues to improve throughout development.
    Brain Research 06/2008; 1208:137-49. · 2.88 Impact Factor
  • Ayse Pinar Saygin, Martin I Sereno
    ABSTRACT: Novel mapping stimuli composed of biological motion figures were used to study the extent and layout of multiple retinotopic regions in the entire human brain and to examine the independent manipulation of retinotopic responses by visual stimuli and by attention. A number of areas exhibited retinotopic activations, including full or partial visual field representations in occipital cortex, the precuneus, motion-sensitive temporal cortex (extending into the superior temporal sulcus), the intraparietal sulcus, and the vicinity of the frontal eye fields in frontal cortex. Early visual areas showed mainly stimulus-driven retinotopy; parietal and frontal areas were driven primarily by attention; and lateral temporal regions could be driven by both. We found clear spatial specificity of attentional modulation not just in early visual areas but also in classical attentional control areas in parietal and frontal cortex. Indeed, strong spatiotopic activity in these areas could be evoked by directed attention alone. Conversely, motion-sensitive temporal regions, while exhibiting attentional modulation, also responded significantly when attention was directed away from the retinotopic stimuli.
    Cerebral Cortex 02/2008; 18(9):2158-68. · 8.31 Impact Factor
  • Ayse Pinar Saygin
    ABSTRACT: We tested biological motion perception in a large group of unilateral stroke patients (N = 60). Both right and left hemisphere lesioned patients were significantly impaired compared with age-matched controls. Voxel-based lesion analyses revealed that lesions in superior temporal and premotor frontal areas had the greatest effect on biological motion perception. Moreover, the effect in each region was independent, and not attributable to indirect effects of lesions in the other area. When we explored functional magnetic resonance imaging (fMRI) data collected from neurologically healthy controls in a separate experiment in relation to the lesion maps, we found that the two methods converged on their findings. We thus establish that superior temporal and premotor areas are not only involved in biological motion perception, but also have causal relationships to deficits in biological motion perception. While the precise functional roles of each region remain to be identified, this network has been implicated in the perception of action stimuli in many studies and as such patients' deficits may reflect an inability to effectively engage the action observation system.
    Brain 10/2007; 130(Pt 9):2452-61. · 9.92 Impact Factor
  • ABSTRACT: We assessed brain areas involved in speech production using a recently developed lesion-symptom mapping method (voxel-based lesion-symptom mapping, VLSM) in 50 aphasic patients with left-hemisphere lesions. Conversational speech was collected through a standardized biographical interview, and used to determine mean length of utterance in morphemes (MLU), type token ratio (TTR) and overall tokens spoken for each patient. These metrics were used as indicators of grammatical complexity, semantic variation, and amount of speech, respectively. VLSM analysis revealed that damage to the anterior insula was predictive of low scores on MLU and tokens, consistent with prior findings of the role of this region in speech production [Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384(6605), 159-161]. Additionally, the inferior frontal gyrus, sensorimotor and anterior temporal areas were also associated with lower scores on both of these measures. Overall, token and MLU maps were highly similar, suggesting an overlap between grammatical language networks and overall fluency. TTR maps also shared some portions of this network, but damage to posterior temporal regions also reduced scores on this measure. These results represent the first voxel-based lesion analysis of speech production performance in aphasic patients.
    Neuropsychologia 07/2007; 45(11):2525-33. · 3.48 Impact Factor
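At its core, VLSM asks, voxel by voxel, whether patients whose lesions include that voxel score lower than patients whose lesions spare it. The sketch below uses binary lesion maps and Welch's t statistic; the toy data layout and the `min_group` cutoff are illustrative assumptions, not the study's actual pipeline:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def vlsm(lesion_maps, scores, min_group=2):
    """lesion_maps: one binary list per patient (1 = voxel lesioned).
    scores: one behavioral score per patient (e.g., MLU).
    Returns a per-voxel t-map; positive t means patients lesioned at that
    voxel scored lower. None where either group is too small to compare."""
    n_vox = len(lesion_maps[0])
    t_map = []
    for v in range(n_vox):
        spared = [s for m, s in zip(lesion_maps, scores) if m[v] == 0]
        lesioned = [s for m, s in zip(lesion_maps, scores) if m[v] == 1]
        if len(spared) < min_group or len(lesioned) < min_group:
            t_map.append(None)
        else:
            t_map.append(welch_t(spared, lesioned))
    return t_map
```

In a real analysis the t-map spans hundreds of thousands of voxels and must be thresholded with a correction for multiple comparisons.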
  • ABSTRACT: We used functional magnetic resonance imaging (fMRI) in conjunction with a voxel-based approach to lesion symptom mapping to quantitatively evaluate the similarities and differences between brain areas involved in language and environmental sound comprehension. In general, we found that language and environmental sounds recruit highly overlapping cortical regions, with cross-domain differences being graded rather than absolute. Within language-based regions of interest, we found that in the left hemisphere, language and environmental sound stimuli evoked very similar volumes of activation, whereas in the right hemisphere, there was greater activation for environmental sound stimuli. Finally, lesion symptom maps of aphasic patients based on environmental sounds or linguistic deficits [Saygin, A. P., Dick, F., Wilson, S. W., Dronkers, N. F., & Bates, E. Shared neural resources for processing language and environmental sounds: Evidence from aphasia. Brain, 126, 928-945, 2003] were generally predictive of the extent of blood oxygenation level dependent fMRI activation across these regions for sounds and linguistic stimuli in young healthy subjects.
    Journal of Cognitive Neuroscience 06/2007; 19(5):799-816. · 4.49 Impact Factor
  • Donald J Hagler, Ayse Pinar Saygin, Martin I Sereno
    ABSTRACT: Cortical surface-based analysis of fMRI data has proven to be a useful method with several advantages over 3-dimensional volumetric analyses. Many of the statistical methods used in 3D analyses can be adapted for use with surface-based analyses. Operating within the framework of the FreeSurfer software package, we have implemented a surface-based version of the cluster size exclusion method used for multiple comparisons correction. Furthermore, we have developed a new method for generating regions of interest on the cortical surface using a sliding threshold of cluster exclusion followed by cluster growth. Cluster size limits for multiple probability thresholds were estimated using random field theory (RFT) and validated with Monte Carlo simulation. A prerequisite of RFT or cluster size simulation is an estimate of the smoothness of the data. In order to estimate the intrinsic smoothness of group analysis statistics, independent of true activations, we conducted a group analysis of simulated noise data sets. Because smoothing on a cortical surface mesh is typically implemented using an iterative method, rather than directly applying a Gaussian blurring kernel, it is also necessary to determine the width of the equivalent Gaussian blurring kernel as a function of smoothing steps. Iterative smoothing has previously been modeled as continuous heat diffusion, providing a theoretical basis for predicting the equivalent kernel width, but the predictions of the model were not empirically tested. We generated an empirical heat diffusion kernel width function by performing surface-based smoothing simulations and found a large disparity between the expected and actual kernel widths.
    NeuroImage 01/2007; 33(4):1093-103. · 6.25 Impact Factor
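The equivalent-kernel question above is easy to reproduce on a regular 1D grid: each neighbor-averaging step convolves the data with a small box kernel, so a delta input spreads into an approximately Gaussian profile whose width grows with the square root of the number of iterations. A sketch on a 1D grid (not a cortical surface mesh, so the exact constants are only illustrative):

```python
import math

def smooth_once(x):
    """One smoothing step: average each point with its two neighbors
    (edges clamped). This convolves with the kernel [1/3, 1/3, 1/3]."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def empirical_sigma(n_iters, n=201):
    """Smooth a delta function n_iters times and measure the standard
    deviation of the resulting profile -- its equivalent Gaussian sigma."""
    x = [0.0] * n
    x[n // 2] = 1.0
    for _ in range(n_iters):
        x = smooth_once(x)
    total = sum(x)
    mean = sum(i * v for i, v in enumerate(x)) / total
    var = sum((i - mean) ** 2 * v for i, v in enumerate(x)) / total
    return math.sqrt(var)
```

Each 3-point step adds 2/3 to the profile's variance, so after k steps sigma is about sqrt(2k/3) grid units (FWHM = 2·sqrt(2·ln 2)·sigma). On an irregular surface mesh the square-root growth persists but the constant depends on edge lengths, which is why an empirical kernel-width function is needed.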
  • ABSTRACT: Motion cues can be surprisingly powerful in defining objects and events. Specifically, a handful of point-lights attached to the joints of a human actor will evoke a vivid percept of action when the body is in motion. The perception of point-light biological motion activates posterior cortical areas of the brain. On the other hand, observation of others' actions is known to also evoke activity in motor and premotor areas in frontal cortex. In the present study, we investigated whether point-light biological motion animations would lead to activity in frontal cortex as well. We performed a human functional magnetic resonance imaging study on a high-field-strength magnet and used a number of methods to increase signal, as well as cortical surface-based analysis methods. Areas that responded selectively to point-light biological motion were found in lateral and inferior temporal cortex and in inferior frontal cortex. The robust responses we observed in frontal areas indicate that these stimuli can also recruit action observation networks, although they are very simplified and characterize actions by motion cues alone. The finding that even point-light animations evoke activity in frontal regions suggests that the motor system of the observer may be recruited to "fill in" these simplified displays.
    Journal of Neuroscience 08/2004; 24(27):6181-8. · 6.91 Impact Factor
  • ABSTRACT: The most parsimonious account of language evolution is one where incremental, quantitative changes in primates' vocal tract, fiber pathways, and neuroanatomy converge with social and cultural developments. From this convergence arises the framework upon which complex language skills could build. Such an 'Emergentist' view emphasizes phylogenetic continuity in the neural substrates that mediate language, with language processing embedded in systems with more ancient sensorimotor roots. (Alternatively, 'Mental Organ' theories, such as Chomsky's (1988), stress the discontinuity of language from all other mental/neural systems in humans and all other species.)
    Cortex 03/2004; 40(1):226-7. · 6.04 Impact Factor
  • ABSTRACT: Double dissociations play an important role in neuropsychology, but they are often identified through subjective estimates of "high" versus "low" performance, without considering the probability that such an outcome might have occurred by chance. To determine whether two measures "come apart" in an interesting way in brain-damaged patients, it is important to know the degree to which variance in one measure can be predicted by variance in the other. This study introduces a statistical procedure to determine the probability of a double dissociation when the correlation between measures is taken into account. Different quantitative definitions of dissociations were compared in two large samples of neurological patients, and applied to four pairs of measures (two for language, two for hemispatial neglect) with different degrees of intercorrelation (ranging from +.21 to +.84). If the correlation between measures is not taken into account, large numbers of dissociated cases may be missed, especially for measures that are highly correlated. There are also qualitative differences between methods in the identity of those individuals who meet each definition.
    Journal of Clinical and Experimental Neuropsychology 01/2004; 25(8):1128-53. · 2.16 Impact Factor
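The role of the correlation between measures can be demonstrated by simulation: draw pairs of scores from a bivariate normal distribution with correlation r and count how often a case looks impaired on one measure yet spared on the other by chance alone. The z-score cutoffs below are a simplified criterion for illustration, not the paper's actual procedure:

```python
import math
import random

def dissociation_rate(r, n=200_000, z_low=-1.5, z_high=0.0, seed=0):
    """Fraction of simulated cases impaired on measure A (z < z_low) but
    spared on measure B (z > z_high), with corr(A, B) = r."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        # Generate b with correlation r to a.
        b = r * a + math.sqrt(1.0 - r * r) * rng.gauss(0.0, 1.0)
        if a < z_low and b > z_high:
            count += 1
    return count / n
```

At r = .21 such chance 'dissociations' are far more common than at r = .84 (the extremes of the intercorrelations above), which is why a criterion that ignores the correlation miscounts dissociated cases.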
