William J Talkington

West Virginia University, Morgantown, West Virginia, United States

Publications (6) · 25.71 Total Impact

  • William J Talkington · Jared P Taglialatela · James W Lewis
    ABSTRACT: Humans and several non-human primates possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). However, the use of speech and other broadly defined categories of behaviorally relevant natural sounds has led to many discrepancies regarding where voice-sensitivity occurs, and more generally regarding the identification of cortical networks, "proto-networks" or protolanguage networks, and pathways that may be sensitive or selective for certain aspects of vocalization processing. In this prospective review we examine different approaches for exploring vocal communication processing, including pathways that may be, or become, specialized for conspecific utterances. In particular, we address the use of naturally produced non-stereotypical vocalizations (mimicry of other animal calls) as another category of vocalization for use with human and non-human primate auditory systems. We focus this review on two main themes: progress and future ideas for studying vocalization processing in great apes (chimpanzees), and in very early stages of human development, including infants and fetuses. Advancing our understanding of the fundamental principles that govern the evolution and early development of cortical pathways for processing non-verbal communication utterances is expected to lead to better diagnoses and early intervention strategies for children with communication disorders, to improved rehabilitation of communication disorders resulting from brain injury, and to new strategies for intelligent hearing aid and implant design that can better enhance speech signals in noisy environments.
    Hearing Research 08/2013; 305(1). DOI:10.1016/j.heares.2013.08.009 · 2.97 Impact Factor
  • William J Talkington · Kristina M Rapuano · Laura A Hitt · Chris A Frum · James W Lewis
    ABSTRACT: Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STSs) putatively represent homologous voice-sensitive areas of cortex. However, superior temporal sulcus (STS) regions have recently been reported to represent auditory experience or "expertise" in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations (human-mimicked versions of animal vocalizations), we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. Our results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages before the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins.
    The Journal of Neuroscience 06/2012; 32(23):8084-93. DOI:10.1523/JNEUROSCI.1118-12.2012 · 6.34 Impact Factor
  • James W Lewis · William J Talkington · Katherine C Tallaksen · Chris A Frum
    ABSTRACT: Whether viewed or heard, an object in action can be segmented from a background scene based on a number of different sensory cues. In the visual system, salient low-level attributes of an image are processed along parallel hierarchies, and involve intermediate stages, such as the lateral occipital cortices, wherein gross-level object form features are extracted prior to stages that show object specificity (e.g. for faces, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, a distinct acoustic event or “auditory object” can also be readily extracted from a background acoustic scene. However, it remains unclear whether cortical processing strategies used by the auditory system similarly extract gross-level aspects of “acoustic object form” that may be inherent to many real-world sounds. Examining mechanical and environmental action sounds, representing two distinct categories of non-biological and non-vocalization sounds, we had participants assess the degree to which each sound was perceived as a distinct object versus an acoustic scene. Using two functional magnetic resonance imaging (fMRI) task paradigms, we revealed bilateral foci along the superior temporal gyri (STG) showing sensitivity to the “object-ness” ratings of action sounds, independent of the category of sound and independent of task demands. Moreover, for both categories of sounds these regions also showed parametric sensitivity to spectral structure variations—a measure of change in entropy in the acoustic signals over time (acoustic form)—while only the environmental sounds showed parametric sensitivity to mean entropy measures. Thus, similar to the visual system, the auditory system appears to include intermediate feature extraction stages that are sensitive to the acoustic form of action sounds, and may serve as a stage that begins to dissociate different categories of real-world auditory objects. (A minimal sketch of these entropy measures appears after the publication list.)
    Frontiers in Systems Neuroscience 05/2012; 6:27. DOI:10.3389/fnsys.2012.00027
  • ABSTRACT: Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.
    Human Brain Mapping 12/2011; 32(12):2241-55. DOI:10.1002/hbm.21185 · 5.97 Impact Factor
  • James W Lewis · William J Talkington · Aina Puce · Lauren R Engel · Chris Frum
    ABSTRACT: In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
    Journal of Cognitive Neuroscience 08/2011; 23(8):2079-101. DOI:10.1162/jocn.2010.21570 · 4.09 Impact Factor
  • ABSTRACT: The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound. (A minimal sketch of IRN construction appears after the publication list.)
    The Journal of Neuroscience 03/2009; 29(7):2283-96. DOI:10.1523/JNEUROSCI.4145-08.2009 · 6.34 Impact Factor
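
A note on the entropy measures referenced in the Frontiers in Systems Neuroscience (2012) entry above: the "mean entropy" and "spectral structure variation" attributes can be illustrated with a short, generic sketch. The Python code below (assuming NumPy and SciPy) computes the Shannon entropy of each short-time spectrum of a sound and summarizes it by its mean and by its variation over time. It illustrates the kind of measure described, not the authors' published analysis pipeline, and the window length is an arbitrary choice.

    import numpy as np
    from scipy.signal import stft

    def spectral_entropy_measures(signal, sr, nperseg=1024):
        """Return (mean, std) of short-time spectral entropy for a 1-D signal."""
        # Short-time Fourier transform: one spectrum per time window.
        _, _, Z = stft(signal, fs=sr, nperseg=nperseg)
        power = np.abs(Z) ** 2
        # Normalize each window's power spectrum to a probability distribution.
        p = power / (power.sum(axis=0, keepdims=True) + 1e-12)
        # Shannon entropy (bits) per window: high for noise-like spectra,
        # low for spectra dominated by a few frequency components.
        entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)
        # Mean entropy summarizes overall spectral structure; the standard
        # deviation over time is one simple proxy for "spectral structure
        # variation" (change in acoustic form across the sound).
        return entropy.mean(), entropy.std()

For example, a steady pure tone yields low, stable entropy, white noise yields high entropy, and an action sound whose spectrum evolves over time yields a larger entropy variation.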
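Similarly, for the Journal of Neuroscience (2009) entry, the iterated rippled noise (IRN) stimuli can be sketched with the classic delay-and-add ("add-same") recipe: broadband noise is repeatedly delayed and summed with itself, which imposes harmonic ripples at multiples of 1/delay Hz and thereby raises the harmonics-to-noise ratio (HNR) with each iteration. The delay, gain, and iteration count below are illustrative defaults, not the parameters used in the study, and the study's global HNR computation is not reproduced here.

    import numpy as np

    def iterated_rippled_noise(duration_s, sr, delay_s=0.004, iterations=8, gain=1.0):
        """Generate add-same IRN: more iterations yield stronger harmonic
        structure (higher HNR) at a pitch near 1/delay_s (~250 Hz here)."""
        out = np.random.randn(int(duration_s * sr))  # start from white noise
        d = int(delay_s * sr)                        # delay in samples
        for _ in range(iterations):
            delayed = np.zeros_like(out)
            delayed[d:] = out[:-d]                   # shift by the delay
            out = out + gain * delayed               # add it back in
        return out / np.abs(out).max()               # normalize peak amplitude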

Publication Stats

89 Citations
25.71 Total Impact Points


Institutions

  • 2012–2013
    • West Virginia University
      • Department of Neurobiology & Anatomy
      • Department of Physiology & Pharmacology
      Morgantown, West Virginia, United States
    • University of Virginia
      • Department of Neuroscience
      Charlottesville, Virginia, United States