Article

Coding of Vocalizations by Single Neurons in Ventrolateral Prefrontal Cortex.

Dept. Neurobiology & Anatomy, Univ. of Rochester, Box 603, Rochester, NY 14642.
Hearing Research (Impact Factor: 2.85). 07/2013; DOI: 10.1016/j.heares.2013.07.011
Source: PubMed

ABSTRACT: Neuronal activity in single prefrontal neurons has been correlated with behavioral responses, rules, task variables, and stimulus features. In the non-human primate, neurons recorded in ventrolateral prefrontal cortex (VLPFC) respond to species-specific vocalizations, and previous studies have found multisensory neurons in this region that respond to simultaneously presented faces and vocalizations. Behavioral data suggest that face and vocal information are inextricably linked in animals and humans and therefore may also be tightly linked in the coding of communication calls by prefrontal neurons. In this study we therefore examined the role of VLPFC in encoding vocalization call-type information. Specifically, we analyzed previously recorded single-unit responses from the VLPFC of awake, behaving rhesus macaques to three types of species-specific vocalizations made by three individual callers. Analysis of responses by vocalization call type and caller identity showed that ~19% of cells had a main effect of call type, with fewer cells encoding caller. Classification performance of VLPFC neurons was ~42% averaged across the population. When assessed in discrete time bins, classification performance reached 70% for coos in the first 300 ms and remained above chance for the duration of the response period, though performance was lower for other call types. In light of the suboptimal classification performance of the majority of VLPFC neurons when only vocal information is present, and the recent evidence that most VLPFC neurons are multisensory, we discuss the potential enhancement of classification by accompanying face information and recommend additional studies. Behavioral and neuronal evidence has shown a considerable benefit in recognition and memory performance when faces and voices are presented simultaneously. In the natural environment, facial and vocal information are present simultaneously, and neural systems have presumably evolved to integrate multisensory stimuli during recognition.
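
The classification analysis summarized above can be illustrated with a toy decoding sketch. The Python example below is a minimal sketch under stated assumptions: the spike counts, firing rates, number of time bins, and the leave-one-out nearest-centroid decoder are all hypothetical stand-ins, not the study's actual data or classifier.

    # Toy leave-one-out nearest-centroid decoding of call type from
    # single-neuron spike counts per time bin. All data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated trials: 3 call types (0 = coo, 1 = grunt, 2 = scream),
    # 20 trials each, spike counts in 5 consecutive time bins.
    n_trials, n_bins, n_types = 60, 5, 3
    labels = np.repeat(np.arange(n_types), n_trials // n_types)
    rates = np.array([4.0, 6.0, 9.0])   # assumed type-specific mean counts
    counts = rng.poisson(rates[labels, None], size=(n_trials, n_bins))

    def loo_accuracy(counts, labels, n_types):
        """Leave-one-out nearest-centroid accuracy for each time bin."""
        acc = np.zeros(counts.shape[1])
        for b in range(counts.shape[1]):
            correct = 0
            for i in range(len(labels)):
                train = np.delete(np.arange(len(labels)), i)
                # Mean count per call type, excluding the held-out trial.
                centroids = np.array([counts[train][labels[train] == k, b].mean()
                                      for k in range(n_types)])
                pred = np.argmin(np.abs(centroids - counts[i, b]))
                correct += pred == labels[i]
            acc[b] = correct / len(labels)
        return acc

    print(loo_accuracy(counts, labels, n_types))   # chance is ~0.33

With three call types chance is ~33%, so sustained per-bin accuracies well above that, like the ~70% reported for coos, indicate call-type information in a neuron's firing.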

Related publications:
  • ABSTRACT: During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory, or in combining stored information across modalities, is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. In the nonmatch conditions, the auditory component (vocalization), the visual component (face), or both components changed, and the change had to be detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or vocalization regardless of the task period, others exhibited activity patterns typically related to working memory, such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is involved not only in the perceptual processing of faces and vocalizations but also in their mnemonic processing.
    The Journal of Neuroscience 01/2015; 35(3):960-971. DOI: 10.1523/JNEUROSCI.1328-14.2015 · 6.75 Impact Factor
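
The match enhancement/suppression described in the abstract above can be caricatured with a simple rate comparison. This sketch assumes simulated match/nonmatch firing rates and a conventional Welch t-test with an illustrative 0.05 threshold; it is not the paper's actual analysis pipeline.

    # Toy test for match enhancement/suppression in one neuron:
    # compare test-period firing rates on match vs. nonmatch trials.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    match_rates = rng.normal(12.0, 3.0, size=40)    # spikes/s, simulated
    nonmatch_rates = rng.normal(9.0, 3.0, size=40)  # spikes/s, simulated

    t, p = stats.ttest_ind(match_rates, nonmatch_rates, equal_var=False)
    if p < 0.05:
        kind = "enhancement" if match_rates.mean() > nonmatch_rates.mean() else "suppression"
        print(f"match {kind}: t = {t:.2f}, p = {p:.3g}")
    else:
        print("no significant match effect")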
  • ABSTRACT: Hippocampal CA1 and CA3 neurons sampled randomly in large numbers in the primate brain show conclusive examples of hierarchical encoding of task-specific information. Hierarchical encoding allows multi-task utilization of the same hippocampal neural networks via distributed firing between neurons that respond to subsets, attributes, or "categories" of stimulus features that can be applied to events in different contexts. In addition, such networks are uniquely adaptable to neural systems unrestricted by rigid synaptic architecture (i.e., columns, layers, or "patches"), which physically limits the number of possible task-specific interactions between neurons. Hierarchical encoding is also not random: it requires multiple exposures to the same types of relevant events to elevate synaptic connectivity between neurons encoding different stimulus features that occur in different task-dependent contexts. The large number of cells within associated hierarchical circuits in structures such as the hippocampus provides efficient processing of information relevant to common memory-dependent behavioral decisions under different contextual circumstances.
    Brain Research 12/2014; DOI: 10.1016/j.brainres.2014.12.037 · 2.83 Impact Factor
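
The exposure-dependent elevation of connectivity described in this abstract can be caricatured with a Hebbian weight update. Everything here (unit count, learning rate, which features co-occur) is a hypothetical toy, offered only to make the mechanism concrete.

    # Toy Hebbian model: repeated exposure to the same event strengthens
    # weights between the feature units that are co-active in that event.
    import numpy as np

    n_units, lr = 8, 0.1
    W = np.zeros((n_units, n_units))        # feature-to-feature weights
    event = np.zeros(n_units)
    event[[1, 4, 6]] = 1.0                  # features co-active in one event type

    for _ in range(20):                     # repeated exposures to the event
        W += lr * np.outer(event, event)    # Hebbian update
    np.fill_diagonal(W, 0.0)

    print(W[1, 4], W[1, 2])                 # strengthened pair vs. untouched pair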
  • ABSTRACT: Music consists of strings of sound that vary over time. Technical devices, such as tape recorders, store musical melodies by transcribing the event times of temporal sequences into consecutive locations on the storage medium; playback occurs by reading out the stored information in the same sequence. However, it is unclear how the brain stores and retrieves auditory sequences. Neurons in the anterior lateral belt of auditory cortex are sensitive to the combination of sound features in time, but the integration time of these neurons is not sufficient to store longer sequences that stretch over several seconds, minutes, or more. Functional imaging studies in humans provide evidence that music is stored instead within the auditory dorsal stream, including premotor and prefrontal areas. In monkeys, these areas are the substrate for the learning of motor sequences. It appears, therefore, that the auditory dorsal stream transforms musical sequence information into motor sequence information and vice versa, realizing what are known as forward and inverse models. The basal ganglia and the cerebellum are involved in setting up the sensorimotor associations, translating timing information into spatial codes and back again.
    Frontiers in Systems Neuroscience 08/2014; 8:149. DOI: 10.3389/fnsys.2014.00149
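
The forward/inverse-model idea in this abstract has a compact linear-algebra illustration. In this sketch a linear map from motor-sequence parameters to sound features stands in for the forward model, and its pseudo-inverse for the inverse model; the matrices and dimensions are invented for illustration and do not come from the paper.

    # Toy forward/inverse model pair: motor parameters -> sound features
    # (forward), and sound features -> motor parameters (inverse).
    import numpy as np

    rng = np.random.default_rng(2)
    F = rng.normal(size=(4, 3))           # forward model: 3 motor dims -> 4 sound dims

    motor = np.array([0.5, -1.0, 0.3])    # a stored motor-sequence parameterization
    sound = F @ motor                     # forward prediction of the sound features

    F_inv = np.linalg.pinv(F)             # inverse model via pseudo-inverse
    motor_hat = F_inv @ sound             # recover the motor program from the sound

    print(np.allclose(motor, motor_hat))  # True: the inverse undoes the forward map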