Tepring Piquado

University of California, Irvine, Irvine, California, United States

Publications (4) · 11.95 Total Impact

  • ABSTRACT: The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech rate of 150 words per minute, and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable in age, years of formal education, and vocabulary. When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall.
    International Journal of Audiology 06/2012; 51(8):576-83. DOI:10.3109/14992027.2012.684403 · 1.84 Impact Factor
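Illustrative note: the abstract above does not describe the presentation software, but the self-paced manipulation amounts to letting the listener start each successive speech segment. The minimal Python sketch below is an invented illustration of that procedure (the file names, timing, and play_segment stub are placeholders, not the study's materials), logging the pause the listener takes before each segment.

```python
import time

def play_segment(audio_path):
    """Hypothetical stand-in for audio playback; the presentation software
    used in the study is not described in the abstract."""
    print(f"(playing {audio_path})")
    time.sleep(1.0)  # placeholder for the segment's duration

def run_self_paced_trial(segment_paths):
    """Present a narrative one segment at a time; the listener decides when
    the next segment starts, and each pause duration is logged."""
    pauses = []
    for path in segment_paths:
        play_segment(path)
        t0 = time.time()
        input("Press Enter to hear the next segment...")
        pauses.append(time.time() - t0)
    return pauses

if __name__ == "__main__":
    # Illustrative file names only, not the study's stimuli.
    pauses = run_self_paced_trial(["seg01.wav", "seg02.wav", "seg03.wav"])
    print("Pause durations (s):", [round(p, 2) for p in pauses])
```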
  • Kenneth I Vaden · Tepring Piquado · Gregory Hickok
    ABSTRACT: Many models of spoken word recognition posit that the acoustic stream is parsed into phoneme level units, which in turn activate larger representations [McClelland, J. L., & Elman, J. L. The TRACE model of speech perception. Cognitive Psychology, 18, 1-86, 1986], whereas others suggest that larger units of analysis are activated without the need for segmental mediation [Greenberg, S. A multitier theoretical framework for understanding spoken language. In S. Greenberg & W. A. Ainsworth (Eds.), Listening to speech: An auditory perspective (pp. 411-433). Mahwah, NJ: Erlbaum, 2005; Klatt, D. H. Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279-312, 1979; Massaro, D. W. Preperceptual images, processing time, and perceptual units in auditory perception. Psychological Review, 79, 124-145, 1972]. Identifying segmental effects in the brain's response to speech may speak to this question. For example, if such effects were localized to relatively early processing stages in auditory cortex, this would support a model of speech recognition in which segmental units are explicitly parsed out. In contrast, segmental processes that occur outside auditory cortex may indicate that alternative models should be considered. The current fMRI experiment manipulated the phonotactic frequency (PF) of words that were auditorily presented in short lists while participants performed a pseudoword detection task. PF is thought to modulate networks in which phoneme level units are represented. The present experiment identified activity in the left inferior frontal gyrus that was positively correlated with PF. No effects of PF were found in temporal lobe regions. We propose that the observed phonotactic effects during speech listening reflect the strength of the association between acoustic speech patterns and articulatory speech codes involving phoneme level units. On the basis of existing lesion evidence, we interpret the function of this auditory-motor association as playing a role primarily in production. These findings are consistent with the view that phoneme level units are not necessarily accessed during speech recognition.
    Journal of Cognitive Neuroscience 10/2011; 23(10):2665-74. DOI:10.1162/jocn.2011.21620 · 4.09 Impact Factor
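Illustrative note: the phonotactic frequency (PF) manipulation above indexes how typical a word's sound sequence is in the language. The study's own PF measure and materials are not given here; as a rough sketch only, the code below computes one common summary score, the mean positional phoneme probability, over a tiny invented corpus.

```python
from collections import defaultdict

def positional_frequencies(corpus):
    """Estimate how often each phoneme occurs at each within-word position
    across a phonemically transcribed corpus (a list of phoneme lists)."""
    counts = defaultdict(int)
    totals = defaultdict(int)
    for word in corpus:
        for pos, phoneme in enumerate(word):
            counts[(pos, phoneme)] += 1
            totals[pos] += 1
    return {key: counts[key] / totals[key[0]] for key in counts}

def phonotactic_frequency(word, pos_freqs):
    """One common summary score: the mean positional probability of a word's
    phonemes (higher values = more typical sound sequences)."""
    probs = [pos_freqs.get((pos, ph), 0.0) for pos, ph in enumerate(word)]
    return sum(probs) / len(probs)

# Tiny invented corpus of phoneme-transcribed words (illustrative only).
corpus = [["k", "ae", "t"], ["k", "ae", "p"], ["b", "ae", "t"], ["d", "aa", "g"]]
pos_freqs = positional_frequencies(corpus)
print(round(phonotactic_frequency(["k", "ae", "t"], pos_freqs), 3))  # -> 0.583
```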
  • Tepring Piquado · Katheryn A.Q. Cousins · Arthur Wingfield · Paul Miller
    ABSTRACT: Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect, we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data speak to the so-called "effortful hypothesis", whereby distorted input has a detrimental impact on prior information stored in short-term memory.
    Brain Research 09/2010; 1365:48-65. DOI:10.1016/j.brainres.2010.09.070 · 2.84 Impact Factor
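Illustrative note: to make the Linking-Buffer idea above concrete, the toy simulation below (not the authors' model; the buffer capacity, penalty factor, and recall rule are placeholder assumptions) encodes a word list through a small buffer. A masked word is stored weakly, weakens its links to the words already in the buffer and the trace of the word just before it, while later words are linked normally.

```python
import random

BUFFER_SIZE = 4      # assumed buffer capacity (placeholder, not from the paper)
MASK_PENALTY = 0.5   # assumed weakening factor for a masked item (placeholder)

def encode_list(words, masked_index=None):
    """Toy encoding pass: each word enters a limited-capacity buffer and is
    linked to the items currently held there. A masked word is stored weakly,
    weakens its links to words already in the buffer, and degrades the trace
    of the word just before it; later words are linked normally."""
    strength = {}   # word -> trace strength
    links = {}      # (earlier word, later word) -> link strength
    buffer = []
    for i, word in enumerate(words):
        masked = (i == masked_index)
        strength[word] = MASK_PENALTY if masked else 1.0
        for prior in buffer:
            links[(prior, word)] = MASK_PENALTY if masked else 1.0
        if masked and buffer:
            strength[buffer[-1]] *= MASK_PENALTY  # the immediately prior word suffers
        buffer.append(word)
        if len(buffer) > BUFFER_SIZE:
            buffer.pop(0)
    return strength, links

def recall(strength, threshold=0.6):
    """Recall a word if its noisy trace strength clears a threshold."""
    return [w for w, s in strength.items() if s * random.uniform(0.5, 1.5) > threshold]

words = ["lake", "chair", "storm", "apple", "nurse", "piano"]
strength, links = encode_list(words, masked_index=3)  # mask "apple"
print({w: round(s, 2) for w, s in strength.items()})
print("recalled:", recall(strength))
```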
  • Tepring Piquado · Derek Isaacowitz · Arthur Wingfield
    ABSTRACT: Two experiments examined the effectiveness of the pupillary response as a measure of cognitive load in younger and older adults. Experiment 1 measured the change in pupil size of younger and older adults while they listened to spoken digit lists that varied in length and retained them briefly for recall. In Experiment 2 changes in relative pupil size were measured while younger and older adults listened to sentences for later recall that varied in syntactic complexity and sentence length. Both age groups' pupil sizes were sensitive to the size of the memory set in Experiment 1 and sentence length in Experiment 2, with the older adults showing a larger effect of the memory load on a normalized measure of pupil size relative to the younger adults. By contrast, only the younger adults showed a difference in the pupillary response to a change in syntactic complexity, even with an adjustment for the reduced reactivity of the older pupil.
    Psychophysiology 05/2010; 47(3):560-9. DOI:10.1111/j.1469-8986.2009.00947.x · 3.18 Impact Factor
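Illustrative note: the exact normalization used in the pupillometry study is not spelled out in this abstract. The sketch below illustrates, under assumed window lengths, sampling rate, and pupil range, the generic approach of baseline-correcting the task-evoked dilation and expressing it as a fraction of an individual's pupil range, one way to adjust for the reduced reactivity of the older pupil.

```python
import numpy as np

def task_evoked_change(trace, baseline_window, response_window, sample_rate=60):
    """Baseline-corrected pupil dilation for one trial.
    trace: pupil diameter samples (mm); windows: (start_s, end_s) from trial onset."""
    def mean_in(window):
        start, end = (int(t * sample_rate) for t in window)
        return float(np.mean(trace[start:end]))
    return mean_in(response_window) - mean_in(baseline_window)

def normalized_change(change, pupil_range):
    """Express dilation as a fraction of an individual's pupil range, one way
    to adjust for the reduced reactivity of the older pupil."""
    return change / pupil_range

# Synthetic trace: 1 s of baseline (~3.0 mm), then 2 s dilated (~3.3 mm), at 60 Hz.
rng = np.random.default_rng(0)
trace = np.concatenate([np.full(60, 3.0), np.full(120, 3.3)]) + rng.normal(0, 0.01, 180)
delta = task_evoked_change(trace, baseline_window=(0, 1), response_window=(1, 3))
print(round(normalized_change(delta, pupil_range=2.0), 3))  # ~0.15
```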

Publication Stats

79 Citations
11.95 Total Impact Points


  • 2011
    • University of California, Irvine
      • Department of Cognitive Sciences
      Irvine, California, United States
  • 2010
    • Brandeis University
      • Department of Psychology
      Waltham, Massachusetts, United States