Article

The rise and fall of priming: How visual exposure shapes cortical representations of objects

Harvard University, Cambridge, Massachusetts, United States
Cerebral Cortex (Impact Factor: 8.31). 12/2005; 15(11):1655-65. DOI: 10.1093/cercor/bhi060
Source: PubMed

ABSTRACT: How does the amount of time for which we see an object influence the nature and content of its cortical representation? To address this question, we varied the duration of initial exposure to visual objects and then measured functional magnetic resonance imaging (fMRI) signal and behavioral performance during a subsequent repeated presentation of these objects. We report a novel 'rise-and-fall' pattern relating exposure duration and the corresponding magnitude of fMRI cortical signal. Compared with novel objects, repeated objects elicited maximal cortical response reduction when initially presented for 250 ms. Counter-intuitively, initially seeing an object for a longer duration significantly reduced the magnitude of this effect. This 'rise-and-fall' pattern was also evident for the corresponding behavioral priming. To account for these findings, we propose that the earlier interval of an exposure to a visual stimulus results in a fine-tuning of the cortical response, while additional exposure promotes selection of a subset of key features for continued representation. These two independent mechanisms complement each other in shaping object representations with experience.
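The abstract does not spell out how the repetition effect was quantified, so the sketch below is only a minimal illustration of one reasonable reading: the priming-related signal change expressed as the proportional drop in BOLD response for repeated relative to novel objects at each initial exposure duration. The function name, the list of durations (apart from the 250 ms peak mentioned above) and all numeric values are invented toy assumptions, not the paper's data or analysis.

# Minimal illustrative sketch (not the authors' analysis): express repetition
# priming as the proportional BOLD signal reduction for repeated vs. novel
# objects, computed separately for each initial exposure duration.
# All values below are invented toy numbers shaped only to mimic the reported
# 'rise-and-fall' pattern, with the maximal effect at 250 ms.

def repetition_suppression(novel: float, repeated: float) -> float:
    """Proportional signal reduction: (novel - repeated) / novel."""
    return (novel - repeated) / novel

# Hypothetical mean BOLD responses (arbitrary units), keyed by the initial
# exposure duration in ms. Only the 250 ms peak is taken from the abstract.
toy_responses = {
    40:   {"novel": 1.00, "repeated": 0.97},
    120:  {"novel": 1.00, "repeated": 0.91},
    250:  {"novel": 1.00, "repeated": 0.80},  # maximal reduction ('rise')
    500:  {"novel": 1.00, "repeated": 0.88},  # effect shrinks again ('fall')
    1000: {"novel": 1.00, "repeated": 0.93},
}

for duration_ms, r in toy_responses.items():
    effect = repetition_suppression(r["novel"], r["repeated"])
    print(f"{duration_ms:>5} ms initial exposure -> {effect:.0%} reduction")

Printed this way, the toy numbers simply reproduce the qualitative shape described in the abstract: the reduction grows up to about 250 ms of initial exposure and then shrinks again with longer exposure.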

  • ABSTRACT: Repetition can boost memory and perception. However, repeating the same stimulus several times in immediate succession also induces intriguing perceptual transformations and illusions. Here, we investigate the Speech to Song Transformation (S2ST), a massed repetition effect in the auditory modality, which crosses the boundaries between language and music. In the S2ST, a phrase repeated several times shifts to being heard as sung. To better understand this unique cross-domain transformation, we examined the perceptual determinants of the S2ST, in particular the role of acoustics. In two experiments, we examined the effects of two pitch properties and three rhythmic properties on the probability and speed of occurrence of the transformation. Results showed that both pitch and rhythmic properties are key features fostering the transformation. However, some properties proved to be more conducive to the S2ST than others. Stable tonal targets that allowed for the perception of a musical melody led to the S2ST more often and more quickly than scalar intervals. Recurring durational contrasts, arising from segmental grouping that favored a metrical interpretation of the stimulus, also facilitated the S2ST. This was, however, not the case for a regular beat structure within and across repetitions. In addition, individual perceptual abilities predicted the likelihood of the S2ST. Overall, the study demonstrated that repetition enables listeners to re-interpret specific prosodic features of spoken utterances in terms of musical structures. The findings underline a tight link between language and music, but they also reveal important differences in the communicative functions of prosodic structure in the two domains.
    Journal of Experimental Psychology: Human Perception & Performance, 05/2014; Advance Online Publication. DOI: 10.1037/a0036858 · 3.11 Impact Factor
  • ABSTRACT: Neural adaptation paradigms have been used in the electrophysiological and neuroimaging literature to characterise the neural populations underlying face and object perception. Nemrodov and Itier (2012) recently reported that adaptation of the N170 event-related potential (ERP) component is not stimulus-category-specific over rapid adapting stimulus durations (S1 durations) and interstimulus intervals (ISIs). We therefore tested the category-specificity of adaptation over a range of S1 durations and ISIs. Faces and chairs were presented at S1 (for 200, 500 or 1000 ms) and at S2 (for 200 ms), separated by a variable ISI (200 or 500 ms). Mean amplitudes of the P1, N170 and P2 visual ERP components were measured following the S1 and S2 stimuli. Faces at S1 led to smaller (i.e. more adapted) N170 amplitudes to both faces and chairs at S2 than did chairs at S1. N170s at S2 were smallest after a 500 ms S1 duration, but N170 amplitude did not vary with ISI. Effects were also seen for the two surrounding positive components, the P1 and P2. Presenting faces at S1 led to enhanced P1 amplitudes evoked by S2 chair stimuli. The P2 showed the smallest amplitudes following the shorter 200 ms ISI. These results indicate that adaptation of the N170 is not actually category-specific but instead depends on the S1 category (regardless of the S2 category), and may also be influenced by earlier effects at the P1 (i.e. effects not specific to the N170). This challenges the assumption that N170 category adaptation indexes effects on distinct neural populations that differ between faces and non-face objects. Copyright © 2015. Published by Elsevier B.V.
    International Journal of Psychophysiology (official journal of the International Organization of Psychophysiology), 03/2015; DOI: 10.1016/j.ijpsycho.2015.02.030 · 3.05 Impact Factor
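The ERP adaptation abstract above rests on mean amplitudes of the P1, N170 and P2 components. As a rough, self-contained illustration of that measure (not the study's actual pipeline), the sketch below averages a single-channel epoch's voltage within fixed post-stimulus latency windows; the sampling rate, the window boundaries and the random toy epoch are all assumptions made here for illustration, since the abstract does not give the electrodes or windows used.

import numpy as np

# Rough illustration of a mean-amplitude ERP measure: average the voltage of
# an epoch (time-locked to stimulus onset) within a fixed latency window.
# The sampling rate and component windows below are assumed for illustration.

SFREQ = 500  # assumed sampling rate, Hz

# Assumed latency windows (seconds after stimulus onset) for each component.
WINDOWS = {"P1": (0.08, 0.13), "N170": (0.13, 0.20), "P2": (0.20, 0.28)}

def mean_amplitude(epoch_uv, window, sfreq=SFREQ):
    """Mean voltage (microvolts) of a 1-D epoch within a latency window."""
    start = int(round(window[0] * sfreq))
    stop = int(round(window[1] * sfreq))
    return float(np.mean(epoch_uv[start:stop]))

# Toy 600 ms epoch of random noise, used only to show the call signature.
rng = np.random.default_rng(0)
epoch = rng.normal(scale=5.0, size=int(0.6 * SFREQ))

for component, window in WINDOWS.items():
    print(f"{component}: {mean_amplitude(epoch, window):+.2f} microvolts")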
