Article

Abstract

Does memory prepare us to act? Long-term memory can facilitate signal detection, though the degree of benefit varies and can even be absent. To dissociate between learning and the behavioral expression of learning, we used high-density electroencephalography to assess memory retrieval and response processing. At learning, participants heard everyday sounds. Half of these sounds were paired with an above-threshold lateralized tone, such that it was possible to form incidental associations between the sound and the location of the tone. Importantly, attention was directed to either the sound (Experiment 1) or the tone (Experiment 2). Participants then completed a novel detection task that separated cued retrieval from response processing. At retrieval, we observed a striking brain-behavior dissociation. Learning was observed neurally in both experiments. Behaviorally, however, signal detection was only facilitated in Experiment 2, for which there was an accompanying explicit memory for tone presence. Further, implicit neural memory for tone location correlated with the degree of response preparation, but not response execution. Together, the findings suggest 1) that attention at learning affects memory-biased action and 2) that memory prepares action via both explicit and implicit associative memory, with the latter triggering response preparation.


... The primary distinction between previous studies and the current one is our examination of cue-related activity in isolation. While earlier research has focused on perceptual processing [21] and response-related processing [29-32], the current study centers on cue-related processing to better understand how memory retrieval contributes to perceptual benefits in reaction time (RT). Additionally, a key distinction of our study is the focus on source-localized effects rather than traditional event-related potentials. ...
... We hypothesized that response time to detect the target tone would be faster for predictive than nonpredictive trials, in line with previous work that has demonstrated that auditory memory can enhance perception [29,33]. To understand how these perceptual benefits manifest, we focused our analyses on changes in theta (4-7 Hz) and alpha (8-12 Hz) power and source-localized these differences using a multiple-source beamformer to complement and extend neurocognitive models derived from the functional magnetic resonance imaging (fMRI) literature in the field. Specifically, we predicted that changes in theta and alpha power would distinguish between predictive and nonpredictive conditions, indexing associative retrieval. ...
... Sixty-nine participants took part in one of two nearly identical memory-guided electroencephalography (EEG) studies conducted by our group [29]. We pooled data across these studies to increase our statistical power for the present source analyses. One participant did not have EEG data, and four participants were excluded due to poor-quality EEG recordings, resulting in a total of 64 participants (mean age = 22.8 years, SD = 4.0; 41 female and 23 male). ...
Article
Full-text available
How does memory influence auditory perception, and what are the underlying mechanisms that drive these interactions? Most empirical studies on the neural correlates of memory‐guided perception have used static visual tasks, resulting in a bias in the literature that contrasts with recent research highlighting the dynamic nature of memory retrieval. Here, we used electroencephalography to track the retrieval of auditory associative memories in a cue–target paradigm. Participants (N = 64) listened to real‐world soundscapes that were either predictive of an upcoming target tone or nonpredictive. Three key results emerged. First, targets were detected faster when embedded in predictive than in nonpredictive soundscapes (memory‐guided perceptual benefit). Second, changes in theta and alpha power differentiated soundscape contexts that were predictive from nonpredictive contexts at two distinct temporal intervals from soundscape onset (early—950 ms peak for theta and alpha, and late—1650 ms peak for alpha only). Third, early theta activity in the left anterior temporal lobe was correlated with memory‐guided perceptual benefits. Together, these findings underscore the role of distinct neural processes at different time points during associative retrieval. By emphasizing temporal sensitivity and by isolating cue‐related activity, we reveal a two‐stage retrieval mechanism that advances our understanding of how memory influences auditory perception.
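The abstract above centers on theta (4-7 Hz) and alpha (8-12 Hz) power differences between conditions. As a minimal sketch of how band-limited power can be estimated from a single EEG channel, here is a band-pass-plus-Hilbert approach; note this is an illustration only, not the study's actual pipeline (which used a multiple-source beamformer), and the `band_power` helper and the simulated signal are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(eeg, fs, low, high, order=4):
    """Instantaneous power in a frequency band: zero-phase band-pass
    filter followed by the Hilbert envelope, squared."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)        # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))  # analytic amplitude
    return envelope ** 2                  # power over time

# Simulated single-channel trial: 2 s at 250 Hz with a 6 Hz (theta) component
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_power(signal, fs, 4, 7)   # theta band captures the 6 Hz component
alpha = band_power(signal, fs, 8, 12)  # alpha band contains only noise here
print(theta.mean() > alpha.mean())
```

Averaging such power traces over trials per condition, and contrasting predictive against nonpredictive soundscapes within time windows, is the general shape of the analysis the abstract describes.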
... We hypothesized that response time to detect the target tone would be faster for predictive than nonpredictive trials, in line with previous work that has demonstrated that auditory memory can enhance perception [29,33]. To understand how these perceptual benefits manifest, we focused our analyses on changes in theta (4-7 Hz) and alpha (8-12 Hz) power and source-localized these differences using a multiple-source beamformer to complement and extend neurocognitive models derived from the fMRI literature in the field. Specifically, we predicted that changes in theta and alpha power would distinguish between predictive and nonpredictive conditions, indexing associative retrieval. ...
... Sixty-nine participants took part in one of two nearly identical memory-guided EEG studies conducted by our group [29,36]. We pooled data across these studies to increase our statistical power for the present source analyses. One participant did not have EEG data, and four participants were excluded due to poor-quality EEG recordings, resulting in a total of 64 participants (mean age = 22.8 years, SD = 4.0; 41 female and 23 male). ...
Preprint
Full-text available
How does memory influence auditory perception, and what are the underlying mechanisms that drive these interactions? Most empirical investigations on the neural correlates of memory-guided perception have used static visual tasks, creating a bias in the literature that contrasts with recent research highlighting the dynamic nature of memory retrieval. Here, we used electroencephalography (EEG) to track the retrieval of auditory associative memories in a cue-target paradigm. Participants (N = 64) listened to real-world soundscapes (cue) that were either predictive of an upcoming target tone or nonpredictive. Three key results emerged: First, targets were detected faster when embedded in predictive than in nonpredictive soundscapes (memory-guided perceptual benefit). Second, changes in theta and alpha power differentiated soundscape contexts that were predictive from nonpredictive contexts at two distinct temporal intervals from cue onset (early - 950 ms peak for theta and alpha, and late - 1650 ms peak for alpha only). Third, early theta activity in the left anterior temporal lobe was correlated with memory-guided perceptual benefits. Together, these findings underscore the role of distinct neural processes at different time points during cued retrieval. By emphasizing temporal sensitivity and isolating cue-related activity, we reveal a two-stage retrieval mechanism that advances our understanding of how memory influences auditory perception.
Article
Full-text available
The role of attention in implicit sequence learning was investigated in 3 experiments in which participants were presented with a serial reaction time (SRT) task under single- or dual-task conditions. Unlike previous studies using this paradigm, these experiments included only probabilistic sequences of locations and arranged a counting task performed on the same stimulus on which the SRT task was being carried out. Another sequential contingency was also arranged between the dimension to be counted and the location of the next stimulus. Results indicate that the division of attention barely affected learning but that selective attention to the predictive dimensions was necessary to learn about the relation between these dimensions and the predicted one. These results are consistent with a theory of implicit sequence learning that considers this learning as the result of an automatic associative process running independently of attentional load, but that would associate only those events that are held simultaneously in working memory.
Article
Full-text available
Developmental amnesia (DA) is associated with early hippocampal damage and subsequent episodic amnesia emerging in childhood alongside age-appropriate development of semantic knowledge. We employed fMRI to assess whether patients with DA show evidence of 'cortical reinstatement', a neural correlate of episodic memory, despite their amnesia. At study, 23 participants (5 patients) were presented with words overlaid on a scene or a scrambled image for later recognition. Scene reinstatement was indexed by scene memory effects (greater activity for previously presented words paired with a scene rather than scrambled images) that overlapped with scene perception effects. Patients with DA demonstrated scene reinstatement effects in the parahippocampal and retrosplenial cortex that were equivalent to those shown by healthy controls. Behaviourally, however, patients with DA showed markedly impaired scene memory. The data indicate that reinstatement can occur despite hippocampal damage, but that cortical reinstatement is insufficient to support accurate memory performance. Furthermore, scene reinstatement effects were diminished during a retrieval task in which scene information was not relevant for accurate responding, indicating that strategic mnemonic processes operate normally in DA. The data suggest that cortical reinstatement of trial-specific contextual information is decoupled from the experience of recollection in the presence of severe hippocampal atrophy.
Article
Full-text available
In this work, we relied on electrophysiological methods to characterize the processing stages that are affected by the presence of regularity in a visual search task. EEG was recorded for 72 participants while they completed a visual search task. Depending on the group, the task contained a consistent-mapping condition, a random-mapping condition, or both consistent and random conditions intermixed (mixed group). Contrary to previous findings, the control groups allowed us to demonstrate that the contextual cueing effect that was observed in the mixed group resulted from interference, not facilitation, to the target selection, response selection, and response execution processes (N2-posterior-contralateral, stimulus-locked lateralized readiness potential [LRP], and response-locked LRP components). When the regularity was highly valid (consistent-only group), the presence of regularity drove performance beyond general practice effects, through facilitation in target selection and response selection (N2-posterior-contralateral and stimulus-locked LRP components). Overall, we identified two distinct effects created by the presence of regularity: a global effect of validity that dictates the degree to which all information is taken into account and a local effect of activating the information on every trial. We conclude that, when considering the influence of regularity on behavior, it is vital to assess how the overall reliability of the incoming information is affected.
Article
Full-text available
To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.
Article
Full-text available
Auditory long-term memory has been shown to facilitate signal detection. However, the nature and timing of the cognitive processes supporting such benefits remain equivocal. We measured neuroelectric brain activity while young adults were presented with a contextual memory cue designed to assist with the detection of a faint pure tone target embedded in an audio clip of an everyday environmental scene (e.g., the soundtrack of a restaurant). During an initial familiarization task, participants heard such audio clips, half of which included a target sound (memory cue trials) at a specific time and location (left or right ear), as well as audio clips without a target (neutral trials). Following a one-hour or twenty-four-hour retention interval, the same audio clips were presented, but now all included a target. Participants were asked to press a button as soon as they heard the pure tone target. Overall, participants were faster and more accurate during memory than neutral cue trials. The auditory contextual memory effects on performance coincided with three temporally and spatially distinct neural modulations, which encompassed changes in the amplitude of event-related potentials as well as changes in theta, alpha, beta and gamma power. Brain electrical source analyses revealed greater source activity in memory than neutral cue trials in the right superior temporal gyrus and left parietal cortex. Conversely, neutral trials were associated with greater source activity than memory cue trials in the left posterior medial temporal lobe. Target detection was associated with increased negativity (N2), and a late positive (P3b) wave at frontal and parietal sites, respectively. The effect of auditory contextual memory on brain activity preceding target onset showed little lateralization.
Together, these results are consistent with contextual memory facilitating retrieval of target-context associations and the deployment and management of auditory attentional resources at the time when the target occurred. The results also suggest that the auditory cortices, parietal cortex, and medial temporal lobe may be parts of a neural network enabling memory-guided attention during auditory scene analysis.
Article
Full-text available
Everyday behavior depends upon the operation of concurrent cognitive processes. In visual search, studies that examine memory-attention interactions have indicated that long-term memory facilitates search for a target (e.g., contextual cueing), but the potential for memories to capture attention and decrease search efficiency has not been investigated. To address this gap in the literature, five experiments were conducted to examine whether task-irrelevant encoded objects might capture attention. In each experiment, participants encoded scene-object pairs. Then, in a visual search task, 6-object search displays were presented and participants were told to make a single saccade to targets defined by shape (e.g., diamond among differently colored circles; Experiments 1, 4, and 5) or by color (e.g., blue shape among differently shaped gray objects; Experiments 2 and 3). Sometimes, one of the distractors was from the encoded set, and occasionally the scene that had been paired with that object was presented prior to the search display. Results indicated that eye movements were made, in error, more often to encoded distractors than to baseline distractors, and that this effect was greatest when the corresponding scene was presented prior to search. When capture did occur, participants looked longer at encoded distractors if scenes had been presented, an effect that we attribute to the representational match between a retrieved associate and the identity of the encoded distractor in the search display. In addition, the presence of a scene resulted in slower saccade deployment when participants made first saccades to targets, as instructed. Experiments 4 and 5 suggest that this slowdown may be due to the relatively rare and, therefore, surprising appearance of visual stimulus information prior to search.
Collectively, results suggest that information encoded into episodic memory can capture attention, which is consistent with the recent proposal that selection history can guide attentional selection.
Article
Full-text available
How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and enhancements of the effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
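The entrainment analyses described above quantify how consistently neural phase aligns with an attended stream across trials. One standard metric for this is the phase-locking value (PLV); the sketch below is an illustration of that general measure, not necessarily the exact computation used in the study, and the simulated phase distributions are assumptions:

```python
import numpy as np

def phase_locking_value(phases):
    """Inter-trial phase consistency: magnitude of the mean unit phasor.
    1 = identical phase on every trial; near 0 = uniformly random phases."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(1)
# "Attended" condition: phases tightly clustered at the target stream's rate
entrained = rng.normal(loc=0.0, scale=0.2, size=200)
# "Ignored" condition: phases uniform on the circle
random_phases = rng.uniform(-np.pi, np.pi, size=200)

print(phase_locking_value(entrained) > phase_locking_value(random_phases))
```

Comparing PLV (or the mean phase angle itself) between attention conditions, and correlating it with task performance, mirrors the individual-differences logic of the abstract.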
Article
Full-text available
Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether a memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real-world audio clips (e.g., the soundtrack of a restaurant). During an initial familiarization task, participants heard audio clips, some of which included a lateralized target (p = 50%). On each trial participants indicated whether the target was presented from the left, from the right, or was absent. Following a 1-hr retention interval, participants were presented with the same audio clips, which now all included a target. In Experiment 1, participants showed memory-based gains in response time and d'. Experiment 2 showed that temporal expectations modulate attention, with greater memory-guided attention effects on performance when the temporal context was reinstated from learning (i.e., when the timing of the target within audio clips was not changed from the initially learned timing). Experiment 3 showed that while conscious recall of target locations was modulated by exposure to target-context associations during learning (i.e., better recall with a higher number of learning blocks), the influence of LTM associations on spatial attention was not reduced (i.e., the number of learning blocks did not affect memory-guided attention). Both Experiments 2 and 3 showed gains in performance related to target-context associations, even for associations that were not explicitly remembered. Together, these findings indicate that memory for audio clips is acquired quickly and is surprisingly robust; both implicit and explicit LTM for the location of a faint target tone modulated auditory spatial attention.
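The memory-based sensitivity gains above are reported as d', the standard signal-detection index. As a small sketch of that computation (the log-linear correction and the example trial counts are illustrative assumptions, not the study's data):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    The log-linear correction (+0.5 / +1) avoids infinite z-scores when a
    rate would otherwise be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts from 100 target-present and 100 target-absent trials
print(round(d_prime(80, 20, 20, 80), 2))  # → 1.66
```

Comparing d' between memory and neutral (or predictive and nonpredictive) trials isolates a sensitivity change from a mere shift in response bias.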
Article
Full-text available
Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown a reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in which attentional guidance should already be optimal, namely, when attention is captured to the target location by an abrupt onset, or under pop-out conditions. These results have been used to argue for a response-related account of contextual cueing. Here, we combine eye tracking with response time to examine the mechanisms behind contextual cueing in spatially cued and pop-out conditions. Three experiments find consistent response time benefits with repetition, which appear to be driven almost entirely by a reduction in number of fixations, supporting improved attentional guidance as the mechanism behind contextual cueing. No differences were observed in the time between fixating the target and responding, our proxy for response-related processes. Furthermore, the correlation between contextual cueing magnitude and the reduction in number of fixations on repeated contexts approaches 1. These results argue strongly that attentional guidance is facilitated by familiar search contexts, even when guidance is near-optimal.
Article
Full-text available
The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries that are clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. We found that, during listening to connected speech, cortical activity of different timescales concurrently tracked the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases and sentences. Notably, the neural tracking of hierarchical linguistic structures was dissociated from the encoding of acoustic cues and from the predictability of incoming words. Our results indicate that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.
Article
Full-text available
Our visual brain is remarkable in extracting invariant properties from the noisy environment, guiding selection of where to look and what to identify. However, how the brain achieves this is still poorly understood. Here we explore interactions of local context and global structure in the long-term learning and retrieval of invariant display properties. Participants searched for a target among distractors, without knowing that some "old" configurations were presented repeatedly (randomly inserted among "new" configurations). We simulated tunnel vision, limiting the visible region around fixation. Robust facilitation of performance for old versus new contexts was observed when the visible region was large but not when it was small. However, once the display was made fully visible during the subsequent transfer phase, facilitation did become manifest. Furthermore, when participants were given a brief preview of the total display layout prior to tunnel view search with 2 items visible, facilitation was already obtained during the learning phase. The eye movement results revealed contextual facilitation to be coupled with changes of saccadic planning, characterized by slightly extended gaze durations but a reduced number of fixations and shortened scan paths for old displays. Taken together, our findings show that invariant spatial display properties can be acquired based on scarce, para-/foveal information, while their effective retrieval for search guidance requires the availability (even if brief) of a certain extent of peripheral information.
Article
Full-text available
Conducted 10 experiments to evaluate the notion of "depth of processing" in human memory. Undergraduate participants were asked questions concerning the physical, phonemic, or semantic characteristics of a long series of words; this initial question phase was followed by an unexpected retention test for the words. It was hypothesized that "deeper" (semantic) questions would take longer to answer and be associated with higher retention of the target words. These ideas were confirmed by the first four experiments. Experiments 5-10 showed (a) that it is the qualitative nature of a word's encoding which determines retention, not processing time as such; and (b) that retention of words given positive and negative decisions was equalized when the encoding questions were equally salient or congruous for both types of decision. While "depth" (the qualitative nature of the encoding) serves a useful descriptive purpose, the results are better described in terms of the degree of elaboration of the encoded trace. Finally, the results have implications for an analysis of learning in terms of its constituent encoding operations.
Article
Full-text available
In everyday situations, we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location [Summerfield, J. J., Rao, A., Garside, N., & Nobre, A. C. Biasing perception by spatial long-term memory. The Journal of Neuroscience, 31, 14952–14960, 2011; Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M., & Nobre, A. C. Orienting attention based on long-term memory experience. Neuron, 49, 905–916, 2006]. This study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc ERP to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top–down sources of information.
Article
Full-text available
This paper reviews the literature on the N1 wave of the human auditory evoked potential. It concludes that at least six different cerebral processes can contribute to the negative wave recorded from the scalp with a peak latency between 50 and 150 ms: a component generated in the auditory cortex on the supratemporal plane, a component generated in the association cortex on the lateral aspect of the temporal and parietal cortex, a component generated in the motor and premotor cortices, the mismatch negativity, a temporal component of the processing negativity, and a frontal component of the processing negativity. The first three, which can be considered 'true' N1 components, are controlled by the physical and temporal aspects of the stimulus and by the general state of the subject. The other three components are not necessarily elicited by a stimulus but depend on the conditions in which the stimulus occurs. They often last much longer than the true N1 components that they overlap.
Article
Full-text available
In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that causes the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants revealed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as it is commonly observed in maculopathies, e.g., may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.
Article
Full-text available
This paper briefly reviews the evidence for multistore theories of memory and points out some difficulties with the approach. An alternative framework for human memory research is then outlined in terms of depth or levels of processing. Some current data and arguments are reexamined in the light of this alternative framework and implications for further research considered.
Article
Full-text available
Human perception is highly flexible and adaptive. Selective processing is tuned dynamically according to current task goals and expectations to optimize behavior. Arguably, the major source of our expectations about events yet to unfold is our past experience; however, the ability of long-term memories to bias early perceptual analysis has remained untested. We used a noninvasive method with high temporal resolution to record neural activity while human participants detected visual targets that appeared at remembered versus novel locations within naturalistic visual scenes. Upon viewing a familiar scene, spatial memories changed oscillatory brain activity in anticipation of the target location. Memory also enhanced neural activity during early stages of visual analysis of the target and improved behavioral performance. Both measures correlated with subsequent target-detection performance. We therefore demonstrated that memory can directly enhance perceptual functions in the human brain.
Article
Full-text available
Knowledge of musical rules and structures has been reliably demonstrated in humans of different ages, cultures, and levels of music training, and has been linked to our musical preferences. However, how humans acquire knowledge of and develop preferences for music remains unknown. The present study shows that humans rapidly develop knowledge and preferences when given limited exposure to a new musical system. Using a non-traditional, unfamiliar musical scale (Bohlen-Pierce scale), we created finite-state musical grammars from which we composed sets of melodies. After 25-30 min of passive exposure to the melodies, participants showed extensive learning as characterized by recognition, generalization, and sensitivity to the event frequencies in their given grammar, as well as increased preference for repeated melodies in the new musical system. Results provide evidence that a domain-general statistical learning mechanism may account for much of the human appreciation for music.
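Melodies composed from a finite-state grammar, as described above, can be thought of as walks through a transition table of legal successors. The sketch below is purely hypothetical: the grammar, state names, and `generate_melody` helper are invented for illustration and do not reproduce the study's actual Bohlen-Pierce grammars:

```python
import random

# Hypothetical finite-state grammar over four scale degrees: each state
# lists its legal successors; "END" terminates the melody.
GRAMMAR = {
    "START": [1, 3],
    1: [2, 4],
    2: [3, "END"],
    3: [4, "END"],
    4: [1, "END"],
}

def generate_melody(grammar, rng, max_len=10):
    """Walk the finite-state grammar, emitting one scale degree per transition."""
    state, melody = "START", []
    while len(melody) < max_len:
        state = rng.choice(grammar[state])
        if state == "END":
            break
        melody.append(state)
    return melody

print(generate_melody(GRAMMAR, random.Random(42)))
```

Because every emitted tone follows its predecessor with a grammar-defined probability, listeners exposed to many such melodies can pick up the transition statistics, which is the statistical-learning mechanism the abstract proposes.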
Article
Full-text available
Repetition of context can facilitate search for targets in distractor-filled displays. This contextual cueing goes along with enhanced event-related brain potentials in visual cortex, as previously demonstrated with depth electrodes in the human brain. However, modulation of the BOLD response in striate and peristriate cortices has, to our knowledge, not yet been reported as a consequence of contextual cueing. Here, we report a selective reduction of the BOLD onset latency for repeated distractor configurations in these areas. In addition, the same onset latency reduction was observed in posterior inferior frontal cortex, a potential source area for feedback signals to early visual areas.
Article
Full-text available
Mechanisms of implicit spatial and temporal orienting were investigated by using a moving auditory stimulus. Expectations were set up implicitly, using the information inherent in the movement of a sound, directing attention to a specific moment in time with respect to a specific location. There were four conditions of expectation: temporal and spatial expectation; temporal expectation only; spatial expectation only; and no expectation. Event-related brain potentials were recorded while participants performed a go/no-go task, set up by anticipation of the reappearance of a target tone through a white noise band. Results showed that (1) temporal expectations alone speeded reaction time and increased response accuracy; and (2) implicit temporal expectations alone independently enhanced target detection at early processing stages, prior to motor response. This was reflected at stages of perceptual analysis, indexed by P1 and N1 components, as well as in task-related stages indexed by N2; and (3) spatial expectations had an effect at later response-related processing stages but only in combination with temporal expectations, indexed by the P3 component. Thus, the results, in addition to indicating a primary role for temporal orienting in audition, suggest that multiple mechanisms of attention interact in different phases of auditory target detection. Our results are consistent with the view from vision research that spatial and temporal attentional control is based on the activity of partly overlapping, and partly functionally specialized neural networks.
Article
Full-text available
Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multitalker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4-8 Hz, in auditory cortex. In addition, the difference in alpha power (8-12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual's attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex.
Article
Full-text available
Anterior prefrontal cortex is usually associated with high level executive functions. Here, we show that the frontal pole, specifically left lateral frontopolar cortex, is involved in signaling change in implicitly learned spatial contexts, in the absence of conscious change detection. In a variant of the contextual cueing paradigm, participants first learned contingencies between distractor contexts and target locations implicitly. After learning, repeated distractor contexts were paired with new target locations. Left lateral frontopolar [Brodmann area (BA) 10] and superior frontal (BA9) cortices showed selective signal increase for this target location change in repeated displays in an event-related fMRI experiment, which was most pronounced in participants with high contextual facilitation before the change. The data support the view that left lateral frontopolar cortex is involved in signaling contextual change to posterior brain areas as a precondition for adaptive changes of attentional resource allocation. This signaling occurs in the absence of awareness of learned contingencies or contextual change.
Article
Full-text available
Author Summary Attention is the cognitive process underlying our ability to focus on specific aspects of our environment while ignoring others. By its very definition, attention plays a key role in differentiating foreground (the object of attention) from unattended clutter, or background. We investigate the neural basis of this phenomenon by engaging listeners to attend to different components of a complex acoustic scene. We present a spectrally and dynamically rich, but highly controlled, stimulus while participants perform two complementary tasks: to attend either to a repeating target note in the midst of random interferers (“maskers”), or to the background maskers themselves. Simultaneously, the participants' neural responses are recorded using the technique of magnetoencephalography (MEG). We hold all physical parameters of the stimulus fixed across the two tasks while manipulating one free parameter: the attentional state of listeners. The experimental findings reveal that auditory attention strongly modulates the sustained neural representation of the target signals in the direction of boosting foreground perception, much like known effects of visual attention. This enhancement originates in auditory cortex, and occurs exclusively at the frequency of the target rhythm. The results show a strong interaction between the neural representation of the attended target with the behavioral task demands, the bottom-up saliency of the target, and its perceptual detectability over time.
Article
Full-text available
Revealing the relationships between perceptual representations in the brain and mechanisms of adult perceptual learning is of great importance, potentially leading to significantly improved training techniques both for improving skills in the general population and for ameliorating deficits in special populations. In this review, we summarize the essentials of reverse hierarchy theory for perceptual learning in the visual and auditory modalities and describe the theory's implications for designing improved training procedures, for a variety of goals and populations.
Article
In recent years, there has been growing interest and excitement over the newly discovered cognitive capacities of the sleeping brain, including its ability to form novel associations. These recent discoveries raise the possibility that other more sophisticated forms of learning may also be possible during sleep. In the current study, we tested whether sleeping humans are capable of statistical learning – the process of becoming sensitive to repeating, hidden patterns in environmental input, such as embedded words in a continuous stream of speech. Participants' EEG was recorded while they were presented with one of two artificial languages, composed of either trisyllabic or disyllabic nonsense words, during slow-wave sleep. We used an EEG measure of neural entrainment to assess whether participants became sensitive to the repeating regularities during sleep-exposure to the language. We further probed for long-term memory representations by assessing participants' performance on implicit and explicit tests of statistical learning during subsequent wake. In the disyllabic—but not trisyllabic—language condition, participants’ neural entrainment to words increased over time, reflecting a gradual gain in sensitivity to the embedded regularities. However, no significant behavioural effects of sleep-exposure were observed after the nap, for either language. Overall, our results indicate that the sleeping brain can detect simple, repeating pairs of syllables, but not more complex triplet regularities. However, the online detection of these regularities does not appear to produce any durable long-term memory traces that persist into wake – at least none that were revealed by our current measures and sample size. Although some perceptual aspects of statistical learning are preserved during sleep, the lack of memory benefits during wake indicates that exposure to a novel language during sleep may have limited practical value.
Article
The ability to selectively attend to a speech signal amid competing sounds is a significant challenge, especially for listeners trying to comprehend non-native speech. Attention is critical to direct neural processing resources to the most essential information. Here, neural tracking of the speech envelope of an English story narrative and cortical auditory evoked potentials (CAEPs) to non-speech stimuli were simultaneously assayed in native and non-native listeners of English. Although native listeners exhibited higher narrative comprehension accuracy, non-native listeners exhibited enhanced neural tracking of the speech envelope and heightened CAEP magnitudes. These results support an emerging view that although attention to a target speech signal enhances neural tracking of the speech envelope, this mechanism itself may not confer speech comprehension advantages. Our findings suggest that non-native listeners may engage neural attentional processes that enhance low-level acoustic features, regardless of whether the target signal contains speech or non-speech information.
Article
The current study addressed the relation between awareness, attention, and memory by examining whether merely presenting a tone and an audio-clip, without deliberately associating one with the other, was sufficient to bias attention to a given side. Participants were exposed to 80 different audio-clips (half included a lateralized pure tone) and told to classify the audio-clips as natural (e.g., waterfall) or manmade (e.g., airplane engine). A surprise memory test followed, in which participants pressed a button to a lateralized faint tone (target) embedded in each audio-clip. They also indicated (i) whether the clip was old/new; (ii) whether it was recollected/familiar; and (iii) whether the tone was on the left, on the right, or not present when they heard the clip at exposure. The results demonstrate good explicit memory for the clip, but not for tone location. Response times were faster for old than for new clips but did not vary according to the target-context associations. Neuro-electric activity revealed an old-new effect at midline-frontal sites and a difference between old clips that had previously been associated with the target tone and those that had not. These results support the attention-dependent learning hypothesis and suggest that associations were formed incidentally at a neural level (a silent memory trace, or engram), but these associations did not guide attention at a level that influenced behavior either explicitly or implicitly.
Article
Experienced musicians outperform non-musicians in understanding speech-in-noise (SPIN). The benefits of lifelong musicianship endure into older age, where musicians experience smaller declines in their ability to understand speech in noisy environments. However, it is presently unknown whether commencing musical training in old age can also counteract age-related decline in speech perception, and whether such training induces changes in neural processing of speech. Here, we recruited older adult non-musicians and assigned them to receive a short course of piano or videogame training, or no training. Participants completed two sessions of functional Magnetic Resonance Imaging where they performed a SPIN task prior to and following training. While we found no direct benefit of musical training upon SPIN perception, an exploratory Region of Interest analysis revealed increased cortical responses to speech in left Middle Frontal and Supramarginal Gyri which correlated with changes in SPIN task performance in the group which received music training. These results suggest that short-term musical training in older adults may enhance neural encoding of speech, with the potential to reduce age-related decline in speech perception.
Article
An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., “potatoes”) following associated visual primes (e.g., “MASHED”), neutral visual primes (e.g., “FACE”), or a visual mask (e.g., “XXXX”). Auditory targets began with voiced (/b/, /d/, /g/) or voiceless (/p/, /t/, /k/) stop consonants, an acoustic difference known to yield differences in N1 amplitude. In Experiment 1 (N = 21), semantic context modulated responses to upcoming targets, with smaller N1 amplitudes for semantic associates. In Experiment 2 (N = 29), semantic context changed how listeners encoded sounds: Ambiguous voice-onset times were encoded similarly to the voicing end point elicited by semantic associates. These results are consistent with an interactive model of spoken-word recognition that includes top-down effects on early perception.
Article
Long-term memory (LTM) helps to efficiently direct and deploy the scarce resources of the attentional system; however, the neural substrates that support LTM-guidance of visual attention are not well understood. Here, we present results from fMRI experiments that demonstrate that cortical and subcortical regions of a network defined by resting-state functional connectivity are selectively recruited for LTM-guided attention, relative to a similarly demanding stimulus-guided attention paradigm that lacks memory retrieval and relative to a memory retrieval paradigm that lacks covert deployment of attention. Memory-guided visuospatial attention recruited posterior callosal sulcus, posterior precuneus, and lateral intraparietal sulcus bilaterally. Additionally, 3 subcortical regions defined by intrinsic functional connectivity were recruited: the caudate head, mediodorsal thalamus, and cerebellar lobule VI/Crus I. Although the broad resting-state network to which these nodes belong has been referred to as a cognitive control network, the posterior cortical regions activated in the present study are not typically identified with supporting standard cognitive control tasks. We propose that these regions form a Memory-Attention Network that is recruited for processes that integrate mnemonic and stimulus-based representations to guide attention. These findings may have important implications for understanding the mechanisms by which memory retrieval influences attentional deployment.
Article
When a sound occurs at a predictable time, it gets processed more efficiently. Predictability of the temporal structure of the acoustic input has been found to influence the P3b of event-related potentials in young adults, such that highly predictable compared to less predictable input leads to earlier P3b peak latencies. In our study, we wanted to investigate the influence of predictability on target processing indexed by the P3b in children (10-12 years old) and young adults. To do that, we used an oddball paradigm with two conditions of predictability (high and low). In the high-predictability condition, a high-pitched target tone occurred most of the time in the fifth position of a five-tone pattern (after four low-pitched non-target sounds), whereas in the low-predictability condition, no such rule was implemented: the target tone occurred randomly following 2, 3, 4, 5, or 6 non-target tones. In both age groups, reaction time to predictable targets was faster than to non-predictable targets. Remarkably, this effect was largest in children. Consistent with the behavioral responses, the onset latency of the P3b response elicited by targets in both groups was earlier in the predictable than the unpredictable condition. However, only the children had significantly earlier peak latency responses for predictable targets. Our results demonstrate that target stimulus predictability increases processing speed in children and adults even when predictability is only implicitly derived from the stimulus statistics. Children showed larger effects of predictability, seeming to benefit more from it in target detection.
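The five-tone oddball design described above can be made concrete with a short sketch. The generator below is illustrative, not the authors' stimulus code: the "low"/"high" labels are placeholders, and the high-predictability rule ("most of the time in the fifth position") is simplified here to always the fifth position, while the 2-6 gap range for the low-predictability condition is taken directly from the description.

```python
import random

def make_oddball_sequence(n_targets, predictable, seed=0):
    """Generate one block of tones for the oddball task.

    predictable=True : each high-pitched target follows exactly four
                       low-pitched non-targets (fifth position).
    predictable=False: each target follows 2-6 non-targets at random.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_targets):
        gap = 4 if predictable else rng.choice([2, 3, 4, 5, 6])
        seq.extend(["low"] * gap + ["high"])
    return seq

# In the simplified predictable block, targets land at indices 4, 9, 14, ...
block = make_oddball_sequence(3, predictable=True)
```

Under this simplification a listener can fully anticipate target timing in the predictable block, whereas only the stimulus statistics constrain it in the random block.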
Article
Two four-choice reaction time (RT) experiments used the lateralized readiness potential (LRP) and the limb selection potential (LSP) to assess the effects of spatial S-R compatibility on motor processes. Individual stimuli were presented at one corner of a square centered at fixation, and each response was made with the left or right hand or foot. In Experiment 1, the correct response was determined by stimulus location, whereas in Experiment 2 it was determined by stimulus identity. Horizontal and vertical compatibility affected both RT and response accuracy, but the LRP and LSP results suggested that compatibility had little or no direct effect on the duration of motor processes. In addition, the results suggest that the relatively new LSP measure is a useful index of motor activation processes. Its insensitivity to horizontal stimulus artifacts makes it especially useful for studying the effects of horizontal spatial compatibility.
Article
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015.
Article
There is a widely accepted notion that movement elements are assembled prior to movement execution in a central motor programming stage. However, it is not clear how this stage is structured: whether it is a unitary stage allowing different motor parameters to cross talk or whether there are several independent processes dealing with each motor parameter. We addressed this question by orthogonally manipulating two movement-related factors: response sequence complexity and movement duration. Both factors yielded main effects on reaction time but no interaction. Additive effects of both factors on the onsets of response- but not stimulus-synchronized lateralized readiness potentials suggest separable motoric loci of sequence complexity and duration. These findings are at variance with the notion of a unitary movement programming stage.
Article
G*Power (Erdfelder, Faul, & Buchner, 1996) was designed as a general stand-alone power analysis program for statistical tests commonly used in social and behavioral research. G*Power 3 is a major extension of, and improvement over, the previous versions. It runs on widely used computer platforms (i.e., Windows XP, Windows Vista, and Mac OS X 10.4) and covers many different statistical tests of the t, F, and χ² test families. In addition, it includes power analyses for z tests and some exact tests. G*Power 3 provides improved effect size calculators and graphic options, supports both distribution-based and design-based input modes, and offers all types of power analyses in which users might be interested. Like its predecessors, G*Power 3 is free.
Article
The ability to focus on and understand one talker in a noisy social environment is a critical social-cognitive capacity, whose underlying neuronal mechanisms are unclear. We investigated the manner in which speech streams are represented in brain activity and the way that selective attention governs the brain's representation of speech using a "Cocktail Party" paradigm, coupled with direct recordings from the cortical surface in surgical epilepsy patients. We find that brain activity dynamically tracks speech streams using both low-frequency phase and high-frequency amplitude fluctuations and that optimal encoding likely combines the two. In and near low-level auditory cortices, attention "modulates" the representation by enhancing cortical tracking of attended speech streams, but ignored speech remains represented. In higher-order regions, the representation appears to become more "selective," in that there is no detectable tracking of ignored speech. This selectivity itself seems to sharpen as a sentence unfolds.
Article
Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) For keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in temporo-parietal junction area, previously associated with capture of attention by explicitly retrieved memory items, and in ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory.
Article
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants' subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment.
Article
The purpose of the present study was to investigate the effects of selective attention and levels of processing (LOPs) at study on long-term repetition priming vis-a-vis their effects on explicit recognition. In a series of three experiments we found parallel effects of LOP and attention on long-term repetition priming and recognition performance when the manipulation of these factors at encoding was blocked. When a mixed study condition was used, both factors affected explicit recognition, while their effect on repetition priming was determined by the nature of the test. Shallow processing at test did not benefit from long-term repetition, regardless of whether the words had been studied deeply or shallowly. Selective attention affected long-term repetition priming in a semantic, but not in a lexical decision (LD), test. Regardless of study condition, retention lag affected long-term repetition priming only in the semantic test. These results suggest that if the experimental conditions allow scrupulous selection of attended and unattended information or narrow tuning to a shallow, pre-lexical LOP, implicit access to unattended or shallowly studied items is significantly reduced, as is explicit recognition. We suggest a conceptual framework for understanding the effects of LOP, attention, and retention interval on performance of explicit and implicit tests of memory.
Article
We examined auditory cortical potentials in normal hearing subjects to spectral changes in continuous low and high frequency pure tones. Cortical potentials were recorded to increments of frequency from continuous 250 or 4000 Hz tones. The magnitude of change was random and varied from 0% to 50% above the base frequency. Potentials consisted of N100, P200 and a slow negative wave (SN). N100 amplitude, latency and dipole magnitude with frequency increments were significantly greater for low compared to high frequencies. Dipole amplitudes were greater in the right than left hemisphere for both base frequencies. The SN amplitude to frequency changes between 4% and 50% was not significantly related to the magnitude of spectral change. Modulation of N100 amplitude and latency elicited by spectral change is more pronounced with low compared to high frequencies. These data provide electrophysiological evidence that central processing of spectral changes in the cortex differs for low and high frequencies. Some of these differences may be related to both temporal- and spectral-based coding at the auditory periphery. Central representation of frequency change may be related to the different temporal windows of integration across frequencies.
Article
Priming and recollection are expressions of human memory mediated by different brain events. These brain events were monitored while people discriminated words from nonwords. Mean response latencies were shorter for words that appeared in an earlier study phase than for new words. This priming effect was reduced when the letters of words in study-phase presentations were presented individually in succession as opposed to together as complete words. Based on this outcome, visual word-form priming was linked to a brain potential recorded from the scalp over the occipital lobe about 450 ms after word onset. This potential differed from another potential previously associated with recollection, suggesting that distinct operations associated with these two types of memory can be monitored at the precise time that they occur in the human brain.
Article
Humans must often focus attention onto relevant sensory signals in the presence of simultaneous irrelevant signals. This type of attention has been explored in vision with the N2pc component, and the present study sought to find an analogous auditory effect. In Experiment 1, two 750-ms sounds were presented simultaneously, one from each of two lateral speakers. On each trial, participants indicated whether one of the two sounds was a pre-defined target. We found that targets elicited an N2ac component: a negativity in the N2 latency range at anterior contralateral electrodes. We also observed a later and more posterior contralateral positivity. Experiment 2 replicated these effects and demonstrated that they arose from competition between attended and unattended tones rather than reflecting lateralized effects of attention for individual tones. The N2ac component may provide a useful tool for studying selective attention within auditory scenes.
Article
Auditory cortical N100s were examined in ten auditory neuropathy (AN) subjects as objective measures of impaired hearing. Latencies and amplitudes of N100 in AN to increases of frequency (4-50%) or intensity (4-8 dB) of low (250 Hz) or high (4000 Hz) frequency tones were compared with results from normal-hearing controls. The sites of auditory nerve dysfunction were pre-synaptic (n=3), due to otoferlin mutations causing temperature-sensitive deafness; post-synaptic (n=4), accompanied by other cranial and/or peripheral neuropathies; and undefined (n=3). AN subjects consistently had N100s only to the largest changes of frequency or intensity, whereas controls consistently had N100s to all but the smallest frequency and intensity changes. N100 latency in AN was significantly delayed compared to controls, more so for 250 than for 4000 Hz and more so for changes of intensity compared to frequency. N100 amplitudes to frequency change were significantly reduced in AN subjects compared to controls, except for pre-synaptic AN, in whom amplitudes were greater than controls. N100 latency to frequency change of 250 but not of 4000 Hz was significantly related to speech perception scores. As a group, AN subjects' N100 potentials were abnormally delayed and smaller, particularly for low frequency. The extent of these abnormalities differed between pre- and post-synaptic forms of the disorder. Abnormalities of auditory cortical N100 in AN reflect disorders of both temporal processing (low frequency) and neural adaptation (high frequency). Auditory N100 latency to the low frequency provides an objective measure of the degree of impaired speech perception in AN.
Article
Reaction times (RT) to targets are faster in repeated displays relative to novel ones when the spatial arrangement of the distracting items predicts the target location (contextual cueing). It is assumed that visual-spatial attention is guided more efficiently to the target resulting in reduced RTs. In the present experiment, contextual cueing even occurred when the target location was previously peripherally cued. Electrophysiologically, repeated displays elicited an enhanced N2pc component in both conditions and resulted in an earlier onset of the stimulus-locked lateralized readiness potential (s-LRP) in the cued condition and in an enhanced P3 in the uncued condition relative to novel displays. These results indicate that attentional guidance is less important than previously assumed but that other cognitive processes, such as attentional selection (N2pc) and response-related processes (s-LRP, P3) are facilitated by context familiarity.
Article
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
Article
The present study investigated the predictions of two prominent models (Klapp, 1995, 2003; Rosenbaum, Inhoff, & Gordon, 1984) of response-sequence programming using behavioral data, the foreperiod Contingent Negative Variation (CNV), and the Lateralized Readiness Potential (LRP). Participants performed one-key and three-key responses with their left or right hand in a precuing task. Sequence length was manipulated across blocks, and precues provided either no information, partial information about hand or start finger, or full information about the response. A sequence length effect was indicated by reaction time when the precue provided partial or full information. The LRP data suggested that the duration of motor processes increases with sequence length. Foreperiod LRP and CNV revealed that participants preprogram only the first element of the sequence and prepare multiple responses when the precue provides only partial information. We discuss the implications of the current findings for the two models.
Article
The aim was to examine auditory cortical potentials in normal-hearing subjects to intensity increments in a continuous pure tone at low, mid, and high frequency. Electrical scalp potentials were recorded in response to randomly occurring 100 ms intensity increments of continuous 250, 1000, and 4000 Hz tones every 1.4 s. The magnitude of intensity change varied between 0, 2, 4, 6, and 8 dB above the 80 dB SPL continuous tone. Potentials included N100, P200, and a slow negative (SN) wave. N100 latencies were delayed, whereas amplitudes were not affected, for 250 Hz compared to 1000 and 4000 Hz. Functions relating the magnitude of the intensity change to N100 latency/amplitude did not differ in slope among the three frequencies. No consistent relationship between intensity increment and SN was observed. Cortical dipole sources for N100 did not differ in location or orientation between the three frequencies. The relationship between intensity increments and N100 latency/amplitude did not differ between tonal frequencies. A cortical tonotopic arrangement was not observed for intensity increments. These results contrast with prior studies of brain activity evoked by brief frequency changes, which showed cortical tonotopic organization, and suggest that intensity and frequency discrimination employ distinct central processes.
Article
This paper reviews the actual and potential benefits of a marriage between cognitive psychology and psychophysiology. Psychophysiological measures, particularly those of the event-related brain potential, can be used as markers for psychological events and physiological events. Thus, they can serve as "windows" on the mind and as "windows" on the brain. These ideas are illustrated in the context of a series of studies utilizing the lateralized readiness potential, a measure of electrical brain activity that is related to preparation for movement. This measure has been used to illuminate presetting processes that prepare the motor system for action, to demonstrate the presence of the transmission of partial information in the cognitive system, and to identify processes responsible for the inhibition of responses. The lateralized readiness potential appears to reflect activity in motor areas of cortex. Thus, this measure, along with other psychophysiological measures, can be used to understand how the functions of the mind are implemented in the brain.
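The lateralized readiness potential described in this review is standardly derived by a double subtraction over electrodes above left and right motor cortex (commonly C3/C4), averaging the hand-specific asymmetries so that stimulus-driven lateralization cancels. A minimal sketch under those standard conventions, assuming numpy and a (trials, samples) array layout (the function name and layout are illustrative, not from the reviewed paper):

```python
import numpy as np

def lrp(c3_left, c4_left, c3_right, c4_right):
    """Double-subtraction lateralized readiness potential.

    c3_*, c4_* : (trials, samples) arrays recorded over left (C3) and
    right (C4) motor cortex, split by response hand (left/right).
    By this convention, a negative-going LRP indicates preparation of
    the correct response.
    """
    # Left-hand trials: preparation appears contralaterally, at C4.
    left_hand = (c4_left - c3_left).mean(axis=0)
    # Right-hand trials: preparation appears at C3.
    right_hand = (c3_right - c4_right).mean(axis=0)
    # Averaging the two asymmetries removes lateralization unrelated
    # to the response hand.
    return (left_hand + right_hand) / 2.0
```

The onset of this averaged waveform is what studies use to infer when the motor system begins to receive (possibly partial) information about the required response.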
Article
Attentional state during acquisition is an important determinant of performance on direct memory tests. In two experiments we investigated the effects of dividing attention during acquisition on conceptually driven and data-driven indirect memory tests. Subjects read a list of words with or without distraction. Memory for the words was later tested with an indirect memory test or a direct memory test that differed only in task instructions. In Experiment 1, the indirect test was category-exemplar production (a conceptually driven task) and the direct test was category-cued recall. In Experiment 2, the indirect test was word-fragment completion (a data-driven task) and the direct test was word-fragment cued recall. Dividing attention at encoding decreased performance on both direct memory tests. Of the indirect tests, category-exemplar production but not word-fragment completion was affected. The results indicate that conceptually driven indirect memory tests, like direct memory tests, are affected by divided attention, whereas data-driven indirect tests are not. These results are interpreted within the transfer-appropriate processing framework.