Article

Selective attention modulates implicit learning


Abstract

The effect of selective attention on implicit learning was tested in four experiments using the "contextual cueing" paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.
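To make the paradigm concrete, here is a minimal, hypothetical sketch of how such two-colour search displays could be generated, with "old" displays repeating an attended configuration paired with a fixed target location across epochs. All names and parameters are illustrative, not taken from the paper:

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # 8 x 6 grid of possible item locations

def make_display(n_attended=8, n_ignored=8, rng=random):
    """Sample one search display: one target plus two colour-defined
    distractor subsets (attended, e.g. red; ignored, e.g. green)."""
    cells = rng.sample(GRID, 1 + n_attended + n_ignored)
    return {
        "target": cells[0],
        "attended": frozenset(cells[1:1 + n_attended]),  # configuration that may repeat
        "ignored": frozenset(cells[1 + n_attended:]),
    }

def make_epochs(n_old=12, n_new=12, n_epochs=2, seed=0):
    """'Old' displays keep their attended configuration paired with a fixed
    target location across epochs; 'new' displays are resampled each epoch."""
    rng = random.Random(seed)
    old = [make_display(rng=rng) for _ in range(n_old)]
    epochs = []
    for _ in range(n_epochs):
        trials = [("old", d) for d in old]
        trials += [("new", make_display(rng=rng)) for _ in range(n_new)]
        rng.shuffle(trials)
        epochs.append(trials)
    return epochs
```

A cueing effect would then be estimated by comparing mean search times on "old" versus "new" trials in later epochs.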


... For example, in the experiment by Y. Jiang and M. M. Chun, participants searched for a target stimulus among distractors, 50% of which shared the target's colour (red) and 50% of which were another colour (green) (Jiang, Chun, 2001). Participants were asked to attend only to stimuli of one colour (the "attended context") and to ignore stimuli of the other colour (the "ignored context"). ...
... The increase in search speed in this case results from a reduction in the time spent processing distractors. Possibly for this reason, in the experiments discussed above (Jiang, Chun, 2001; Kawahara, 2003), repeating the configurations of the ignored distractors had no significant effect on search efficiency. Reducing the number of ignored distractors to one or two would make it harder to ignore objects that stand out against the background of the remaining stimuli. ...
... This assumption diverges from some of the claims presented in the introduction. As already mentioned, the possibility of learning the additional context is either denied (Jiang, Chun, 2001; Kawahara, 2003), or it is assumed that the learning of contextual information proceeds latently, without a noticeable effect on the efficiency of target search (Jiang, Leung, 2005). At the same time, other studies demonstrate that contextual information unrelated to the target can influence its detection. ...
Article
Full-text available
Objective. The study tested the assumption that heterogeneity (a low degree of similarity) of visual context elements facilitates the search for a given target under conditions of implicit internalization of contextual configurations. A visual search task was used: subjects had to detect a target (a black Landolt ring with a right or left gap) among distractor configurations of two types (similar and dissimilar to the target). Methods and materials. The subjects were divided into experimental and control groups. The primary distractors in both groups were black Landolt rings with a gap angle of 45°, 135°, 225°, or 315°. The type of additional distractors differed: figures (triangles, squares, crosses, and stars) of different colors were shown to the experimental group, and white Landolt rings to the control group. In both groups, some distractor configurations (contexts) were repeated throughout the procedure, while others were changed. The main procedure included 24 blocks (32 tasks per block), grouped into 6 epochs (4 blocks per epoch). The effects of implicit contextual internalization were assessed from the results of the last epoch. Results. The most pronounced contextual influence on the efficiency of target retrieval was found when the configurations of both the main and the additional distractors were maintained. When only the configuration of the additional distractors was repeated, target retrieval in both the experimental and control groups took longer than when both contexts were changed. Conclusions. This result demonstrates a contextual interference effect, for which the paper provides an interpretation.
... Most strikingly, some experimental results suggest that contextual cueing can occur even for stimuli that participants are actively trying to ignore. The question of whether contextual cueing depends on selective attention was initially explored in two now-classic studies by Jiang and Chun (2001) and Jiang and Leung (2005). The first of these studies presented participants with visual search displays including stimuli in two colors. ...
... Participants were instructed that the target would always appear in one of these colors (e.g., red) and that, consequently, they did not need to pay attention to the stimuli presented in the other color (e.g., green), which would always be distractors. This procedure allowed Jiang and Chun (2001) to independently manipulate whether only attended distractors, only ignored distractors, or both predicted the location of the target. One of their experiments suggested that, counterintuitively, participants could show a contextual cueing effect even for the ignored set of distractors, although the effect was much smaller in magnitude than the contextual cueing effect elicited by the attended set of distractors. ...
... One of their experiments suggested that, counterintuitively, participants could show a contextual cueing effect even for the ignored set of distractors, although the effect was much smaller in magnitude than the contextual cueing effect elicited by the attended set of distractors. Jiang and Chun (2001) hypothesized that the reason for the small contextual cueing effect observed for ignored distractors might be that the task was too easy for participants, allowing them to pay some residual attention even to the irrelevant set of distractors (Lavie, 1995). Consistent with this hypothesis, in a subsequent experiment using a slightly more demanding version of the task, they found no evidence whatsoever of contextual cueing for the ignored distractors. ...
Article
Full-text available
Visual search usually improves with repeated exposure to a search display. Previous research suggests that such a "contextual cueing" effect may be supported even by aspects of the search display that participants have been explicitly asked to ignore. Based on this evidence, it has been suggested that the development of contextual cueing over trials does not depend on selective attention. In the present series of experiments, we show that the most common strategy used to prevent participants from paying attention to task-irrelevant distractors often results in suboptimal selection. Specifically, we show that visual search is slower when search displays include many irrelevant distractors. Eye-tracking data show that this happens, at least in part, because participants fixate on them. These results cast doubt on previous demonstrations that contextual cueing is independent of selective attention.
... The typical finding in this paradigm is that participants' search performance improves for repeated relative to random, non-repeated displays. ... Collectively, these studies showed that information encoded into episodic long-term memory can capture attention and guide visual search (see also Nickel, Hopkins, Minor, & Hannula, 2020). Although it has been proposed consistently that repeated contexts boost selective attention and subsequently guide visual search, the effect seems to work the other way around too: the availability of selective attentional resources can influence the acquisition of invariant spatial configurations (e.g., Jiang & Chun, 2001; Jiang & Leung, 2005). For instance, the contextual cueing effect was overall weaker and developed more slowly when the search display was randomly divided into two subsets by color or size, relative to a homogeneous display (Conci & von Mühlenen, 2011). ...
... This result suggested that segmentation of a single display into various subsets increases competition for attentional resources (i.e., each segment attracts attention) and in turn weakens contextual learning. Further, segmentation of search displays could block context acquisition, as shown by Jiang and Chun (2001), who manipulated the task relevance (task-relevant vs. -irrelevant) of two color subsets (a repeated and a non-repeated) in a display, and observed a contextual cueing effect only when the repeated context was task-relevant but not when it was task-irrelevant (ignored). Jiang and Leung (2005) took this experiment one step further by introducing a subsequent transfer phase, where the colors of previously relevant and irrelevant items were swapped after the initial training phase. ...
... Note that the blocking effect observed in Jiang and Leung (2005) could be explained by task-irrelevant feature suppression. In more detail, it has been shown previously that participants are able to filter out consistently task-irrelevant features (e.g., a task-relevant subset of items that is consistently black has been shown to suppress a task-irrelevant white subset even more strongly). Thus, the blocking effect shown in previous studies (e.g., Jiang & Chun, 2001; Jiang & Leung, 2005) can at least partially be explained by suppression of the task-irrelevant color, given that the task-relevant and -irrelevant colors were never changed for a given observer, and the visual system could adapt to ignore a certain color in a top-down manner (Vatterott & Vecera, 2012). In line with this hypothesis, Geyer and colleagues (2010) showed that alternating the task-relevant and -irrelevant colors across trials resulted in a significant learning effect also for unattended items. ...
Article
Full-text available
Repeatedly presenting a target within a stable search array facilitates visual search, an effect termed contextual cueing. Previous solo-performance studies have shown that successful acquisition of contextual memories requires explicit allocation of attentional resources to the task-relevant repeated contexts. By contrast, repeated but task-irrelevant contexts could not be learned when presented together with repeated task-relevant contexts due to a blocking effect. Here we investigated if such blocking of context learning could be diminished in a social context, when the task-irrelevant context is task-relevant for a co-actor in a joint action search mode. We adopted the contextual cueing paradigm and extended this to the co-active search mode. Participants learned a context-cued subset of the search displays (color-defined) in the training phase, and their search performance was tested in the transfer phase, where previously irrelevant and relevant subsets were swapped. The experiments were conducted either in a solo search mode (Experiments 1 and 3) or in a co-active search mode (Experiment 2). Consistent with the classical contextual cueing studies, contextual cueing was observed in the training phase of all three experiments. Importantly, however, in the “swapped” test session, a significant contextual cueing effect was manifested only in the co-active search mode, not in the solo search mode. Our findings suggest that social context may widen the scope of attention, thus facilitating the acquisition of task-irrelevant contexts.
... This means that, in addition to the classic attentional-guidance account cited above, the availability of attentional resources can further modulate the acquisition of invariant spatial configurations (e.g., Jiang and Chun, 2001; Geyer et al., 2010; but see Jiang and Leung, 2005). This was first demonstrated by Jiang and Chun (2001, Experiment 3), where participants searched for a colored (red or green) target item that was presented among equal subsets of green and red distractors. ...
... These findings were further corroborated and extended by Jiang and Leung (2005) who used a similar design to that of Jiang and Chun (2001) regarding the training session. For instance, participants searched for a target letter that was presented within one of the two target colors (black and white) under four search conditions, namely, both-old, bothnew, attended-old, and attended-new. ...
... Applied to the current work, the learning of the task-irrelevant context could not be blocked under fast presentation because attention was directed to both the task-relevant and -irrelevant contexts. Moreover, these results are also in line with the suggestion by Vadillo et al. (2020) that previous studies underestimated the importance of selective attention in contextual cueing (Jiang and Chun, 2001; Jiang and Leung, 2005). That is, attentional selection is important for contextual learning; however, task requirements determine which level of perceptual learning can be achieved. ...
Article
Full-text available
In the contextual cueing task, visual search is faster for targets embedded in invariant displays compared to targets found in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, where salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that the associative blocking rather hinders the allocation of attention to task-irrelevant subsets, but not the learning per se. The current work examined these two explanations. In two experiments, participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1, or under a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items within both old and new displays were presented in two colors which defined the irrelevant and task-relevant items within each display. The participants were asked to search for the target in the relevant subset in the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, the search of task-irrelevant subsets resulted in no cueing effect post-transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under the rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. However, under a longer display presentation, global attention is blocked, leading to the exclusive learning of invariant relevant items in the learning session.
... Selective attentional engagement is critical for efficient and effective learning (Jiang and Chun, 2001). Sustaining attention to a single continuous stream of information is a constant challenge, especially when competing sensory stimuli are present. ...
... It is suspected that PS (physiological synchrony) can be used to measure shared attention, where shared attention may be an underlying factor explaining other findings, such as those of Gashi et al. (2019) on audience engagement. Attention itself plays an important role in learning capabilities (Jiang and Chun, 2001; Jamet et al., 2008), task performance (Pashler et al., 2001), and social interactions (Andrade et al., 2009). Jamet et al. demonstrated the benefits of using attention-guiding techniques to facilitate learning, resulting in improved performance for retention (e.g. ...
Thesis
Full-text available
Attentional engagement – the emotional, cognitive and behavioral connection with information to which the attention is focused – is important in all settings where humans process information. Measures of attentional engagement could be helpful to, for instance, support teachers in online classrooms, or individuals working together in teams. This thesis aims to use physiological synchrony, the similarity in neurophysiological responses across individuals, as an implicit measure of attentional engagement. The research is divided into two parts: the first investigates how different attentional modulations affect physiological synchrony in brains and bodies, the second explores the feasibility of using physiological synchrony as a tool to monitor attention in real-life settings. In Part I, the effect of different manipulations of attention on physiological synchrony in brain and body is explored. We find that physiological synchrony does not only reflect attentional engagement when measured in the electroencephalogram (EEG), but also when measured in electrodermal activity (EDA) or heart rate. Moreover, we find that physiological synchrony can reflect both sensory and top-down variations in attention, where top-down focus of attention is best reflected by synchrony in EEG, and where emotionally salient events attracting attention are best reflected by EDA and heart rate. Part II transitions into the practical applications of physiological synchrony in real-life contexts. Wearables are employed to measure physiological synchrony in EDA and heart rate, demonstrating comparable accuracy to high-end lab-grade equipment. The research also incorporates machine learning techniques, showing that physiological synchrony can be combined with novel unsupervised learning algorithms. Finally, measurements in classrooms reveal that physiological synchrony can be successfully monitored in real-life settings. 
While the findings are promising, the thesis acknowledges limitations in terms of sufficient data that are required for robust monitoring of attentional engagement and in terms of limited variance in attention explained by physiological synchrony. To advance the field, future work should focus on the applied, methodological and ethical questions that remain unanswered.
... shapes had a 10% offset in the line junction to increase search difficulty [16] and were rotated a random multiple of 90°. The target T was tilted either to the left or to the right (90°), and the participant was instructed to indicate the orientation of the T with a left or right button press. ...
... It is possible that temporal predictive context is encoded by observers, but not exploited. Within contextual cueing, it has been shown that the exploitation of spatial predictive context knowledge depends on attention, even though its acquisition does not [16,25]. Additionally, learned predictive context can be visible at the neural level without being expressed in behaviour [13,15]. ...
Article
Full-text available
The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to see whether observers can additionally exploit temporal predictive context based on sequence order, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was also true when we looked specifically at participants who revealed a sensitivity to spatial predictive context. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
... An alternative and potentially more parsimonious explanation focuses on the role of selective attention toward reward-associated features. Research from other experimental paradigms suggests that implicit learning often depends on selective attention (Duncan et al., 2024; Jiang & Chun, 2001; Jiménez & Méndez, 1999; Vadillo et al., 2020; Vadillo et al., 2024). In the paradigm used by Le Pelley et al. (2015), distractors in the additional singleton task capture attention in a bottom-up manner (Theeuwes, 1992, 1994). ...
Preprint
Full-text available
Stimuli that reliably predict reward can capture attention. Value‑Modulated Attentional Capture (VMAC) is typically viewed as independent of task goals or physical salience, arising from Pavlovian learning. However, recent evidence suggests that the awareness of the stimulus‑reward contingency may be necessary during the acquisition of such attentional biases, although the underlying mechanism remains unclear. One possibility is that awareness mediates the learning process of VMAC by directing selective, top-down attention toward the reward‑predictive feature. The present preregistered study tested whether reward‑related attentional biases arise primarily from such selective attention, independently of awareness. Participants performed a visual search task in which one of two singleton distractors—one predicting high reward, the other low reward—appeared on a subset of trials. Selective attention to the reward‑predictive feature (distractor color) was manipulated between groups: In some trials, one group reported the distractor’s color, while the other group reported an irrelevant feature (its location). Otherwise, the stimulus–reward contingencies remained identical for both groups. VMAC, as measured by slower response times for the high‑value compared to the low‑value distractor, emerged only in the group that reported the color. Critically, the previous result cannot be explained by individual differences in awareness. These findings demonstrate a causal role of selective attention in the acquisition of reward-related attentional biases.
... The influence of cognitive factors on short-term monocular deprivation is also an intriguing question [35–37]. Attention, as a fundamental cognitive process that selectively prioritizes relevant information, is a key prerequisite for learning [38,39], such as motor skill acquisition [40] and language learning [41]. However, perceptual learning, which relies on plasticity in the sensory cortex, can occur without attention [42–45]. ...
Article
Full-text available
Monocular deprivation during the critical period impairs the cortical structure and visual function of the deprived eye. Conversely, transient occlusion of one eye in adults enhances the predominance of that eye. This counter-intuitive effect of short-term monocular deprivation is a form of homeostatic plasticity. However, whether this sensory plasticity requires attention, and the underlying neural mechanisms remain unclear. Here, through a psychophysical experiment, we demonstrate that the deprivation effect is dramatically attenuated in the absence of attention. We develop a neural computational model incorporating the Hebbian learning rule in interocular inhibitory synapses (i.e., anti-Hebbian learning) to explain the deprivation effect. Our model predicts both the boosting of the deprived eye and its dependence on attention. Moreover, it accounts for other forms of binocular plasticity, including plasticity observed in prolonged binocular rivalry. We suggest that short-term binocular plasticity arises from the plasticity in inhibitory connections between the two monocular pathways.
... Investigations of the use of color to influence and guide search have notably been conducted in the domain of contextual cueing [47,48]. Surprisingly, while the use of color as a marker to distinguish task-relevant and task-irrelevant context is well documented [49–54], few studies have investigated how color itself can be used as a cue for predicting a target's location. Kunar et al. [55,56] found that repetition of the background color speeded target processing but provided little or no visual search guidance, especially when other cues, such as spatial layout, were provided. ...
Article
Full-text available
Humans are good at picking up statistical regularities in the environment. Probability cueing paradigms have demonstrated that the location of a target can be predicted based on spatial regularities. This is assumed to rely on flexible spatial priority maps that are influenced by visual context. We investigated whether stimulus features such as color distributions differing in mean and variance can cue location regularities. In experiment 1, participants searched for an oddly colored target diamond in a 6 × 6 set. On each trial, the distractors were drawn from one of two color distributions centered on different color averages. Each distribution was associated with different target location probabilities, one distribution where the target had an 80% chance to appear on the left (the rich location), while the rich location would be on the right for the other distribution. Participants were significantly faster at locating the target when it appeared in the rich location for both distributions, demonstrating learning of the relationship between color average and location probability. In experiments 2 and 3, observers performed a similar search task, but the distributions had different variances with the same average color. There was no evidence that search became faster when the target appeared in a rich location, suggesting that contingencies between target probabilities and color variance were not learned. These results demonstrate how statistical location learning is flexible, with different visual contexts leading to different spatial priority maps, but they also reveal important limits to such learning.
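As a concrete illustration of the contingency manipulated in Experiment 1, the trial sequence could be sketched as follows. This is a simplified, hypothetical generator: the colour distributions are reduced to labels, and side assignment follows the 80% rule described above:

```python
import random

def probability_cueing_sequence(n_trials=200, p_rich=0.8, seed=1):
    """Each trial draws its distractors from one of two colour distributions;
    the distribution's identity predicts the likely ('rich') target side."""
    rng = random.Random(seed)
    rich_side = {"reddish": "left", "greenish": "right"}  # distribution -> rich location
    trials = []
    for _ in range(n_trials):
        dist = rng.choice(["reddish", "greenish"])
        if rng.random() < p_rich:                 # 80% of trials: target on the rich side
            side = rich_side[dist]
        else:                                     # remaining 20%: opposite side
            side = "right" if rich_side[dist] == "left" else "left"
        trials.append({"distribution": dist, "target_side": side})
    return trials
```

Learning of the contingency would show up as faster responses on trials where the target falls on the distribution's rich side.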
... Prime congruency may also mediate implicit learning. Implicit learning is driven by selectively attending to task-relevant stimuli (e.g., Baker et al., 2004; Jiang & Chun, 2001; Turk-Browne et al., 2005), whereas prime-target congruency is thought to facilitate perceptual processing by shifting attention to the target location (Scharlau, 2002). Environments that provide a framework to organise complex information can enhance the efficiency of visual processing (Biederman, 1972). ...
Article
Full-text available
Sequential behaviour is underpinned by the selection and inhibition of movement at appropriate points in space and time. Sequences embedded among movement patterns must be learnt, yet the contribution of response selection and inhibition to the acquisition of motor sequences remains poorly understood. We addressed this issue by overlaying the serial reaction time task (SRTT) with subliminal masked primes that differentially weighed response tendencies. In Experiment 1, twenty-four healthy young adults, and in Experiment 2, thirty-six participants, performed the SRTT with congruent (same position), incongruent (different position), or neutral (no prime) subliminal masked primes. Each condition featured an embedded eight-digit (Experiment 1) or ten-digit (Experiment 2) second-order sequence, with conditions presented in counterbalanced order during a single session. Sequence specific learning was observed under neutral and congruent prime conditions. Independent of sequence awareness, congruent primes reduced initial response latency and led to greater sequence specific learning compared with neutral primes. However, incongruent primes appeared to attenuate learning (Experiment 1). These results demonstrate that prime congruency modulates sequence specific learning below the threshold of conscious awareness. Congruent primes may elevate the salience of stimulus–response compounds and accentuate learning, but at the cost of increased awareness. Incongruent primes, and the induction of response conflict, attenuate sequence specific learning (Experiment 1) and may prevent the formation of cross-temporal contingencies necessary for implicit motor sequence learning.
... Studies that manipulate probabilistic or statistical regularities of the location of the target within an array of distractors show strong and lasting performance benefits for targets that appear at more frequent locations. Studies that manipulate contextual associations show improved performance for targets that appear in a consistent location within a specific configuration of distracting items [13–15]. How statistical and contextual memories guide attention and performance has mainly been studied separately. ...
Article
Full-text available
During visual search, we quickly learn to attend to an object’s likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
... Search scenes consisted of scenes with eight stimuli: one target letter T and seven distractor L shapes, with a small (10%) offset in the line junction to increase search difficulty (Jiang & Chun, 2001). Stimuli measured 2.5° × 2.5° and were displayed as mid gray on a dark gray background. ...
Article
Full-text available
The human visual system is equipped to rapidly and implicitly learn and exploit the statistical regularities in our environment. Within visual search, contextual cueing demonstrates how implicit knowledge of scenes can improve search performance. This is commonly interpreted as spatial context in the scenes becoming predictive of the target location, which leads to a more efficient guidance of attention during search. However, what drives this enhanced guidance is unknown. First, it is under debate whether the entire scene (global context) or more local context drives this phenomenon. Second, it is unclear how exactly improved attentional guidance is enabled by target enhancement and distractor suppression. In the present magnetoencephalography experiment, we leveraged rapid invisible frequency tagging to answer these two outstanding questions. We found that the improved performance when searching implicitly familiar scenes was accompanied by a stronger neural representation of the target stimulus, at the cost specifically of those distractors directly surrounding the target. Crucially, this biasing of local attentional competition was behaviorally relevant when searching familiar scenes. Taken together, we conclude that implicitly learned spatial predictive context improves how we search our environment by sharpening the attentional field.
... This study found that participants who acquire corresponding clustering knowledge beforehand can recognize explicitly repeated layouts to some extent. Experiment 1, similar to previous studies, found that participants could not recognize repeated layouts explicitly, indicating that contextual cue learning is implicit (Chun & Jiang, 1998, 1999, 2003; Jiang & Chun, 2001). Some arguments suggest that the representations of implicit and explicit memory share a common storage mechanism, differing only in the conscious state during retrieval, and the processing depth affects conscious states (Kroell, Schlagbauer, Zinchenko, Müller, & Geyer, 2019; Turk-Browne, Yi, & Chun, 2006). ...
Article
Full-text available
Contextual cueing is a phenomenon of visual statistical learning observed in visual search tasks. Previous research has found that the degree to which items deviate from their centroid, known as variability, determines the extent of generalization for that repeated scene. Introducing variability significantly increases dissimilarity between multiple occurrences of the same repeated layout. However, current theories do not explain the mechanisms that help to overcome this dissimilarity during contextual cue learning. We propose that the cognitive system initially abstracts specific scenes into scene layouts through an automatic clustering unrelated to specific repeated scenes, and subsequently uses these abstracted scene layouts for contextual cue learning. Experiment 1 indicates that introducing greater variability in search scenes hinders contextual cue learning. Experiment 2 further establishes that conducting extensive visual searches involving spatial variability in entirely novel scenes facilitates subsequent contextual cue learning involving corresponding scene variability, confirming that the learning of clustering knowledge precedes contextual cue learning and is independent of specific repeated scenes. Overall, this study demonstrates the existence of multiple levels of learning in visual statistical learning, where item-level learning can serve as material for layout-level learning, and the generalization reflects the constraining role of item-level knowledge on layout-level knowledge.
... The contextual cueing effect has been extensively investigated in several follow-up studies concentrating on different aspects of it, such as visual search, implicit learning, and attention, and the resulting evidence supports the robustness of this cognitive effect (Goujon et al., 2015; Sisk et al., 2019). For example, researchers have found that (1) repeated display configurations can improve visual search performance and boost search speed (Geyer et al., 2010; Makovski & Jiang, 2010; Pollmann, 2019), (2) contextual cueing is (largely) an implicit cognitive ability that does not require explicit awareness of the contextual repetition (Chun & Jiang, 1998, 2003; Colagiuri & Livesey, 2016; Rausei et al., 2007; but see Kroell et al., 2019), and (3) context memory leads individuals to prioritise attention to the location where the target stimulus is most likely to appear (Anderson, 2016; Jiang & Chun, 2001). As the contextual cueing effect has attracted increased attention from researchers, knowledge of this cognitive ability has grown. ...
Article
Full-text available
Visual-spatial contextual cueing learning underpins the daily lives of older adults, enabling them to navigate their surroundings, perform daily activities, and maintain cognitive function. While the contextual cueing effect has received increasing attention from researchers, the relationship between this cognitive ability and healthy aging remains controversial. To investigate whether visual-spatial contextual cueing learning declines with age, we examined the contextual learning patterns of older (60-71 years old) and younger adults (18-26 years old) using a contextual-guided visual search paradigm and response variability measurements. We observed significant contextual learning effects in both age groups, impacting response speed and variability, with these effects persisting for at least 24 days. However, older adults required more repetitions and memorized fewer repeated stimuli during initial learning. Interestingly, their long-term memory maintenance appeared stronger, as their contextual facilitation persisted in both response speed and variability, whereas younger adults' facilitation persisted only in response speed, not variability. Overall, our results suggest an age-related complex and diverse contextual cueing pattern, with older adults showing weaker learning but stronger long-term memory maintenance compared to younger adults.
... The square would not capture attention in a bottom-up manner, but it would prevent contextual learning by diverting attention away from the otherwise predictive context in repeated displays. This view would be in line with previous studies reporting that the implicit learning of contextual information critically depends on deploying selective attention to predictive information (Jiang and Chun, 2001; Jiang and Leung, 2005; Goujon et al., 2009). ...
Article
Full-text available
During visual search, the spatial configuration of the stimuli can be learned when the same displays are presented repeatedly, thereby guiding attention more efficiently to the target location (contextual cueing effect). This study investigated how the presence of a task-irrelevant object influences the contextual cueing effect. Experiment 1 used a standard T/L search task with “old” display configurations presented repeatedly among “new” displays. A green-filled square appeared at unoccupied locations within the search display. The results showed that the typical contextual cueing effect was strongly reduced when a square was added to the display. In Experiment 2, the contextual cueing effect was reinstated by simply including trials where the square could appear at an occupied location (i.e., underneath the search stimuli). Experiment 3 replicated the previous experiment, showing that the restored contextual cueing effect did not depend on whether the square was actually overlapping with a stimulus or not. The final two experiments introduced a display change in the last epoch. The results showed that the square does not only hinder the acquisition of contextual information but also its manifestation. These findings are discussed in terms of an account where effective contextual learning depends on whether the square is perceived as part of the search display or as part of the display background.
... Thus, pregameplay speech recognition was expected to vary according to cognitive scores, with better recognition for children with stronger WM and IC. Moreover, based on studies of implicit learning, it was anticipated that IC could also have affected the likelihood of benefitting from implicit familiarization (Jiang & Chun, 2001; Li & Shi, 2016). That is, children with better IC may be better at learning and encoding the idiosyncratic regularities of a talker's utterances and thus are more likely to show a familiarity benefit. ...
Article
Full-text available
Purpose: The goal was to evaluate whether implicit talker familiarization via an interactive computer game, designed for this study, could improve children's word recognition in classroom noise. It was hypothesized that, regardless of age, children would perform better when recognizing words spoken by the talker who was heard during the game they played.
Method: Using a one-group pretest–posttest experimental design, this study examined the impact of short-term implicit voice exposure on children's word recognition in classroom noise. Implicit voice familiarization occurred via an interactive computer game, played at home for 10 min a day for 5 days. In the game, children (8–12 years) heard one voice, intended to become the “familiar talker.” Pre- and postfamiliarization, children identified words in prerecorded classroom noise. Four conditions were tested to evaluate talker familiarity and generalization effects.
Results: Results demonstrated an 11% improvement when recognizing words spoken by the voice heard in the game (“familiar talker”). This was observed only for words that were heard in the game and did not generalize to unfamiliarized words. Before familiarization, younger children had poorer recognition than older children in all conditions; however, after familiarization, there was no effect of age on performance for familiarized stimuli.
Conclusions: Implicit short-term exposure to a talker has the potential to improve children's speech recognition. Therefore, leveraging talker familiarity through gameplay shows promise as a viable method for improving children's speech-in-noise recognition. However, given that improvements did not generalize to unfamiliarized words, careful consideration of exposure stimuli is necessary to optimize this approach.
... Further, a similar effect has also been observed with rewarded stimuli [165], and USN patients appear to be as sensitive as healthy individuals to the distribution of targets even in the neglected field, responding more quickly when targets appear in the most likely region than when targets appear in the least likely region [166]. This last form of optimization of visual processing is achieved through contextual cueing, which reflects an interplay between selective attention and implicit learning [167]. As a robust memory for visual context that serves to guide spatial attention, contextual cueing has been shown to direct spatial attention towards embedded targets when there is a high degree of regularity between targets and distractor context in visual search tasks [168,169]. ...
Article
The ability of the brain to recognize and orient attention to relevant stimuli appearing in the visual field is highlighted by a tuning process, which involves modulating the early visual system by both cortical and subcortical brain areas. Selective attention is coordinated not only by the output of stimulus-based saliency maps but is also influenced by top-down cognitive factors, such as internal states, goals, or previous experiences. The basal ganglia system plays a key role in implicitly modulating the underlying mechanisms of selective attention, favouring the formation and maintenance of implicit sensory-motor memories that are capable of automatically modifying the output of priority maps in sensory-motor structures of the midbrain, such as the superior colliculus. The article presents an overview of the recent literature outlining the crucial contribution of several subcortical structures to the processing of different sources of salient stimuli. In detail, we will focus on how the mesencephalic-basal ganglia closed loops contribute to implicitly addressing and modulating selective attention to prioritized stimuli. We conclude by discussing implicit behavioural responses observed in clinical populations in which awareness is compromised at some level. Implicit (emergent) awareness in clinical conditions that can be accompanied by manifest anosognosic symptomatology (i.e., hemiplegia) or involving abnormal conscious processing of visual information (i.e., unilateral spatial neglect and blindsight) represents an interesting neurocognitive "test case" for inferences about mesencephalic-basal ganglia closed-loop involvement in the formation of implicit sensory-motor memories.
... This idea is supported by many studies, for example, Chun and Jiang (1998) found that context repetition caused a reduced search slope (indexing the time necessary to search items in the configuration), but no change in intercept (the time necessary for all other processes) when the reaction time (RT) of target detection was measured as a function of set size. In another study (Jiang & Chun, 2001), when selective attention was manipulated by using contexts in to-be-attended and to-be-ignored colors, only contexts in to-be-attended colors facilitated the visual search. Eye-tracking studies also support the idea of improved attentional guidance (e.g., Harris & Remington, 2017;Manginelli & Pollmann, 2009). ...
Article
Full-text available
The contextual cueing effect is the phenomenon observed when response time (RT) becomes faster in visual search in repeated context compared with a new one. In the present study, we explored whether the mechanisms involved in the effect are age dependent. We investigated it in younger (N = 20, 12 women, 21.2 ± 1.75 years) and older (N = 19, nine women, 67.05 ± 3.94 years) adults. We found a faster target identification in the repeated configurations with similar magnitude in the two age groups, which indicates that this contextual cueing effect remained intact even in the older participants. To shed light on the underlying mechanisms, we measured and compared the amplitude of three event‐related potentials: the N2pc, the P3, and the response‐locked lateralized readiness potential (rLRP). In the younger group, the larger contextual cueing effect (novel‐minus‐repeated RT difference) correlated positively with a larger difference in amplitude for repeated compared with novel configurations for both the N2pc and the P3 components, but there was no correlation with the rLRP amplitude difference. However, in the older group, only the rLRP amplitude difference between novel and repeated configurations showed an enhancement with larger contextual cueing. These results suggest that different mechanisms are responsible for the contextual effect in the two age groups. It has both an early and an intermediate locus in younger adults: effective attentional allocation and successful stimulus categorization, or decision‐making confidence are involved; while in older adults, a late locus was identified: a more efficient response organization led to a faster reaction.
... Attentional engagement is known to be a critical factor in the facilitation of cognitive intervention or strategy training; however, a gap exists in knowledge relevant to the learning of procedurally based skills. In previous studies investigating the role of attention in implicit learning, researchers focused on healthy adult samples and found inconsistent results using a variety of experimental tasks, with some relying on attention (Jiang & Chun, 2001;Stadler, 1995) and others showing only minimal interaction (Mulligan, 1998;Rausei et al., 2007). A meta-analysis examining divided attention manipulations of experimental repetition priming tasks revealed a small, but significant, negative effect on implicit memory (Spataro et al., 2011). ...
Article
Importance: Attentional engagement is essential for successful cognitive rehabilitation, but little is known about longitudinal interactions with skill learning. Objective: To examine how attentional engagement is associated with mobile application skill learning for memory compensation. We hypothesized that patients with greater functional capacity would demonstrate faster learning and attentional engagement drop with skill acquisition, whereas patients with lesser functional capacity would have to maintain attentional engagement to progress throughout training. Design: A case series approach was used with longitudinal skill learning and electroencephalographic (EEG) data recorded across multiple trials and sessions of mobile calendar application training. Setting: The study was run in a hospital-based neuropsychology clinic. Participants: Seven participants (5 with acquired brain injury, 1 with mild cognitive impairment, and 1 healthy older adult) were recruited. Intervention: Mobile application operation was trained for the purpose of memory compensation. Skill learning was facilitated through a structured rehabilitation protocol, including large amounts of guided practice with the integration of errorless learning. Outcomes and measures: We quantified learning using the proportion of application steps completed independently at each session. We measured attentional engagement using an EEG marker: the Brain Engagement Index. Results: For fast learners, attentional engagement generally decreased as mobile application learning progressed. In contrast, slow learners exhibited stable engagement over time with consistent, yet much slower, progress in skill learning. Conclusions and relevance: The present data indicate that when cognitive impairment is more substantial, skill learning may involve greater attentional engagement. 
What This Article Adds: Patients undergoing memory rehabilitation may benefit from methods to enhance attentional engagement during skill learning when executive dysfunction is a considerable element of their cognitive profile. Monitoring attentional engagement during cognitive rehabilitation may be useful in identifying and addressing barriers to learning in real time.
... In prior studies, researchers have had success training perceptual skill in laboratory settings often using tasks where participants are exposed to many category instances that must be sorted correctly (Gauthier et al., 1998; Searston & Tangen, 2017c; Scott et al., 2008; Tanaka et al., 2005; Wong et al., 2009). However, developing perceptual expertise relies heavily on learning to search for, attend to, and extract useful or diagnostic information (Chua et al., 2014, 2015; Jiang & Chun, 2001; Kellman & Garrigan, 2009). Restricting or cuing participants to diagnostic information can improve decision-making accuracy in tasks such as classifying fish (Baruch et al., 2014), identifying aircraft (Dror et al., 2008), and matching unfamiliar faces (Towler et al., 2021). ...
Article
Full-text available
We used a longitudinal randomized control experiment to compare the effect of specific practice (training on one form of a task) and varied practice (training on various forms of a task) on perceptual learning and transfer. Participants practiced a visual search task for 10 hours over 2 to 4 weeks. The specific practice group searched for features only in fingerprints during each session, whereas the varied practice group searched for features in five different image categories. Both groups were tested on a series of tasks at four time points: before training, midway through training, immediately after training ended, and 6 to 8 weeks later. The specific group improved more during training and demonstrated greater pre-post performance gains than the varied group on a visual search task with untrained fingerprint images. Both groups improved equally on a visual search task with an untrained image category, but only the specific group's performance dropped significantly when tested several weeks later. Finally, both groups improved equally on a series of untrained fingerprint tasks. Practice with respect to a single category (versus many) instills better near transfer, but category-specific and category-general visual search training appear equally effective for developing task-general expertise.
... Vartak et al. [3] suggested that the level of sustained attention in preschoolers could be an indicator of later academic achievement. Many studies have confirmed that attention has a significant impact on children's academic performance [4][5][6]. As described above, attention is closely related to learning, and it is a key factor affecting learning performance. ...
Article
Full-text available
This study is aimed at investigating the effectiveness of virtual reality (VR) on attention training for elementary school students. A pre-test and post-test design of the quasi-experimental method was adopted and 66 third and fourth graders from an elementary school in Hsinchu, Taiwan were used as experimental subjects, divided into a control group and experimental group. The former used the computerized Attention Process Training (APT) system and the latter used the proposed VR system for attention training, both for two weeks. The attention scale for elementary school children was used to evaluate the participant’s attention before and after training, including the dimensions of focused attention, sustained attention, selective attention, alternating attention, and divided attention. A questionnaire survey was conducted to measure the learning anxiety and cognitive load during the training process. The experimental results indicated: (1) The overall attention was significantly improved after the training process for both groups, and the VR system was more effective than the computerized APT in improving children’s attention. (2) The questionnaire results showed that the experimental group had lower learning anxiety and cognitive load than the control group. According to the experimental results, VR training is more effective in improving the attention of participants while reducing their learning anxiety and cognitive load. Therefore, it is a useful tool for attention training in elementary schools.
... Selective attention modulates visual statistical learning, with greater visual statistical learning for attended information (stimuli participants were instructed to attend) compared with visual statistical learning for unattended information (stimuli participants were not instructed to attend). The modulating role of selective attention in visual statistical learning is consistent with previous findings in the literature (Cox et al., 2022; but see Musz et al., 2015; Turk-Browne et al., 2005), and more broadly, this finding aligns with the well-established theme in visual cognition that selective attention facilitates perceptual processing (e.g., De Weerd et al., 2006; Jiang & Chun, 2001). ...
Article
Full-text available
A cognitive function that is of interest when investigating age-related changes is statistical learning-the ability to detect and extract regularities in sensory information from our rich, dynamic, and complex environment. A previous study has suggested that there were age differences in visual statistical learning, with older adults demonstrating visual statistical learning of attended and unattended information (due to the "hyper-binding effect"). In the present study, we were interested in investigating whether there are age differences in visual statistical learning and whether stimulus category influenced visual statistical learning of unattended information in older adults. We tested two stimulus categories: highly familiar line drawings and abstract shapes. Participants completed a selective-attention task, in which regularities were embedded into both the attended and unattended visual streams. Then, participants completed a triplet-discrimination task, which assessed their ability to extract regularities from the attended and unattended visual streams. We also implemented a 4-point confidence-rating scale in the triplet-discrimination task as an assessment of participants' awareness of these regularities. There were four key findings. First, selective attention modulates visual statistical learning, with greater visual statistical learning for attended information than for unattended information. Second, there were age differences in visual statistical learning, but these differences were only observed for visual statistical learning of attended information. Third, stimulus category did not affect visual statistical learning of unattended information in older adults. Fourth, visual statistical learning occurs with awareness of statistical regularities. Further research is warranted to investigate the age-related mechanisms underlying visual statistical learning.
... In our study, children showed a marked decrease in performance in the high attention demand condition, suggesting that distractor symbols did interfere with their learning and that distributed attention negatively impacted their rule induction in the visual domain. These results are consistent with an emerging body of literature suggesting that attention may modulate statistical learning performance (e.g., Bartolotti et al., 2011; Jiang & Chun, 2001; Poepsel & Weiss, 2016; Toro et al., 2005; Turk-Browne et al., 2005). Studies have suggested that attention effects on statistical learning are observed to varying degrees depending on the interference manipulation (e.g., divided-attention task vs. dual-task paradigm; Toro et al., 2005). ...
Article
Full-text available
Purpose: The current study examined the role of attention and language ability in nonverbal rule induction performance in a demographically diverse sample of school-age children.
Method: The participants included 43 English-speaking monolingual and 65 Spanish–English bilingual children between the ages of 5 and 9 years. Core Language Index standard scores from the Clinical Evaluation of Language Fundamentals–Fourth Edition indexed children's language skills. Rule induction was measured via a visual artificial grammar learning task. Two equally complex finite-state artificial grammars were used. Children learned one grammar in a low attention condition (where children were exposed to symbol sequences with no distractors) and another grammar in a high attention condition (where distractor symbols were presented around the perimeter of the target symbol sequences).
Results: Overall, performance in the high attention condition was significantly worse than performance in the low attention condition. Children with robust language skills performed significantly better in the high attention condition than children with weaker language skills. Despite group differences in socioeconomic status, English language skills, and nonverbal intelligence, monolingual and bilingual children performed similarly to each other in both conditions.
Conclusion: The results suggest that the ability to extract rules from visual input is attenuated by the presence of competing visual information and that language ability, but not bilingualism, may influence rule induction.
... This explanation resonates with the distinction between attention as mental effort (or resource) and attention as selective processing (Chun and Turk-Browne 2007; Johnston and Dark 1986). Previous studies have shown that implicit learning, for example, can occur independently of attentional load (such as when a cognitively demanding secondary task is introduced), but requires task-relevant stimuli to be selectively attended (Jiang and Chun 2001; Jiménez and Mendez 1999; Turk-Browne et al. 2005). Our finding also suggests that attention can enhance memory not necessarily through the deployment of additional resources, but through a selective process, in which features that are relevant to task performance are selected for processing. ...
Article
Full-text available
Attention has been shown to enhance the processing of task-relevant information while suppressing the processing of task-irrelevant information. However, it is less clear whether this attentional modulation exists when there is an intrinsic dependence between task-relevant and task-irrelevant information, such as the dependence of temporal processing on spatial information. In this study, we used complex whole-body movement sequences to investigate the extent to which the task-irrelevant spatial information (trajectory) is processed when only the temporal information (rhythm) is in focus. Moreover, we examined whether, if the task-irrelevant spatial information is “co-selected” with the target temporal information as predicted by the intrinsic spatiotemporal dependence, task-driven attention actively directed to spatial information provides extra benefits. Through a two-phase experiment (an incidental encoding phase followed by a surprise memory test phase), we found that the task-irrelevant spatial information was not only perceived but also encoded in memory, providing further evidence in support of a relatively automatic co-selection of spatial information in temporal processing. Nevertheless, we also found that movements whose trajectories were intentionally attended to during the encoding phase were recognized better in the test phase than those that were not, indicating a further modulation from attention on incidental memory encoding and information processing.
... Although the two visual streams of stimuli (i.e., attended and unattended visual streams) were differentiated using colour cues (i.e., green or red), successful visual statistical learning requires the active segmentation of the attended information from the unattended information, and the extraction of the sequences of the stimuli without reliance on the colour cues because the stimuli were black in the test phase. The effect of selective attention has also been reported in studies investigating contextual cueing-a form of implicit incidental learning of associations between repeated, unchanged layouts of spatial information and target locations (Chun & Jiang, 1998;Jiang & Chun, 2001). ...
Article
Our visual system is built to extract regularities in how objects within our visual environment appear in relation to each other across time and space (‘visual statistical learning’). Existing research indicates that visual statistical learning is modulated by selective attention. Our attentional system prioritises information that enables behaviour; for example, animates are prioritised over inanimates (the ‘animacy advantage’). The present study examined the effects of selective attention and animacy on visual statistical learning in young adults (N = 284). We tested visual statistical learning of attended and unattended information across four animacy conditions: (i) living things that can self-initiate movement (animals); (ii) living things that cannot self-initiate movement (fruits and vegetables); (iii) non-living things that can generate movement (vehicles); and (iv) non-living things that cannot generate movement (tools and kitchen utensils). We implemented a four-point confidence-rating scale as an assessment of participants’ awareness of the regularities in the visual statistical learning task. There were four key findings. First, selective attention plays a critical role by modulating visual statistical learning. Second, animacy does not play a special role in visual statistical learning. Third, visual statistical learning of attended information cannot be exclusively accounted for by unconscious knowledge. Fourth, performance on the visual statistical learning task is associated with the proportion of stimuli that were named or labelled. Our findings support the notion that visual statistical learning is a powerful mechanism by which our visual system resolves an abundance of sensory input over time.
... Together with earlier work outside of the context of preparation, these results outline a complex interplay between implicit associative guidance and guidance by explicit awareness. For example, research on spatial attention has put forward the hypothesis that attentional selection of statistically regular features is imperative for implicitly learning these regularities (Turk-Browne et al., 2005; Jiang & Chun, 2001), and that associations can thereby be shaped by explicit control. Recent work on Stimulus-Response bindings has similarly illustrated how instructions can shape the nature of categorical associations (Longman et al., 2018; Longman, Liefooghe, & Verbruggen, 2019; Waszak, Wenke, & Brass, 2008). ...
Article
Full-text available
There is growing appreciation for the role of long-term memory in guiding temporal preparation in speeded reaction time tasks. In experiments with variable foreperiods between a warning stimulus (S1) and a target stimulus (S2), preparation is affected by foreperiod distributions experienced in the past, long after the distribution has changed. These effects from memory can shape preparation largely implicitly, outside of participants’ awareness. Recent studies have demonstrated the associative nature of memory-guided preparation. When distinct S1s predict different foreperiods, they can trigger differential preparation accordingly. Here, we propose that memory-guided preparation allows for another key feature of learning: the ability to generalize across acquired associations and apply them to novel situations. Participants completed a variable foreperiod task where S1 was a unique image of either a face or a scene on each trial. Images of either category were paired with different distributions with predominantly shorter versus predominantly longer foreperiods. Participants displayed differential preparation to never-before-seen images of either category, without being aware of the predictive nature of these categories. They continued doing so in a subsequent Transfer phase, after they had been informed that these contingencies no longer held. A novel rolling regression analysis revealed at a fine timescale how category-guided preparation gradually developed throughout the task, and that explicit information about these contingencies only briefly disrupted memory-guided preparation. These results offer new insights into temporal preparation as the product of a largely implicit process governed by associative learning from past experiences.
Article
Humans can learn to attentionally suppress salient, irrelevant information when it consistently appears at a predictable location. While this ability confers behavioral benefits by reducing distraction, the full scope of its utility is unknown. As people locomote and/or shift between task contexts, known-to-be-irrelevant locations may change from moment to moment. Here we assessed a context-dependent account of learned suppression: can individuals flexibly update the locations they suppress, from trial to trial, as a function of task context? Participants searched for a shape target in displays that sometimes contained a salient, irrelevant color singleton distractor. When one scene category was presented in the background (e.g., forests), the distractor had a greater probability of appearing in one display location than the others; for another scene category (e.g., cities), we used a different high-probability location. Results in Experiments 1 and 2 (and in the Online Supplementary Material) failed to show any context-dependent suppression effects, consistent with earlier work. However, in Experiments 3 and 4, we reinforced the separation between task contexts by using distinct sets of shape and color stimuli as well as distinct kinds of reported features (line orientation vs. gap judgment). Results now showed robust task-dependent signatures of learned spatial suppression and did not appear to be tied to explicit awareness of the relationship between context and high-probability distractor location. Overall, these results reveal a mechanism of learned spatial suppression that is flexible and sensitive to task contexts, albeit one that requires sufficient processing of these contexts.
Article
Full-text available
Our attention can sometimes be disrupted by salient but irrelevant objects in the environment. This distractor interference can be reduced when distractors appear frequently, allowing us to anticipate their presence. However, it remains unknown whether distractor frequency can be learned implicitly across distinct contexts. In other words, can we implicitly learn that in certain situations a distractor is more likely to appear, and use that knowledge to minimize the impact that the distractor has on our behavior? In two experiments, we explored this question by asking participants to find a unique shape target in displays that could contain a color singleton distractor. Forest or city backgrounds were presented on each trial, and unbeknownst to the participants, each image category was associated with a different distractor probability. We found that distractor interference was reduced when the image predicted a high rather than low probability of distractor presence on the upcoming trial, even though the location and (in Experiment 2) the color of the distractor was completely unpredictable. These effects appear to be driven by implicit rather than explicit learning. We conclude that implicit learning of context-specific distractor probabilities can drive flexible strategies for the reduction of distractor interference.
Article
Full-text available
With attentional mechanisms, humans select and de-select information from the environment. But does selective attention modulate implicit learning? We tested whether the implicit acquisition of contingencies between features is modulated by the task-relevance of those features. We implemented the contingencies in a novel variant of the contextual cueing paradigm. In such a visual search task, participants could use non-spatial cues to predict target location, and then had to discriminate target shapes. In Experiment 1, the predictive feature for target location was the shape of the distractors (task-relevant). In Experiment 2, the color feature of distractors (task-irrelevant) cued target location. Results showed that participants learned to predict the target location from both the task-relevant and the task-irrelevant feature. Subsequent testing did not suggest explicit knowledge of the contingencies. To further test the significance of task-relevance in a cue competition situation, in Experiment 3 we provided two redundantly predictive cues, shape (task-relevant) and color (task-irrelevant), simultaneously, and subsequently tested them separately. There were no observed costs of single predictive cues when compared to compound cues. The results were not indicative of overshadowing effects, on the group or individual level, or of reciprocal overshadowing. We conclude that the acquisition of contingencies occurs independently of task-relevance and discuss this finding in the framework of the event coding literature.
Article
Full-text available
Statistical learning is a person’s ability to automatically learn environmental regularities through passive exposure. Since the earliest studies of statistical learning in infants, it has been debated exactly how “passive” this learning can be (i.e., whether attention is needed for learning to occur). In Experiment 1 of the current study, participants performed a serial feature search task where they searched for a target shape among heterogeneous nontarget shapes. Unbeknownst to the participants, one of these nontarget shapes was presented much more often in one location. Even though the regularity concerned a nonsalient, nontarget item that did not receive any attentional priority during search, participants still learned its regularity (responding faster when it was presented at this high-probability location). While this may suggest that not much, if any, attention is needed for learning to occur, follow-up experiments showed that if an attentional strategy (i.e., color subset search or exogenous cueing) effectively prevents attention from being directed to this critical regularity, incidental learning is no longer observed. We conclude that some degree of attention to a regularity is needed for visual statistical learning to occur.
Article
Contextual cueing is a phenomenon in which repeatedly encountered arrays of items can enhance the visual search for a target item. This is widely attributed to attentional guidance driven by contextual memory acquired during visual search. Some studies suggest that children may have an immature ability to use contextual cues compared to adults, while others argue that contextual learning capacity is similar across ages. To test the development of context-guided attention, this study compared contextual cueing effects among three age groups: adults (aged 18–33 years, N = 32), teenagers (aged 15–17 years, N = 41), and younger children (aged 8–9 years, N = 43). Moreover, this study introduced a measure of response time variability that tracks fluctuations in response time throughout the experiment, in addition to the conventional analysis of response times. The results showed that all age groups demonstrated significantly faster responses in repeated than non-repeated search contexts. Notably, adults and teenagers exhibited smaller response time variability in repeated contexts than in non-repeated ones, while younger children did not. This implies that children are less efficient at consolidating contextual information into a stable memory representation, which may lead to less stable attentional guidance during visual search.
Article
Background Contextual cueing refers to the phenomenon in which individuals utilize frequently encountered environmental contexts, comprised of distractors, as cues to expedite a target search. Due to the conflict between the widespread occurrence of contextual cue transfer and the observed impact of changing the identity of distractors on contextual cue learning, the content of contextual cue representations remains contentious. Considering the independent nature of contextual cue learning and expression, our proposition is twofold: (1) Contextual cue representations are stimulus-specific, and (2) their expression is highly flexible. Methods To validate the model, two experiments were conducted. Experiment 1 aimed to confirm the hypothesis that contextual cue representations are stimulus-specific. We manipulated the identity consistency of distractors within repeated scenes during contextual cue learning. Difficulty in contextual cue learning under the identity-changing condition would suggest the necessity of identity within contextual cue representation, indicating the stimulus-specific nature of these representations. Experiment 2 was designed to affirm the conclusion of Experiment 1 and explore the flexibility in the expression of contextual cue representations. This experiment comprised two phases: learning and testing. During the learning phase, participants were exposed to two sets of repeated scenes in different colors under two learning conditions: load and no-load. Working memory load was introduced to interfere with the expression to prevent it from becoming automatic. In the subsequent testing phase, the colors of the two scene sets were interchanged to impede retrieval based on identity. If both load and no-load conditions demonstrate similar levels of contextual cue effects during the testing phase, it implies the flexibility in the expression of contextual cue representations and confirms the conclusion of Experiment 1. 
Results In Experiment 1, a notable contextual cue learning effect was observed under the identity-consistent condition (p = 0.001). However, this effect was not evident under the identity-changing condition (p = 0.286). This finding strongly supports the stimulus-specific nature of contextual cue representation. In Experiment 2, the contextual cueing effect appeared but did not show a significant difference between the two conditions (t(23) = 0.02, p = 0.987, BF10 = 0.215), indicating the cognitive system’s ability to flexibly redefine retrieval cues. This adaptability aligns with our hypothesis, confirming the high flexibility of the expression process of contextual cue representations and corroborating the conclusion of Experiment 1.
Preprint
Full-text available
During visual search, we quickly learn to attend to an object’s likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or statistical regularities. Here, we tested how these different types of learning aid the utilisation of established memories for different purposes. Participants learned contextual associations or statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Article
Air pollution contributes to global warming and climate change, leading to extreme weather events and rising sea levels. Promoting sustainable practices has become the focus of policy programs and awareness campaigns. In this study, we propose an effective and powerful way to promote eco-driving behaviors by drawing on data storytelling. Our study shows that animated narrative and narrative sequence can trigger varying emphases on the feasibility and desirability of eco-driving practices, affecting actual driving behaviors and attitudes toward efficient driving. Specifically, in two experiments, we find that a chronological narrative sequence with animation improves subsequent driving efficiency and efficient driving attitudes. Visualization designers may consider employing narrative sequence and animation to facilitate individuals’ information comprehension and behavioral changes. Policymakers can also encourage ecological practices through effective designs of data storytelling.
Article
Full-text available
The article presents an analysis of the effect of context in cognitive activity. Contextual influences are expressed in changes of productivity and time of problem solving under the influence of actual irrelevant information or previously formed knowledge structures. The importance of studying contextual variables stems from their fundamental role in cognitive processes. Examples of contextual mediation are the effects of dependence of perception of an object (figure) on the perceptual environment (background), priming effects, effects of awareness of multivalued information, effects of context-dependent memory, effects of contextual cues, effects of functional fixation in solving thinking tasks, etc. By analogy with types of memory it is proposed to differentiate ultra-short-term, short-term and long-term (stable) contexts. A prospect in the study of contextual influences can become the study of types and character of interaction of contexts having different characteristics. The latter include: "homogeneity/heterogeneity" of the context, "relevance/irrelevance" to the task, "power" - the integration of local contexts in a single context, "congruence/dissociation" as the correspondence/dissimilarity of different contexts to each other.
Chapter
This volume provides the most comprehensive and up-to-date compendium of theory and research in the field of human intelligence. Each of the 42 chapters is written by world-renowned experts in their respective fields, and collectively, they cover the full range of topics of contemporary interest in the study of intelligence. The handbook is divided into nine parts: Part I covers intelligence and its measurement; Part II deals with the development of intelligence; Part III discusses intelligence and group differences; Part IV concerns the biology of intelligence; Part V is about intelligence and information processing; Part VI discusses different kinds of intelligence; Part VII covers intelligence and society; Part VIII concerns intelligence in relation to allied constructs; and Part IX is the concluding chapter, which reflects on where the field is currently and where it still needs to go.
Article
Full-text available
Studies on joint action show that when two actors take turns attending to each other’s targets, which appear one at a time, a partner’s target is accumulated in memory. However, in the real world, actors may not be certain that they attend to the same object because multiple objects often appear simultaneously. In this study, we asked participant pairs to search in parallel for different targets among multiple objects and investigated memory for a partner’s target. We employed the contextual cueing paradigm, in which repetitive search forms associative memory between a target and a configuration of distractors that facilitates search. During the learning phase, exemplars of three target categories (i.e., bird, shoe, and tricycle) were presented among unique objects, and participant pairs searched for them. In Experiment 1, the learning phase was followed by a memory test on target exemplars. Consequently, the partner’s target was better recognized than the target that nobody searched for. In Experiments 2a and 2b, the memory test was replaced with a transfer phase, where one individual from the pair searched for the category that nobody had searched for while the other individual searched for the category the partner had searched for in the learning phase. The transfer phase did not show search facilitation underpinned by associative memory between the partner’s target and distractors. These results suggest that when participant pairs search for different targets in parallel, they accumulate the partner’s target in memory but may not form the associative memory with the distractors that would facilitate its search.
Article
Full-text available
Visual features previously associated with reward can capture attention even when task-irrelevant, a phenomenon known as value-driven attention capture (VDAC). VDAC persists without reinforcement, unlike other forms of learning, where removing reinforcement typically leads to extinction. In five experiments, factors common to many studies were manipulated to examine their impact on VDAC and its extinction. All experiments included learning and test phases. During learning, participants completed a visual search task during which one of two target colors was associated with a reward, and the other with no reward. During test, 1 week later, participants completed another visual search task in which the reward association was not reinforced. When a rewarded feature remained task-relevant (Experiment 1), VDAC was observed. When the rewarded feature was made task-irrelevant (Experiments 2–5) there was no evidence of a VDAC effect, except when the target feature was physically salient and there was a reduction in the frequency of exposure to the reward-associated feature (Experiment 5). We failed to find evidence of VDAC in Experiments 2–4, suggesting that susceptibility to VDAC may depend on the demands of the task. When VDAC was observed, extinction was also observed. This indicates that VDAC is subject to extinction, as would be expected from an effect driven by reinforcement learning.
Article
Contextual cueing can depend on global configuration or local item position. We investigated the role of these two kinds of cues in the lateralization of contextual cueing effects. Cueing by item position was tested by recombining two previously learned displays, keeping the individual item locations intact, but destroying the global configuration. In contrast, cueing by configuration was investigated by rotating learned displays, thereby keeping the configuration intact but changing all item positions. We observed faster search for targets in the left display half, both for repeated and new displays, along with more first fixation locations on the left. Both position and configuration cues led to faster search, but the search time reduction compared to new displays due to position cues was comparable in the left and right display half. In contrast, configural cues led to increased search time reduction for right half targets. We conclude that only configural cues enabled memory-guided search for targets across the whole search display, whereas position cueing guided search only to targets in the vicinity of the fixation. The right-biased configural cueing effect is a consequence of the initial leftward search bias and does not indicate hemispheric dominance for configural cueing.
Article
Visual search is facilitated when a target item is positioned within an invariant arrangement of task-irrelevant distractor elements (relative to non-repeated arrangements), because learnt target-distractor spatial associations guide visual search. While such configural search templates stored in long-term memory (LTM) cue focal attention towards the searched-for target after only a few display repetitions, adaptation of existing configural LTM requires extensive training. The current work examined the important question of whether individuals claimed to have better attention performance (i.e., action video game players; AVGP) show improved acquisition vs. adaptation of configural LTM (relative to non-gamers; NAVGP) in a visual-search task with repeated and non-repeated search configurations, consisting of an initial learning phase and, following target relocation, a subsequent adaptation phase. We found that contextual facilitation of search reaction times was more pronounced for AVGP relative to NAVGP in initial learning, probably reflecting enhanced learn-to-learn capabilities in the former individuals. However, this advantage did not carry over to the adaptation phase, in which gamers and non-gamers exhibited similar performance, suggesting that the attention control required for overcoming visual distraction from previously learned (but no longer relevant) target positions is relatively uninfluenced by action-game experience.
Article
Perception of our external environment is not isolated from the influence of our internal thoughts, and past evidence points to a possible common associative mechanism underlying both the perception of scenes and our internal thought. Here, we investigated the nature of the interaction between an associative mindset and scene perception, hypothesizing a functional advantage to an associative thought pattern in the perception of scenes. Experiments 1 and 2 showed that associative thinking facilitates scene perception, which evolved over the course of the experiments. In contrast to scene perception, Experiment 3 showed that associative thinking hinders the perception of mundane objects, in which associative information is minimized. Nevertheless, object perception was facilitated when associative thinking was reduced. This double dissociation suggests that an associative mind is more receptive of externally perceived associative information, and that a match between the orientation of internal and external processing may be key for perception.
Article
Full-text available
We can extract and learn spatial regularities in the arrangement of objects in our environment. Sometimes this happens unconsciously, in the course of solving tasks unrelated to these regularities, and it can influence our subsequent actions. However, spatial regularities are not always learned, and it remains unclear which conditions are decisive for their implicit acquisition. Can a spatial regularity be learned without motor activity? To what extent does the learning of a spatial regularity rest on the perceptual features of the perceived stimuli? The present study sought answers to these questions. In two experiments, participants were asked to solve a series of simple tasks, in each of which the spatial organization of the stimulus material satisfied one and the same rule. In the first experiment the task involved finding a figure of a required size, and in the second, finding a required number. Thus, in the first study the regularity to be learned was tied to the perceptual features of the stimulus material, whereas in the second it was not. The planned analysis revealed no learning effect in either experiment, neither in changes of solution time nor in performance on a classification task. The heterogeneity of the tasks motivated an exploratory analysis, conducted with the tasks divided into "simple" and "difficult". Its results indicated that implicit learning was possible only for the "difficult" tasks. The implicit learning effect was manifested in changes of solution time but was not detected in the classification task. For tasks tied to the perceptual features of the stimulus in the first experiment, evidence was found for both a positive and a negative learning effect.
For tasks tied to the semantic meaning of the stimulus in the second experiment, no evidence of a positive learning effect was found. The results are compared with work in the experimental paradigm of implicit contextual cueing.
Article
Full-text available
It currently remains unclear what exactly is learned during the implicit acquisition of complex regularities. Is implicit learning of a complex regularity as a whole possible, or are we dealing with the learning of its individual elements? When a regularity concerns the arrangement of letters in an anagram, knowledge of it is typically applied consciously and in a controlled manner. It remains unclear whether, in this case, the solution rule can be learned implicitly and such knowledge applied without control. The article describes an experiment in which we attempted to hinder explication of the rule and to detect manifestations of implicit knowledge of the regularity. The possibilities of holistic versus fragmentary acquisition of the scheme were considered. We hypothesized that implicit knowledge of the rule could influence the choice of one of two possible ways of solving an anagram, and could also hinder the solution of a simple anagram that did not conform to the learned scheme. Awareness of the knowledge was controlled via confidence ratings in a classification task (choosing the way the anagram was composed). The experiment was conducted online with a sample of 375 participants. The results do not allow an unambiguous conclusion about conscious or unconscious acquisition of the regularity, nor about whether a rule can be learned as a whole or as a set of fragments. Limitations of the experiment and factors that may have prevented detection of the knowledge are discussed, along with possible directions for revising the experimental design and stimulus material for further research.
Article
Full-text available
The present report investigated whether nonmusicians can incidentally learn musical skills needed for sight-reading. On each trial, participants identified a note name written inside of a note on the musical staff. In Experiment 1, each note was presented frequently with the congruent note name (e.g., "do" with the note for "do") and rarely with the incongruent names (e.g., "do" with the note for "fa"). With or without deliberate learning instructions, a robust contingency learning effect was observed: faster responses for congruent trials compared to incongruent trials. Participants also explicitly identified the meaning of the note positions more accurately than chance. Experiment 2 ruled out the potential influence of preexisting knowledge on the contingency learning effect by presenting notes most often with an incongruent note name. Robust learning was again observed, suggesting that participants acquired sufficient knowledge of musical notation to produce automatic influences on behavior (e.g., akin to the interference effect previously found in skilled musicians). A congruency effect was additionally observed in Experiment 2, however. Experiment 3 further explored to what extent this congruency effect might be due to prior music knowledge and/or spatial stimulus-response compatibility between note and response locations (analogous to the SMARC effect). Overall, our results open up new avenues for investigating the incidental learning of complex material, musical or otherwise, and for reinforcing learning even further.
Article
Full-text available
In contextual cueing tasks, participants can use a repeating local context to learn to detect the target, yet most contextual cueing studies have relied on repeating global context properties. We examined whether observers can use local context repetitions in a similar manner as they use global context repetitions. In addition, we examined how reward-predicting context features modulate the use of local and global contexts. Participants searched through contexts in which either the entire context configuration or only a local context around the target repeated, intermixed with novel contexts. Half of the context items appeared in a color signaling either low or high reward. We found that local context repetitions led to comparable benefits in response times and fixation count as global context repetitions did. Surprisingly, reward magnitude did not affect performance in local nor in global contexts. The results suggest that a local chunk of distractors can be used for context learning and attention guidance in a similar manner as the global context configuration. We suggest that the proportion of repeated and novel context trials is crucial for context learning and that our combination of locally and globally repeating contexts provided an environment that facilitated learning in both context types because it allowed predicting the target location from the context in most of the trials.
Article
The last ten years of attention research have witnessed a revolution, replacing a theoretical dichotomy (top-down vs. bottom-up control) with a trichotomy (biased by current goals, physical salience, and selection history). This third new mechanism of attentional control, selection history, is multifaceted. Some aspects of selection history must be learned over time whereas others reflect much more transient influences. A variety of different learning experiences can shape the attention system, including reward, aversive outcomes, past experience searching for a target, target‒non-target relations, and more. In this review, we provide an overview of the historical forces that led to the proposal of selection history as a distinct mechanism of attentional control. We then propose a formal definition of selection history, with concrete criteria, and identify different components of experience-driven attention that fit within this definition. The bulk of the review is devoted to exploring how these different components relate to one another. We conclude by proposing an integrative account of selection history centered on underlying themes that emerge from our review.
Article
Full-text available
This article discusses the conceptualization, measurement, and validity of a recently emerged construct in the field of second language acquisition (SLA): implicit language aptitude (alternatively, "implicit aptitude"). Implicit aptitude is a set of cognitive abilities that enable learners to make unconscious computations of the distributional and transitional probabilities of linguistic input. Implicit aptitude is key to an accurate understanding of the cognitive foundation of language learning and contributes significantly to the advancement of SLA theory and pedagogy. The article starts by clarifying the concept and components of implicit aptitude, elaborating its role in SLA theories, identifying its attributes, and discussing its measurement. It then synthesizes the empirical evidence on its divergent, convergent, and predictive validity, which refers to whether it is distinct or separable from explicit aptitude, whether measures of implicit aptitude are correlated, and whether it is predictive of learning outcomes, respectively. Next, the article provides an overview of the seven empirical studies included in this special issue that examined implicit aptitude from various perspectives. The article concludes by identifying future directions.
Article
Full-text available
We investigated if contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric or egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.
Article
Full-text available
The role of attention in implicit sequence learning was investigated in 3 experiments in which participants were presented with a serial reaction time (SRT) task under single- or dual-task conditions. Unlike previous studies using this paradigm, these experiments included only probabilistic sequences of locations and arranged a counting task performed on the same stimulus on which the SRT task was being carried out. Another sequential contingency was also arranged between the dimension to be counted and the location of the next stimulus. Results indicate that the division of attention barely affected learning but that selective attention to the predictive dimensions was necessary to learn about the relation between these dimensions and the predicted one. These results are consistent with a theory of implicit sequence learning that considers this learning as the result of an automatic associative process running independently of attentional load, but that would associate only those events that are held simultaneously in working memory.
Article
Full-text available
The early and late selection debate may be resolved if perceptual load of relevant information determines the selective processing of irrelevant information. This hypothesis was tested in 3 studies; all used a variation of the response competition paradigm to measure irrelevant processing when load in the relevant processing was varied. Perceptual load was manipulated by relevant display set size or by different processing requirements for identical displays. These included the requirement to process conjunctions versus isolated features and the requirement to perform simple detection of a character's presence versus difficult identification of its size and position. Distractors' interference was found only under low-load conditions. Because the distractor was usually clearly distinct from the target, it is concluded that physical separation is not a sufficient condition for selective perception; overloading perception is also required. This allows a compromise between early and late selection views and resolves apparent discrepancies in previous work.
Article
Full-text available
Search for conjunctions of highly discriminable features can be rapid or even parallel. This article explores three possible accounts based on (a) perceptual segregation, (b) conjunction detectors, and (c) inhibition controlled separately by two or more distractor features. Search rates for conjunctions of color, size, orientation, and direction of motion correlated closely with an independent measure of perceptual segregation. However, they appeared unrelated to the physiology of single-unit responses. Each dimension contributed additively to conjunction search rates, suggesting that each was checked independently of the others. Unknown targets appear to be found only by serial search for each in turn. Searching through 4 sets of distractors was slower than searching through 2. The results suggest a modification of feature integration theory, in which attention is controlled not only by a unitary “window” but also by a form of feature-based inhibition.
Article
Full-text available
The conclusion that scene knowledge interacts with object perception depends on evidence that object detection is facilitated by consistent scene context. Experiment 1 replicated the I. Biederman, R. J. Mezzanotte, and J. C. Rabinowitz (1982) object-detection paradigm. Detection performance was higher for semantically consistent versus inconsistent objects. However, when the paradigm was modified to control for response bias (Experiments 2 and 3) or when response bias was eliminated by means of a forced-choice procedure (Experiment 4), no such advantage obtained. When an additional source of biasing information was eliminated by presenting the object label after the scene (Experiments 3 and 4), there was either no effect of consistency (Experiment 4) or an inconsistent object advantage (Experiment 3). These results suggest that object perception is not facilitated by consistent scene context.
Article
Full-text available
When looking at a scene, observers feel that they see its entire structure in great detail and can immediately notice any changes in it. However, when brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: identification of changes becomes extremely difficult, even when changes are large and made repeatedly. Identification is much faster when a verbal cue is provided, showing that poor visibility is not the cause of this difficulty. Identification is also faster for objects mentioned in brief verbal descriptions of the scene. These results support the idea that observers never form a complete, detailed representation of their surroundings. In addition, results also indicate that attention is required to perceive change, and that in the absence of localized motion signals it is guided on the basis of high-level interest.
Article
Full-text available
Tested the 2-process theory of detection, search, and attention presented by the current authors (1977) in a series of experiments. The studies (a) demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; (b) trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and (c) show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention.
Article
Full-text available
In this study we investigated the role of attention, sequence structure, and effector specificity in learning a structured sequence of actions. Experiment 1 demonstrated that simple structured sequences can be learned in the presence of attentional distraction. The learning is unaffected by variation in distractor task difficulty, and subjects appear unaware of the structure. The structured sequence knowledge transfers from finger production to arm production (Experiment 2), suggesting that sequence specification resides in an effector-independent system. Experiments 3 and 4 demonstrated that only structures with at least some unique associations (e.g., any association in Structure 15243… or 4 to 3 in Structure 143132…) can be learned under attentional distraction. Structures with all items repeated in different orders in different parts of the structure (e.g., Sequence 132312…) require attention for learning. Such structures may require hierarchic representation, the construction of which takes attention.
Article
Full-text available
A 2-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically--without S control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the S. A series of studies, with approximately 8 Ss, using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search was utilized in varied-mapping paradigms, and in the present studies, it took the form of serial, terminating search.
Article
Full-text available
I examine the phenomenon of implicit learning, the process by which knowledge about the rule-governed complexities of the stimulus environment is acquired independently of conscious attempts to do so. Our research with the two seemingly disparate experimental paradigms of synthetic grammar learning and probability learning is reviewed and integrated with other approaches to the general problem of unconscious cognition. The conclusions reached are as follows: (a) Implicit learning produces a tacit knowledge base that is abstract and representative of the structure of the environment; (b) such knowledge is optimally acquired independently of conscious efforts to learn; and (c) it can be used implicitly to solve problems and make accurate decisions about novel stimulus circumstances. Various epistemological issues and related problems such as intuition, neuroclinical disorders of learning and memory, and the relationship of evolutionary processes to cognitive science are also discussed.
Article
Full-text available
In conjunction search, response latencies usually increase with the number of displayed elements, suggesting serial, self-terminating search through all elements. In line with the results of H. Egeth, R. Virzi, and H. Garbart (1984), the present study shows that Ss do not necessarily search all display elements, but can limit their search to a color-defined subset of elements. The results make clear that selective search for a color-defined subset does not depend on saliency of the subset (Exp 1), that selective search can be purely color-based and does not depend on luminance (Exp 2), and that Ss can flexibly change which subset they are searching (Exp 3). Exp 4 showed that subset-selective search also occurs without fast absent responses as found in Exps 1–3 and that for selective search no explicit instruction is required. Subset-selective search is a likely strategy in conjunction search.
Article
Full-text available
Invariant spatial relationships of objects may provide a rich source of contextual information. Visual context can assist localization of individual objects via an implicit learning mechanism, as revealed in the contextual cueing paradigm (Chun & Jiang, 1998). What defines a visual context? How robust is contextual learning? And is it perceptually constrained? Here we investigate whether both local context that surrounds a target, and long-range context that does not spatially coincide with a target, can influence target localization. In the contextual cueing task, participants implicitly learned a context by repeated exposure to items arranged in invariant patterns. Experiments 1 and 2 suggest that only local context facilitates target localization. However, Experiment 3 showed that long-range context can prime target location when target and context are not separated by random information. Experiment 4 showed that grouping by colour does not affect contextual cueing, suggesting that spatial features play a more important role than surface features in spatial contextual cueing. In separate analyses, visual hemifield differences were found for learning and performance. In sum, the results indicate that implicit learning of spatial context is robust across noise and biased towards spatially grouped information.
Article
Full-text available
Seven experiments were conducted to examine the role of attention in automatization. Ss searched 2-word displays for members of a target category in divided-attention, focused-attention, and dual-task conditions. The main issue was whether attention conditions would affect what Ss learned about co-occurrences of the words in the displays. The attention hypothesis, derived from the instance theory of automaticity, predicts learning of co-occurrences in divided-attention and dual-task conditions in which Ss attend to both words but not in focused-attention conditions in which Ss only attend to 1 word. The data supported the attention hypothesis and therefore the instance theory. This article concerns what is learned during automatization. This is an important question in the automaticity literature, especially from the perspective of memory-based theories, such as the instance theory of automaticity (Logan, 1988, 1990, 1992). Memory-based theories assume that automatic performance is based on retrieval of representations of past solutions from memory. What "gets into" those representations during learning and what is "taken out" of them during automatic performance are central questions in memory-based theories. The purpose of this article is to evaluate the answers offered by the instance theory of automaticity. The answers are important because they are derived from two of the three main assumptions of the theory: obligatory encoding and instance representation. The obligatory encoding assumption says that attention determines what gets into the representation. The instance representation assumption says that the representations in memory are instances—separate representations of co-occurrences. Attention determines what is in an instance; attention determines which co-occurrences are remembered. The experiments tested this hypothesis.
Article
Full-text available
In this paper, we propose that the debate concerning the locus of attentional selection can be resolved by specifying the conditions under which early selection is possible. In the first part, we present a theoretical discussion that integrates aspects from structural and capacity approaches to attention and suggest that perceptual load is a major factor in determining the locus of selection. In the second part, we present a literature review that examines the conditions influencing the processing of irrelevant information. This review supports the conclusion that a clear physical distinction between relevant and irrelevant information is not sufficient to prevent irrelevant processing; early selection also requires that the perceptual load of the task be sufficiently high to exceed the upper limit of available attentional resources.
Article
Full-text available
Subjects looked at two optically superimposed video screens, on which two different kinds of things were happening. In the principal condition, they were required to follow the action in one episode (by pressing keys when significant events occurred) and ignore the other. They could do this without difficulty, although both were present in the same fully overlapped visual field. Odd events in the unattended episode were rarely noticed. It was very difficult to monitor both episodes at once. Performance was no better when the two episodes were presented to different eyes (dichoptic condition) than when both were given binocularly. It is argued that selective attention does not involve special mechanisms to reject unwanted information, but is a direct consequence of skilled perceiving.
Article
Full-text available
Important differences have emerged between introspective measures of learning, such as recall and recognition, and performance measures, in which the performance of a task is facilitated by prior experience. Introspective remembering of unattended stimuli is poor. We investigated whether performance measures would also show a strong dependence on attention. Subjects performed a serial reaction time task comprised of a repeating 10-trial stimulus sequence. When this task was given under dual-task conditions, acquisition of the sequence as assessed by verbal reports and performance measures was minimal. Patients with Korsakoff's syndrome learned the sequence despite their lack of awareness of the repeating pattern. Results are discussed in terms of the attentional requirements of learning, the relation between learning and awareness, preserved learning in amnesia, and the separation of memory systems.
Article
Full-text available
To reexamine the role of covert attention in visual search, the authors directly manipulated attention by peripherally cueing the target location and analyzed its effects on the set-size and the eccentricity effects. Observers participated in feature and conjunction tasks. Experiment 1 used precues, and Experiment 2 used postcues in a yes-no task under valid-, invalid-, and neutral-cueing conditions. Experiments 3 and 4 used a 2-interval alternative forced-choice visual-search task under cued and neutral conditions. Precueing the target location improved performance in feature and conjunction searches; postcueing did not. For the cued targets, the eccentricity effect for features and conjunctions was diminished, suggesting that the attentional mechanism improves the quality of the sensory representation of the attended location. The conjunction set-size effect was reduced but not eliminated. This questions serial-search models that attribute a major role to covert attention in visual search.
Article
Full-text available
Through rapid serial visual presentation (RSVP), we asked Ss to identify a partially specified letter (target) and then to detect the presence or absence of a fully specified letter (probe). Whereas targets are accurately identified, probes are poorly detected when they are presented during a 270-ms interval beginning 180 ms after the target. Probes presented immediately after the target or later in the RSVP stream are accurately detected. This temporary reduction in probe detection was not found in conditions in which a brief blank interval followed the target or Ss were not required to identify the target. The data suggest that the presentation of stimuli after the target but before target-identification processes are complete produces interference at a letter-recognition stage. This interference may cause the temporary suppression of visual attention mechanisms observed in the present study.
Article
Full-text available
Subjects searched sets of items for targets defined by conjunctions of color and form, color and orientation, or color and size. Set size was varied and reaction times (RT) were measured. For many unpracticed subjects, the slopes of the resulting RT X Set Size functions are too shallow to be consistent with Treisman's feature integration model, which proposes serial, self-terminating search for conjunctions. Searches for triple conjunctions (Color X Size X Form) are easier than searches for standard conjunctions and can be independent of set size. A guided search model similar to Hoffman's (1979) two-stage model can account for these data. In the model, parallel processes use information about simple features to guide attention in the search for conjunctions. Triple conjunctions are found more efficiently than standard conjunctions because three parallel processes can guide attention more effectively than two.
Article
Full-text available
A new theory of search and visual attention is presented. Results support neither a distinction between serial and parallel search nor between search for features and conjunctions. For all search materials, instead, difficulty increases with increased similarity of targets to nontargets and decreased similarity between nontargets, producing a continuum of search efficiency. A parallel stage of perceptual grouping and description is followed by competitive interaction between inputs, guiding selective access to awareness and action. An input gains weight to the extent that it matches an internal description of that information needed in current behavior (hence the effect of target-nontarget similarity). Perceptual grouping encourages input weights to change together (allowing "spreading suppression" of similar nontargets). The theory accounts for harmful effects of nontargets resembling any possible target, the importance of local nontarget grouping, and many other findings.
Article
Full-text available
Theories of visual attention deal with the limit on our ability to see (and later report) several things at once. These theories fall into three broad classes. Object-based theories propose a limit on the number of separate objects that can be perceived simultaneously. Discrimination-based theories propose a limit on the number of separate discriminations that can be made. Space-based theories propose a limit on the spatial area from which information can be taken up. To distinguish these views, the present experiments used small (less than 1 degree), brief, foveal displays, each consisting of two overlapping objects (a box with a line struck through it). It was found that two judgments that concern the same object can be made simultaneously without loss of accuracy, whereas two judgments that concern different objects cannot. Neither the similarity nor the difficulty of required discriminations, nor the spatial distribution of information, could account for the results. The experiments support a view in which parallel, preattentive processes serve to segment the field into separate objects, followed by a process of focal attention that deals with only one object at a time. This view is also able to account for results taken to support both discrimination-based and space-based theories.
Article
Full-text available
It has recently been proposed that in searching for a target defined as a conjunction of two or more separable features, attention must be paid serially to each stimulus in a display. Support for this comes from studies in which subjects searched for a target that shared a single feature with each of two different kinds of distractor items (e.g., a red O in a field of black Os and red Ns). Reaction time increased linearly with display size. We argue that this design may obscure evidence of selectivity in search. In an experiment in which the numbers of the two distractors were unconfounded, we find evidence that subjects can search through specified subsets of stimuli. For example, subjects told to search through just the Os to find the red O target do so without searching through Ns. Implications of selective search are discussed.
Article
Full-text available
When 2 targets are presented among distractors in rapid serial visual presentation, correct identification of the 1st target results in a deficit for a 2nd target appearing within 200-500 ms. This attentional blink (AB; J.E. Raymond, K.L. Shapiro, & K.M. Arnell, 1992) was examined for categorically defined targets (letters among nonletters) in 7 experiments. AB was obtained for the 2nd letter target among digit distractors (Experiment 1) and also for a 3rd target (Experiment 2). Results of Experiments 3-5 confirmed that AB is triggered by local interference from immediate posttarget stimulation (Raymond et al., 1992) and showed that AB is modulated by the discriminability between the 1st target and the immediately following distractor. Experiments 5-7 further examined the effects of both local interference and global discriminability. A 2-stage model is proposed to account for the AB results.
Article
Full-text available
Implicit learning is nonepisodic learning of complex information in an incidental manner, without awareness of what has been learned. Implicit learning experiments use 3 different stimulus structures (visual, sequence, and function) and 3 different dependent measures or response modalities (conceptual fluency, efficiency, and prediction and control). Implicit learning may require a certain minimal amount of attention and may depend on attentional and working memory mechanisms. The result of implicit learning is implicit knowledge in the form of abstract (but possibly instantiated) representations rather than verbatim or aggregate representations. Implicit learning shows biases and dissociations in learning different stimulus structures. The dependence of implicit learning on particular brain areas is discussed, some conclusions are drawn for modeling implicit learning, and the interaction of implicit and explicit learning is considered.
Article
Full-text available
Restrictions to attentional capacity are revealed by the interference that commonly results when two sensory inputs must be identified at the same time. To investigate this phenomenon within and between modalities, we presented streams of visual and/or auditory inputs, containing occasional targets to be identified and recalled. For two visual or two auditory streams, identification of one target produced a sustained reduction in the ability to identify a second, the period of interference lasting for several hundred milliseconds. Subjectively, when attention was assigned to one target it was temporarily unavailable for another. In contrast, there was no such time-locked interference between targets in different modalities. The results suggest a modality-specific restriction to concurrent attention and awareness; visual attention to one simple target does not restrict concurrent auditory attention to another.
Article
Full-text available
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Article
Full-text available
When monitoring a rapid serial visual presentation at 100 ms per item for 2 targets among distractors, viewers have difficulty reporting the 2nd target (T2) when it appears 200-500 ms after the onset of the 1st letter target (T1): an attentional blink (AB; M. M. Chun & M. C. Potter, 1995b; J. E. Raymond, K. L. Shapiro, & K. M. Arnell, 1992). Does the same deficit occur with auditory search? The authors compared search for auditory, visual, and cross-modal targets in 2 tasks: (a) identifying 2 target letters among digits (Experiments 1-3 and 5) or digits among letters (Experiment 6), and (b) identifying 1 digit among letters and deciding whether an X occurred among the subsequent letters (Experiment 4). In the experiments using the 1st task, the standard AB was found only when both targets were visual. In the 2nd task, with a change in selective set from T1 to T2, a task-switching deficit was obtained regardless of target modality.
Book
Arien Mack and Irvin Rock make the radical claim that there is no conscious perception of the visual world without attention to it. Many people believe that merely by opening their eyes, they see everything in their field of view; in fact, a line of psychological research has been taken as evidence of the existence of so-called preattentional perception. In Inattentional Blindness, Mack and Rock argue that there is no such thing. The authors present a narrative chronicle of their research. Thus, the reader follows the trail that led to the final conclusions, learning why initial hypotheses and explanations were discarded or revised, and how new questions arose along the way. The phenomenon of inattentional blindness has theoretical importance for cognitive psychologists studying perception, attention, and consciousness, as well as for philosophers and neuroscientists interested in the problem of consciousness.
Article
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.
Article
Previous research has shown that implicit learning of a serial pattern in a reaction time (RT) task is eliminated or reduced when the task is performed concurrently with a tone-counting task. These results led to the inference that implicit learning requires attentional capacity. Two experiments tested the alternative hypothesis that the tone-counting task disrupts learning by preventing consistent organization of the sequence. The tone-counting condition was compared with a condition with additional attentional demands, but no disruption of organization, and with a condition with no additional attentional demands, but disruption of organization. The results were consistent with the organizational hypothesis. It is argued that learning depends on practicing consistently organized runs of trials, that shifts of attention may determine how the runs are organized, and that the relation between attention and learning depends more on organization and intention than on capacity. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Aimed at providing a comprehensive overview of implicit learning, the contributors to this volume explore the field's controversies, the functional characteristics of implicit learning, brain mechanisms and the neurological foundations for implicit learning, connectionist models of implicit learning, and applications of implicit learning to acquiring new mental skills. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Although very little visual information is explicitly retained across views, some continuity of processing is afforded by implicit visual memory traces of previous views. These memory traces interact with attentional mechanisms to guide eye movements, cognition, and action. Two different memory mechanisms are described here. First, the deployments of focal attention and eye movements are facilitated towards recently attended features and locations (priming of popout). Second, attention is guided by implicit memory traces of specific visual contexts experienced in the past (contextual cueing). Compared to the visual memory tapped by change blindness tasks, the implicit memory mechanisms of priming of popout and contextual cueing do not require conscious intervention and may exhibit greater memory capacity, longer durability, and higher discriminability. Thus, these implicit traces of past views guide attention and eye movements to allow for effective access (indexing) to a scene's details, hence providing context and continuity to ongoing interactions with the perceptual world. Detailed visual representations of the world are clearly ephemeral (Averbach & Coriell, 1961; Neisser, 1967; Sperling, 1960) and fail to persist from one view to another, contrary to one's intuitions that our rich visual experience reflects a seamless integration of a scene's details over eye movements and over time. This failure is highlighted in work on change blindness, which is a striking inability to detect changes to objects and scenes from one view to the next. The pervasiveness of this impairment has been illustrated in an admirable diversity of paradigms.
Article
MacProbe is a program that turns an Apple Macintosh with a 68020 processor or greater and a floating point unit into an experimenter’s workstation for implementing a large class of experimental paradigms characteristic of the interdisciplinary fields constituting the cognitive sciences. The core of MacProbe is a structured, interpreted programming language with over 200 high-level commands that provide support for all facets of experimentation from design and presentation of visual and auditory probes, to real-time experiment control, to the analyses and management of experimental data and the presentation of results. The programming language is supplemented by a graphical user interface for such tasks as text and waveform editing and determining the placement of visual probes.
Article
The replicated finding that implicit learning in the serial reaction task (SRT) is impaired when both the learning and the assessment of learning occur in the presence of a secondary tone-counting task has been interpreted as implying that the mechanism(s) underlying implicit sequence learning require(s) attention in order to operate. However, in almost all studies, learning and the assessment of learning have been confounded. It is therefore unclear whether tone counting affects learning per se, the behavioral expression of what has been learned, or both. The goal of the present research was to disentangle the effects of tone counting on learning and the expression of what has been learned. In Exps. 1a and 1b, participants performed the Nissen and Bullemer SRT under different single-task (ST) and dual-task (DT) practice schedules. In Exps. 2a and 2b, participants received different amounts of ST and DT practice. In all experiments, degree of implicit learning was then assessed under both ST and DT conditions. Results are consistent with the argument that primarily the expression of what has been learned and, to some extent, implicit learning itself, are affected by tone counting. These findings are easily understood in terms of specific interference mechanisms but are problematic for models that contain the assumption of an attentional learning mechanism.
Article
Visual context information constrains what to expect and where to look, facilitating search for and recognition of objects embedded in complex displays. This article reviews a new paradigm called contextual cueing, which presents well-defined, novel visual contexts and aims to understand how contextual information is learned and how it guides the deployment of visual attention. In addition, the contextual cueing task is well suited to the study of the neural substrate of contextual learning. For example, amnesic patients with hippocampal damage are impaired in their learning of novel contextual information, even though learning in the contextual cueing task does not appear to rely on conscious retrieval of contextual memory traces. We argue that contextual information is important because it embodies invariant properties of the visual environment such as stable spatial layout information as well as object covariation information. Sensitivity to these statistical regularities allows us to interact more effectively with the visual world.
Article
Although at any instant we experience a rich, detailed visual world, we do not use such visual details to form a stable representation across views. Over the past five years, researchers have focused increasingly on 'change blindness' (the inability to detect changes to an object or scene) as a means to examine the nature of our representations. Experiments using a diverse range of methods and displays have produced strikingly similar results: unless a change to a visual scene produces a localizable change or transient at a specific position on the retina, generally, people will not detect it. We review theory and research motivating work on change blindness and discuss recent evidence that people are blind to changes occurring in photographs, in motion pictures and even in real-world interactions. These findings suggest that relatively little visual information is preserved from one view to the next, and question a fundamental assumption that has underlain perception research for centuries: namely, that we need to store a detailed visual representation in the mind/brain from one view to the next.
Article
The book is an extended essay on implicit learning, a topic that emerged in recent years as an important but previously overlooked process. Implicit learning is learning that takes place independent of both the process and products of learning. It occurs without the intention to learn and largely without awareness of the nature of what has been learned. The process is "bottom-up"; information is acquired automatically when individuals focus attention on complex displays; and the knowledge base is "tacit" and largely opaque to introspection. Examples abound in everyday life, notably natural language learning and the acquisition of the mores of social behavior. A core assumption is that this implicit acquisitional mechanism is a fundamental "root" process that is based on evolutionarily old neurological structures and lies at the heart of the adaptive behavioral repertoire of every complex organism. Firstly, the book outlines the essential features of implicit learning that have emerged from controlled studies carried out over the past several decades. It also presents alternative perspectives that have been proposed and accommodates these views to the proposed theoretical model. It then structures the literature within the framework of Darwinian evolutionary biology that lies at the core of the theory. Finally, it shows how the evolutionary stance makes a series of predictions about how functions based on implicit mechanisms should differ from those mediated by consciousness.
Article
Search for conjunctions of highly discriminable features can be rapid or even parallel. This article explores three possible accounts based on (a) perceptual segregation, (b) conjunction detectors, and (c) inhibition controlled separately by two or more distractor features. Search rates for conjunctions of color, size, orientation, and direction of motion correlated closely with an independent measure of perceptual segregation. However, they appeared unrelated to the physiology of single-unit responses. Each dimension contributed additively to conjunction search rates, suggesting that each was checked independently of the others. Unknown targets appear to be found only by serial search for each in turn. Searching through 4 sets of distractors was slower than searching through 2. The results suggest a modification of feature integration theory, in which attention is controlled not only by a unitary "window" but also by a form of feature-based inhibition.
Article
When a briefly presented real-world scene was jumbled, the accuracy of identifying a single, cued object was less than that when the scene was coherent. Jumbling remained an effective variable even when the subject knew where to look and what to look for. Thus an object's meaningful context may affect the course of perceptual recognition and not just peripheral scanning or memory.
Article
Five classes of relations between an object and its setting can characterize the organization of objects into real-world scenes. The relations are (1) Interposition (objects interrupt their background), (2) Support (objects tend to rest on surfaces), (3) Probability (objects tend to be found in some scenes but not others), (4) Position (given an object is probable in a scene, it often is found in some positions and not others), and (5) familiar Size (objects have a limited set of size relations with other objects). In two experiments subjects viewed brief (150 msec) presentations of slides of scenes in which an object in a cued location in the scene was either in a normal relation to its background or violated from one to three of the relations. Such objects appear to (1) have the background pass through them, (2) float in air, (3) be unlikely in that particular scene, (4) be in an inappropriate position, and (5) be too large or too small relative to the other objects in the scene. In Experiment I, subjects attempted to determine whether the cued object corresponded to a target object which had been specified in advance by name. With the exception of the Interposition violation, violation costs were incurred in that the detection of objects undergoing violations was less accurate and slower than when those same objects were in normal relations to their setting. However, the detection of objects in normal relations to their setting (innocent bystanders) was unaffected by the presence of another object undergoing a violation in that same setting. This indicates that the violation costs were incurred not because of an unsuccessful elicitation of a frame or schema for the scene but because properly formed frames interfered with (or did not facilitate) the perceptibility of objects undergoing violations. As the number of violations increased, target detectability generally decreased. 
Thus, the relations were accessed from the results of a single fixation and were available sufficiently early during the time course of scene perception to affect the perception of the objects in the scene. Contrary to expectations from a bottom-up account of scene perception, violations of the pervasive physical relations of Support and Interposition were not more disruptive on object detection than the semantic violations of Probability, Position and Size. These are termed semantic because they require access to the referential meaning of the object. In Experiment II, subjects attempted to detect the presence of the violations themselves. Violations of the semantic relations were detected more accurately than violations of Interposition and at least as accurately as violations of Support. As the number of violations increased, the detectability of the incongruities between an object and its setting increased. These results provide converging evidence that semantic relations can be accessed from the results of a single fixation. In both experiments information about Position was accessed at least as quickly as information on Probability. Thus in Experiment I, the interference that resulted from placing a fire hydrant in a kitchen was not greater than the interference from placing it on top of a mailbox in a street scene. Similarly, violations of Probability in Experiment II were not more detectable than violations of Position. Thus, the semantic relations which were accessed included information about the detailed interactions among the objects—information which is more specific than what can be inferred from the general setting. Access to the semantic relations among the entities in a scene is not deferred until the completion of spatial and depth processing and object identification. Instead, an object's semantic relations are accessed simultaneously with its physical relations as well as with its own identification.
Article
Implicit serial learning occurs when indirect measures such as transfer reveal learning of a repeating sequence even when subjects are not informed of the repeating sequence, are not asked to learn it, and do not become aware of it. This phenomenon is reminiscent of an experiment by Hebb (1961), who studied the repetition of sequences in a serial recall task. Two experiments investigated the relation between implicit serial learning and ideas about learning forwarded by Hebb and others who used his method. The experiments showed that implicit serial learning occurs even when the repeating sequence is intermixed with randomly generated sequences instead of being repeated continuously, that the organization of the sequence into regularly or irregularly grouped subsequences determines the extent of learning, and that the repetition effect observed does not depend on subjects' ability to recognize the repetition.