Article

Configural learning in contextual cuing of visual search


Abstract

Two experiments explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training these repeating contexts in consistent configurations led to stronger contextual cuing than when contexts were trained in inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data.
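The elemental-versus-configural contrast in the abstract can be illustrated with a minimal Rescorla-Wagner-style learner. This is a hedged sketch, not the authors' simulation code: the cue names, learning rate, and trial counts are invented for illustration. A purely elemental learner sums the weights of individual distractor cues, so it generalizes fully to recombined configurations; adding a conjunction ("unique") cue for each trained configuration lets the model respond more strongly to consistently trained configurations than to recombined ones.

```python
def train(trials, alpha=0.1, lam=1.0):
    """Rescorla-Wagner learning over (active_cues, outcome) pairs.

    All active cues share a common prediction error, so they compete
    for a fixed amount of associative strength.
    """
    w = {}
    for cues, outcome in trials:
        v = sum(w.get(c, 0.0) for c in cues)   # summed prediction
        err = (lam if outcome else 0.0) - v    # shared prediction error
        for c in cues:
            w[c] = w.get(c, 0.0) + alpha * err
    return w

def predict(w, cues):
    """Summed associative strength of the active cues."""
    return sum(w.get(c, 0.0) for c in cues)

# Consistent training: distractors A and B always co-occur with the target.
# The elemental learner sees only A and B; the configural learner also
# sees a conjunction cue "AB" unique to that configuration.
elemental = train([(("A", "B"), True)] * 50)
configural = train([(("A", "B", "AB"), True)] * 50)

# A recombined test compound (A with a novel distractor C) loses the
# configural "AB" weight, so the configural model's prediction drops,
# whereas an elemental model would carry over the full weight of A.
trained_v = predict(configural, ("A", "B", "AB"))
recombined_v = predict(configural, ("A", "C", "AC"))
```

On this toy setup the configural learner predicts strongly for the trained configuration but only weakly for the recombined one, matching the ordinal pattern the abstract describes.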


... For example, Beesley, Vadillo, Pearson, and Shanks (2015) found that preexposure of distractor configurations in an incidental learning task enhanced contextual cuing during a subsequent learning phase. This preexposure benefit could not be accounted for by a model that only learned target-distractor associations (i.e., Brady & Chun, 2007) but could successfully be explained by a model that also learned distractor configurations (Beesley, Vadillo, Pearson, & Shanks, 2016). In addition to learning distractor-distractor relationships, there is evidence that nonconfigural "background" cues, such as colors, textures, or natural scenes, can support contextual cuing (e.g., Brockmole, Castelhano, & Henderson, 2006; Kunar, Flusberg, & Wolfe, 2006). ...
... The reduction in the number of fixations for repeated displays has been replicated many times (e.g., Beesley, Hanafi, Vadillo, Shanks, & Livesey, 2017; Manginelli & Pollmann, 2009; Tseng & Li, 2004; Zhao et al., 2012), and is broadly consistent with data that have found cuing effects on search slopes. The idea that people are able to search through repeated displays faster has been incorporated into computational models of the cuing effect, which assume that search time is a function of the predicted number of locations searched by the model (Beesley et al., 2015, 2016; Brady & Chun, 2007). ...
... We refer to this novel alternative as the perceptual learning account. According to this idea, people learn about the composition of the repeated displays (e.g., Beesley et al., 2015, 2016; Brady & Chun, 2007) and use this information to facilitate decision-making about the orientation of the target stimulus. We conjecture that, for repeated displays, the information people use to judge the orientation of the target might be influenced by (learnable) factors other than the perceptual properties of the target stimulus itself. ...
Article
Full-text available
Contextual cuing refers to a response time (RT) benefit that occurs when observers search through displays that have been repeated over the course of an experiment. Although it is generally agreed that contextual cuing arises via an associative learning mechanism, there is uncertainty about the type(s) of process(es) that allow learning to influence RT. We contrast two leading accounts of the contextual cuing effect that differ in terms of the general process that is credited with producing the effect. The first, the expedited search account, attributes the cuing effect to an increase in the speed with which the target is acquired. The second, the decision threshold account, attributes the cuing effect to a reduction in the response threshold used by observers when making a subsequent decision about the target (e.g., judging its orientation). We use the diffusion model to contrast the quantitative predictions of these two accounts at the level of individual observers. Our use of the diffusion model allows us to also explore a novel decision-level locus of the cuing effect based on perceptual learning. This novel account attributes the RT benefit to a perceptual learning process that increases the quality of information used to drive the decision process. Our results reveal individual differences in the process(es) involved in contextual cuing, but also identify several striking regularities across observers. We find strong support for both the decision threshold account and the novel perceptual learning account. We find relatively weak support for the expedited search account.
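The accounts contrasted in this abstract map onto different diffusion-model parameters: a faster decision process (higher drift rate, as under perceptual learning) versus a lowered response boundary (the decision threshold account). The simulation below is an illustrative sketch only, not the authors' model-fitting code; all parameter values are arbitrary. It shows that either change shortens mean RT, which is why the diffusion model is needed to tell them apart at the level of full RT distributions rather than means.

```python
import random

def ddm_trial(drift, threshold, nondecision=0.3, dt=0.002, noise=1.0):
    """Simulate one diffusion-model trial with symmetric boundaries.

    Evidence accumulates from 0 toward +threshold or -threshold;
    returns (RT in seconds, hit_upper_boundary).
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + random.gauss(0.0, noise * dt ** 0.5)
        t += dt
    return nondecision + t, x > 0

def mean_rt(drift, threshold, n=500, seed=0):
    """Mean RT over n simulated trials (seeded for reproducibility)."""
    random.seed(seed)
    return sum(ddm_trial(drift, threshold)[0] for _ in range(n)) / n

baseline = mean_rt(drift=1.0, threshold=1.0)   # novel displays
lower_a  = mean_rt(drift=1.0, threshold=0.7)   # decision threshold account
higher_v = mean_rt(drift=1.5, threshold=1.0)   # perceptual learning account
```

Both manipulations reduce mean RT relative to baseline, so mean RT alone cannot discriminate the accounts; in the full diffusion framework they make different predictions about RT variability and error rates.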
... The standard account attributes these savings to search being cued, or guided, more directly to the target location as a result of having acquired a (long-term) associative memory representation, or template, of a specific distractor-target arrangement. This template is activated upon reencountering such an arrangement on a given trial, which then top-down increases the attentional priority of the target location (e.g., ref. 1; for computationally explicit models, see, e.g., refs. 10, 11), thus enhancing the target's potential to summon covert or overt attention. According to this account, the number of attention shifts required to detect a target in a repeated search array will decrease with increasing (re-)encounters of this array, due to the build-up of a search-guiding contextual memory template for this array (e.g., ref. 1). ...
Article
Full-text available
Visual search improves when a target is encountered repeatedly at a fixed location within a stable distractor arrangement (spatial context), compared to non-repeated contexts. The standard account attributes this contextual-cueing effect to the acquisition of display-specific long-term memories, which, when activated by the current display, cue attention to the target location. Here we present an alternative, procedural-optimization account, according to which contextual facilitation arises from the acquisition of generic oculomotor scanning strategies, optimized with respect to the entire set of displays, with frequently searched displays accruing greater weight in the optimization process. To decide between these alternatives, we examined measures of the similarity, across time-on-task, of the spatio-temporal sequences of fixations through repeated and non-repeated displays. We found scanpath similarity to increase generally with learning, but more for repeated versus non-repeated displays. This pattern contradicts display-specific guidance, but supports one-for-all scanpath optimization.
... With a maximum of 600-800 trials typically available in a single-session study for practical reasons, it may only be possible to get tens, not hundreds, of trials in each of the many conditions. Another possibility is that the experimental comparison of interest may involve conditions with inherently low numbers of trials; for example, when studying RTs at restricted levels of practice (e.g., Beesley et al., 2016; Mazor & Fleming, 2022) or studying RTs to rare stimuli, responses, or conditions (e.g., Mowrer et al., 1940; Sali et al., 2022). In some designs, the experimental manipulation may require a training or habituation phase, after which the crucial comparison is only possible in a relatively short testing or transfer phase (e.g., Burke & Roodenrys, 2000; Lubczyk et al., 2022), or the testing session may have to be relatively short because the participant population of interest does not have enough patience or stamina for testing hundreds of trials. ...
Article
A methodological problem in most reaction time (RT) tasks is that some measured RTs may be outliers, being either too fast or too slow to reflect the task-related processing of interest. Numerous ad hoc procedures have been used to identify these outliers for exclusion from further analyses, but the accuracies of these methods have not been systematically compared. The present study compared the performance of 58 different outlier exclusion procedures (OEPs) using four huge datasets of real RTs. The results suggest that these OEPs are likely to do more harm than good, because they incorrectly identify outliers, increase noise, introduce bias, and generally reduce statistical power. The results suggest that RT researchers should not automatically apply any of these OEPs to clean their RT data prior to the main analyses.
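A typical ad hoc OEP of the kind this study evaluates is a k-SD cutoff. The sketch below is illustrative only (it is not one of the 58 procedures' exact implementations, and the simulated data are invented); it shows one way such rules introduce bias: because RT distributions are right-skewed, a symmetric SD cutoff trims mostly the slow tail and pulls the sample mean downward.

```python
import random
import statistics

def sd_exclusion(rts, k=2.5):
    """A common ad hoc OEP: drop RTs more than k SDs from the sample mean."""
    m, s = statistics.mean(rts), statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= k * s]

# Simulated right-skewed, RT-like data (normal core plus exponential tail,
# roughly ex-Gaussian). The skew makes the cutoff asymmetric in practice.
random.seed(1)
rts = [random.gauss(500, 50) + random.expovariate(1 / 100) for _ in range(1000)]
cleaned = sd_exclusion(rts)
```

Because nearly all excluded observations come from the slow tail, the "cleaned" mean is systematically lower than the raw mean even though every simulated RT here was generated by the same process, i.e., none were true outliers.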
... The standard account attributes these savings to search being 'cued', or guided, more directly to the target location as a result of having acquired a (long-term) associative memory representation, or 'template', of a specific distractor-target arrangement. This template is activated upon re-encountering such an arrangement on a given trial, which then top-down increases the attentional priority of the target location (e.g., Chun & Jiang, 1998; Brady & Chun, 2007; Beesley et al., 2016), thus enhancing the target's potential to summon covert or overt attention. According to this account, the number of attention shifts required to detect a target in a repeated search array will decrease with increasing (re-)encounters of this array, due to the build-up of a search-guiding contextual memory template for this array (e.g., Chun & Jiang, 1998). ...
Preprint
Full-text available
Detecting a target in visual search becomes more efficient over time when it is encountered repeatedly at a fixed location within a stable distractor arrangement (spatial ‘context’), compared to non-repeated contexts. The standard account attributes this contextual-cueing effect to the acquisition of display-specific long-term memories, which, when activated by the current display, ‘cue’ attention to the target location. Our alternative, ‘procedural-optimization’ account posits that contextual facilitation arises from the acquisition of generic oculomotor scanning strategies that are optimized with respect to the entire set of displays, with frequently searched displays accruing greater weight in the optimization. To decide between these alternatives, we examined novel measures of the similarity, across time-on-task, of the spatio-temporal sequences of fixations through repeated and non-repeated displays. We found scanpath similarity to increase generally with learning, but more for repeated versus non-repeated displays. This pattern contradicts display-specific guidance, but supports ‘one-for-all’ scanpath optimization.
... Visual search scenes are more complex, representing multiple target-distractor relations. Even though recent work shows there is also a component of scene memory to contextual cueing [32,33], this might not be strong enough to enable one 'scene' to function as a predictor of the next, analogous to how an object predicts the next one in typical temporal statistical learning tasks. A previous study exposed observers to sequenced information in addition to, but independent of, spatial predictive context. ...
Article
Full-text available
The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to see whether observers can additionally exploit temporal predictive context based on sequence order, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was also true when we looked specifically at participants who revealed a sensitivity to spatial predictive context. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
... Reaction time (RT) was our primary, and accuracy our secondary variable of interest. Statistical assessment of the RT data was done for the second half of the experiment, i.e. blocks 18-36 (selected a priori). Only RT data from correct trials was used. ...
Preprint
Full-text available
The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to see whether observers can additionally exploit temporal predictive context, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was true both for participants who were sensitive to spatial predictive context, and for those who were not. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
... Pre-exposure to repeated displays with unpredictive target locations facilitates contextual cueing at a later stage, when the pre-exposed distractor-distractor configurations predict the target location (Beesley, Vadillo, Pearson, & Shanks, 2015). Moreover, inconsistent displays composed of recombined sub-patterns of distractors that were associated with target positions lead to a weakened contextual cueing effect, suggesting that configural representations contribute to contextual cueing (Beesley, Vadillo, Pearson, & Shanks, 2016). For both of these studies, computational simulations with a configural model of associative learning accounted better for the results than a model that only considered elemental representations. ...
Article
Full-text available
Learned spatial regularities can efficiently guide visual search. This effect has been extensively studied using the contextual cueing paradigm. We investigated age-related changes in the initial learning of contextual configurations and the relearning after target relocation. Younger and older participants completed a contextual cueing experiment on two days. On day one, they were tested with a standard contextual cueing task. On day two, for the repeated displays the location of the targets was moved while keeping the distractor configurations unchanged. Older participants developed a reliable contextual cueing effect but the emergence of this effect required more repetitions compared to younger individuals. Contextual cueing was apparent quickly after target relocation in younger and older participants. Especially in older adults, the fast updating might be due to learned distractor-distractor associations rather than the updating of target-distractor configurations.
... It would seem therefore that the benefit of segregating the predictive and nonpredictive information occurs during the initial encoding of the configuration, rather than the recall of that information from memory. While further experimental evidence will be needed to support these conclusions, these findings may well have important implications for the manner in which surface feature information is realized in formal models of contextual cuing (e.g., Brady & Chun, 2007; Beesley et al., 2015, 2016). ...
Article
Full-text available
Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular colour, overt attention (as measured by the location of fixations) was biased strongly towards distractors presented in that same colour. However, when participants searched for targets that could be presented in one of two possible colours, overt attention was not biased between the different distractors, regardless of whether these distractors predicted the location of the target (repeating) or did not (randomly arranged). These data suggest that selective attention in visual search is guided only by the demands of the target detection task (the attentional set) and not by the predictive validity of the distractor elements.
Article
Full-text available
Contextual cuing is the enhancement of visual search when the configuration of distractors has been experienced previously. It has been suggested that contextual cuing relies on associative learning between the distractor locations and the target position. Four experiments examined the effect of pre-exposing configurations of consistent distractors on subsequent contextual cuing. The findings demonstrate a facilitation of subsequent cuing for pre-exposed configurations compared to novel configurations that have not been pre-exposed. This facilitation suggests that learning of repeated visual search patterns involves acquisition of not just distractor-target associations but also associations between distractors within the search context, an effect that is not captured by the Brady and Chun (2007) connectionist model of contextual cuing. We propose a new connectionist model of contextual cuing that learns associations between repeated distractor stimuli, enabling it to predict an effect of pre-exposure on contextual cuing.
Article
Full-text available
Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance for communicating the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
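For the within-subjects designs this primer covers, the choice of standardizer matters. The sketch below uses hypothetical RT data and the standard textbook definitions of d_z and d_av (it is not code from the article or its spreadsheet). d_z scales the mean difference by the SD of the difference scores, thereby incorporating the correlation between paired measures, while d_av ignores that correlation; with strongly correlated measures the two can diverge substantially.

```python
import statistics

def cohens_dz(x, y):
    """d_z: mean paired difference / SD of the differences.

    This is the quantity entering a priori power analysis for a
    paired-samples t-test.
    """
    diffs = [a - b for a, b in zip(x, y)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

def cohens_dav(x, y):
    """d_av: mean difference / average of the two condition SDs.

    Ignores the correlation between the paired measures.
    """
    md = statistics.mean(x) - statistics.mean(y)
    return md / ((statistics.stdev(x) + statistics.stdev(y)) / 2)

# Hypothetical paired RTs (ms), one pair per participant; the two
# conditions are strongly correlated across participants.
novel    = [612, 580, 655, 601, 634, 598, 621, 645]
repeated = [570, 555, 610, 580, 600, 566, 590, 612]
```

With these correlated data, `cohens_dz(novel, repeated)` is considerably larger than `cohens_dav(novel, repeated)`, illustrating why the primer treats effect sizes that incorporate the correlation as the better summary for intra-individual effects.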
Article
Full-text available
Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate the target. A psychophysiological interactions analysis for repeated configurations revealed that a strong functional connectivity between this area in the right hippocampus and the left superior parietal lobule early in learning was significantly reduced toward the end of the task. Practice related changes (which we call "procedural learning") in activation in temporo-occipital and parietal brain regions depended on whether or not spatial context was repeated. We conclude that context repetition facilitates visual search through chunk formation that reduces the number of effective distractors that have to be processed during the search. Context repetition influences procedural learning in a way that allows for continuous and effective chunk updating.
Article
Full-text available
Invariant spatial relationships of objects may provide a rich source of contextual information. Visual context can assist localization of individual objects via an implicit learning mechanism, as revealed in the contextual cueing paradigm (Chun & Jiang, 1998). What defines a visual context? How robust is contextual learning? And is it perceptually constrained? Here we investigate whether both local context that surrounds a target, and long-range context that does not spatially coincide with a target, can influence target localization. In the contextual cueing task, participants implicitly learned a context by repeated exposure to items arranged in invariant patterns. Experiments 1 and 2 suggest that only local context facilitates target localization. However, Experiment 3 showed that long-range context can prime target location when target and context are not separated by random information. Experiment 4 showed that grouping by colour does not affect contextual cueing, suggesting that spatial features play a more important role than surface features in spatial contextual cueing. In separate analyses, visual hemifield differences were found for learning and performance. In sum, the results indicate that implicit learning of spatial context is robust across noise and biased towards spatially grouped information. Context can have a powerful influence on the processing of visual information. Objects can be recognized without context, but when dealing with less familiar objects, complex scenes, or degraded information, the importance of context increases (Ullman, 1996).
Article
Full-text available
Three experiments examined "atomistic" and "configural" processes in stimulus compounding using the rabbit's conditioned nictitating membrane response. Two conditioned stimuli (CSs) were trained separately and then tested together in a compound. Animals trained with CSs from different modalities—namely, tone and light—showed summation in both acquisition and extinction. That is, the probability of a response to the compound could be predicted by the statistical sum of responding to the CSs. In contrast, animals trained with CSs from the auditory modality, tone and noise, showed a level of responding to the tone + noise compound that was the same as that of the CSs, well under the level predicted by the statistical sum of responding to the CSs. In conclusion, atomistic processes appear to predominate in cross-modal compounding. Configural processes may occur during compounding within the auditory modality, but atomistic alternatives—namely, common elements and selective attention hypotheses—may be able to explain the results.
Article
Full-text available
A fundamental principle of learning is that predictive cues or signals compete with each other to gain control over behavior. Associative and propositional reasoning theories of learning provide radically different accounts of cue competition. Propositional accounts predict that under conditions that do not afford or warrant the use of higher order reasoning processes, cue competition should not be observed. We tested this prediction in 2 contextual cuing experiments, using a visual search task in which patterns of distractor elements predict the location of a target object. Blocking designs were used in which 2 sets of predictive distractors were trained in compound, with 1 set trained independently. There was no evidence of cue competition in either experiment. In fact, in Experiment 2, we found evidence for augmentation of learning. The findings are contrasted with the predictions of an error-driven associative model of contextual cuing (Brady & Chun, 2007).
Article
Full-text available
The metafor package provides functions for conducting meta-analyses in R. The package includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models. Meta-regression analyses with continuous and categorical moderators can be conducted in this way. Functions for the Mantel-Haenszel and Peto's one-step method for meta-analyses of 2 x 2 table data are also available. Finally, the package provides various plot functions (for example, for forest, funnel, and radial plots) and functions for assessing the model fit, for obtaining case diagnostics, and for tests of publication bias.
Article
Full-text available
Visual search is often facilitated when the search display occasionally repeats, revealing a contextual-cueing effect. According to the associative-learning account, contextual cueing arises from associating the display configuration with the target location. However, recent findings emphasizing the importance of local context near the target have given rise to the possibility that low-level repetition priming may account for the contextual-cueing effect. This study distinguishes associative learning from local repetition priming by testing whether search is directed toward a target's expected location, even when the target is relocated. After participants searched for a T among Ls in displays that repeated 24 times, they completed a transfer session where the target was relocated locally to a previously blank location (Experiment 1) or to an adjacent distractor location (Experiment 2). Results revealed that contextual cueing decreased as the target appeared farther away from its expected location, ultimately resulting in a contextual cost when the target swapped locations with a local distractor. We conclude that target predictability is a key factor in contextual cueing.
Article
Full-text available
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Article
Full-text available
Three experiments were carried out. Each required subjects to make judgements about the causal status of cues following a two-stage blocking procedure. In Stage 1 a competitor cue was consistently paired with an outcome, and in Stage 2 the competitor continued to be paired with the outcome but was accompanied by a target cue. It was predicted that causal judgements for the target would be reduced by the presence of the competitor. In Experiments 1 and 2 the blocking procedure was implemented as a computer simulation of a card game during which subjects had to learn which cards produced the best payouts. The cues that subjects used to make their judgement were colours and symbols that appeared on the backs of the cards. When the target and competitor cues appeared on the same card blocking effects did not emerge, but when they appeared as part of different cards blocking effects were found. Thus, spatial separation of target and competitor cues appeared to facilitate blocking. Experiment 3 replicated the blocking result using spatially separated target and competitor cues.
Article
Full-text available
Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search.
Article
Full-text available
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.
Article
Full-text available
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
Chapter
This chapter begins with a review of behavioral evidence of associative learning in vision. It considers which types of associations can be learned, and then explores the properties and constraints of the mechanisms involved in such learning. It also discusses how visual processing can be facilitated by knowledge of associative relationships between objects. The second part of the chapter analyzes the neural mechanisms that support visual associative learning.
Article
Little is known about the conditions that encourage animals to learn to use configural associations to guide their behavior or the consequences of such learning for transfer. This study provided some information about these issues by examining how rats solve the transverse-patterning problem, which requires a configural solution (Spence, 1952). Animals had to concurrently solve 3 simultaneous visual discriminations, represented abstractly as A+ versus B-, B+ versus C-, and C+ versus A-. Experiment 1 indicated that rats use a configural solution even when the problems have an elemental solution, provided that the significance of 1 element (e.g., B) shared by 2 problems is ambiguous (e.g., A+/B-; and B+/C-). Experiments 2 and 3 suggested that, when stimulated to use a configural solution by solving the A+/B- and B+/C- problems, rats transfer the configural solution to problems that have no ambiguous elements.
Article
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2 s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important part played by associative learning in producing the effect.
Article
Conducted 4 experiments with male Sprague-Dawley rats (N = 80) testing the proposition that a compound stimulus, AB, may be conceptualized as composed of the individual A and B elements as well as a separate stimulus unique to their combination. Together with an assumption about limitations on the total associative strength of the compound, this conceptualization can account for the learning of various configural conditioning paradigms. Each experiment examined whether the hypothesized unique stimulus has properties like those of a separable element. Results indicate that, like the separate elements, the unique stimulus can acquire associative strength which is either excitatory or inhibitory, which summates with other associative strengths, which influences the effectiveness of reinforcement and nonreinforcement, and which is attenuated when the unique stimulus becomes irrelevant to reinforcement. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
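The unique-cue scheme described here lends itself to a short simulation. The following is a minimal sketch (the delta-rule learning parameters, function names, and the negative-patterning training set are illustrative assumptions, not the authors' procedure): a compound AB contributes its elements A and B plus one cue unique to the combination, and all active cues share a simple error-correcting update.

```python
# Sketch of the "unique cue" idea: a compound AB is treated as the
# elements A and B plus a configural element unique to their combination.
# Strengths follow a simple delta rule (hypothetical parameter values).

def active_cues(stimulus):
    """Cues present on a trial: each element plus a unique compound cue."""
    cues = list(stimulus)
    if len(stimulus) > 1:
        cues.append(stimulus)  # e.g. 'AB' acts as its own configural cue
    return cues

def train(trials, alpha=0.2, n_epochs=100):
    """trials: list of (stimulus, outcome) pairs, e.g. [('A', 1), ('AB', 0)]."""
    V = {}
    for _ in range(n_epochs):
        for stimulus, outcome in trials:
            cues = active_cues(stimulus)
            error = outcome - sum(V.get(c, 0.0) for c in cues)
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * error
    return V

# Negative patterning (A+, B+, AB-): only the unique compound cue can
# carry the inhibition needed to withhold responding to the compound.
V = train([('A', 1), ('B', 1), ('AB', 0)])
```

At asymptote the elements end up excitatory and the unique cue inhibitory, matching the abstract's claim that the unique stimulus behaves like a separable element that can acquire strength of either sign.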
Article
A paperback edition of the translation by Anrep, first published in 1927 by the Oxford University Press, containing a series of 23 lectures on the research of Pavlov's laboratory in the 1st quarter of the 20th century. From Psyc Abstracts 36:05:5CG30P. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.
Article
A selective review of experiments that can be said to demonstrate the effects of generalization decrement in Pavlovian conditioning is presented, and it is argued that an adequate theoretical explanation for them is currently not available. This article then develops a theoretical account for the processes of generalization and generalization decrement in Pavlovian conditioning. It assumes that animals represent their environment by a stimulus array in a buffer and that this array in its entirety constitutes the conditioned stimulus. Generalization is then held to occur whenever at least some of the stimuli represented in the array on a test trial are the same as at least some of those represented in the array during training. Specifically, the magnitude of generalization is determined by the proportion of the array occupied by these common stimuli during training compared to the proportion of the array they occupy during testing. By adding to this principle rules concerning excitatory and inhibitory learning, it is proposed, the model can explain all the results that were difficult for its predecessors to account for.
Article
A configural theory of associative learning is described that is based on the assumption that conditioning results in associations between the unconditioned stimulus and a representation of the entire pattern of stimulation that was present prior to its delivery. Configural theory was formulated originally to account for generalization and discrimination in Pavlovian conditioning. The first part of the article demonstrates how this theory can be used to explain results from studies of overshadowing, blocking, summation, and discrimination learning. The second part of the article shows how the theory can be developed to explain a broader range of phenomena, including mediated conditioning, reinforcer devaluation effects, the differential outcomes effect, acquired equivalence, sensory preconditioning, and structural discriminations.
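The theory's two core assumptions, whole-pattern associative units and similarity-weighted generalization between patterns, can be sketched in a few lines. This is a minimal illustration under stated assumptions (the overlap-based similarity rule, learning-rate value, and function names are simplifications for exposition, not a reproduction of the published model):

```python
# Sketch of configural learning in the style Pearce describes: each whole
# stimulus pattern gets its own associative strength, and a test pattern
# generalizes to stored patterns in proportion to their overlap.

def similarity(x, y):
    """Product of the proportion of each pattern that the two share."""
    common = len(set(x) & set(y))
    return (common / len(set(x))) * (common / len(set(y)))

def respond(pattern, V):
    """Generalized response: similarity-weighted sum of stored strengths."""
    return sum(similarity(pattern, stored) * v for stored, v in V.items())

def train(trials, beta=0.2, n_epochs=200):
    V = {}
    for _ in range(n_epochs):
        for pattern, outcome in trials:
            error = outcome - respond(pattern, V)
            V[pattern] = V.get(pattern, 0.0) + beta * error
    return V

# Negative patterning (A+, B+, AB-): the compound pattern 'AB' is its own
# unit, so the discrimination is solved without any inhibitory elements.
V = train([('A', 1), ('B', 1), ('AB', 0)])
```

Because the compound is a distinct unit that only partially generalizes to its elements, the model responds strongly to A or B alone while withholding responding to AB, which is the kind of configural discrimination the elemental summation account struggles with.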
Article
With the use of spatial contextual cuing, we tested whether subjects learned to associate target locations with overall configurations of distractors or with individual locations of distractors. In Experiment 1, subjects were trained on 36 visual search displays that contained 36 sets of distractor locations and 18 target locations. Each target location was paired with two sets of distractor locations on separate trials. After training, the subjects showed perfect transfer to recombined displays, which were created by combining half of one trained distractor set with half of another trained distractor set. This result suggests that individual distractor locations were sufficient to cue the target location. In Experiment 2, the subjects showed good transfer from trained displays to rescaled, displaced, and perceptually regrouped displays, suggesting that the relative locations among items were also learned. Thus, both individual target-distractor associations and configural associations are learned in contextual cuing.
Article
An enduring theme for theories of associative learning is the problem of explaining how configural discriminations, ones in which the significance of combinations of cues is inconsistent with the significance of the individual cues themselves, are learned. One approach has been to assume that configurations are the basic representational form on which associative processes operate; another has tried, in contrast, to retain elementalism. We review evidence that human learning is representationally flexible in a way that challenges both configural and elemental theories. We describe research showing that task demands, prior experience, instructions, and stimulus properties all influence whether a particular problem is solved configurally or elementally. Lines of possible future theory development are discussed.
Article
Jiang and Wagner (2004) demonstrated that individual target-distractor associations were learned in contextual cuing. We examined whether individual associations can be learned in efficient visual searches that do not involve attentional deployment to individual search items. In Experiment 1, individual associations were not learned during the efficient search tasks. However, in Experiment 2, where additional exposure duration of the search display was provided by presenting placeholders marking future locations of the search items, individual associations were successfully learned in the efficient search tasks and transferred to inefficient search. Moreover, Experiment 3 demonstrated that a concurrent task requiring attention does not affect the learning of the local visual context. These results clearly showed that attentional deployment is not necessary for learning individual locations and clarified how the human visual system extracts and preserves regularity in complex visual environments for efficient visual information processing.