(A) The percentage of first eye movements directed to each of the different distractors in Experiment 2, shown separately for each distractor type. (B) The mean target saccade latencies, that is, the time needed to initiate a saccade to the target when the target was the first item selected. Error bars indicate ± 1 SEM.
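For reference, the standard error of the mean shown in the error bars is, by the standard definition, the sample standard deviation divided by the square root of the number of participants (the caption does not state the exact estimator used):

```latex
\mathrm{SEM} = \frac{s}{\sqrt{n}}, \qquad
s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}
```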

Source publication
Article
Full-text available
It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variably attributed to (1...

Contexts in source publication

Context 1
... results of Experiment 2 are depicted in Figure 4A. Analyzing the percentage of first distractor fixations with a one-way ANOVA showed a significant main effect of distractor type, F(6, 126) = 44.58, ...
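A minimal sketch of how such a one-way repeated-measures ANOVA could be run in Python, assuming a hypothetical long-format table with columns subject, distractor, and pct_first_fix (the file and column names are illustrative, not from the paper; 22 participants × 7 distractor types give the reported df of 6 and 126):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical data: one row per participant x distractor type, holding the
# percentage of trials on which that distractor received the first fixation.
fixations = pd.read_csv("first_fixations.csv")  # columns: subject, distractor, pct_first_fix

# One-way repeated-measures ANOVA with distractor type as the within-subject
# factor; 22 participants and 7 distractor types yield F(6, 126), as reported.
anova = AnovaRM(
    data=fixations,
    depvar="pct_first_fix",
    subject="subject",
    within=["distractor"],
).fit()
print(anova)
```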
Context 2
... t-tests revealed that the aqua-blue distractor captured the gaze most strongly, significantly more strongly than the target-similar aqua distractor, t(21) = 2.74, p = 0.012, and marginally significantly more strongly than the blue distractor, t(21) = 1.85, p = 0.079. By contrast, capture by the target-similar distractor did not differ from the blue distractor, t < 1 (see Figure 4A). ...
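The pairwise comparisons could be reproduced with paired t-tests, as in this sketch (continuing with the hypothetical fixations table and illustrative distractor labels from the sketch above; with n = 22 participants, df = 21, as reported):

```python
from scipy import stats

# Reshape the hypothetical long-format table: one column per distractor type.
wide = fixations.pivot(index="subject", columns="distractor", values="pct_first_fix")

# Paired t-tests of the aqua-blue distractor against the target-similar aqua
# and the blue distractors (the reported comparisons).
for other in ["aqua", "blue"]:
    t, p = stats.ttest_rel(wide["aqua-blue"], wide[other])
    print(f"aqua-blue vs. {other}: t(21) = {t:.2f}, p = {p:.3f}")
```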
Context 3
... corrected). As shown in Figure 4B, the target saccade latencies were longest in the presence of a target-similar distractor, followed by the aqua-blue distractor and the blue distractor, although none of these differences was significant, all ts < 1. Target saccade latencies were significantly longer in the presence of any of these distractors (aqua, aqua-blue, blue) than in the presence of the nontarget-similar turquoise, gray, and salient red distractors, all ts > 2.21, ps ≤ 0.038, though not the green distractor, which did not differ from any of the other distractors, all ts < 1.94, ps > 0.066. Target saccade latencies also did not differ among the turquoise, green, gray, and red distractors, all ts < 1.94, ps > 0.066. ...

Citations

... Deviating from the optimal tuning account, tuning to relative features is predicted to occur independently of nontarget similarity, and can lead to selection of vastly different colours, rather than being limited to a range of similar features or to items with a slightly different (shifted) feature value (e.g., York & Becker, 2020). According to the relational account, attention will only be tuned to a particular feature value when the target cannot be found by tuning to the relative feature, for example when an orange target is surrounded by equal numbers of red and yellow items (e.g., Becker, Harris, Venini & Retell, 2014; Harris, Remington & Becker, 2013; Schoenhammer, Becker & Kerzel, 2020). ...
... Given that the optimal tuning effects were observed only rather late in the visual search trials, it was concluded that optimal tuning does not guide attention, but appears to guide perceptual decision-making after an item has been selected (when the task requires very fine-grained perceptual decision-making; e.g., Scolari & Serences, 2009, 2010). Attention, by contrast, is guided by a much broader, relational target 'template' that can include a range of different colours that may even cross colour boundaries (Yu et al., 2022; see also York & Becker, 2020). ...
... Another explanation is that the relationally matching distractor is more salient than the optimal distractor, as it has a slightly higher feature contrast to the other items (target and nontargets) than the optimal distractor. Previous studies have included a very salient and highly dissimilar distractor as a control and did not find any strong effects for this distractor (e.g., Becker, Lewis & Axtens, 2017; Martin & Becker, 2018; York & Becker, 2020), arguing against a saliency explanation. However, as the two explanations have never been formally tested, they both remain possible (and other explanations are conceivable as well). ...
Preprint
Full-text available
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than to a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
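The N2pc referred to here is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrodes (typically PO7/PO8) in a window of roughly 200-300 ms after display onset. A minimal NumPy sketch under those conventional assumptions (the electrode pair and time window are standard practice, not taken from the abstract):

```python
import numpy as np

def n2pc_amplitude(po7, po8, item_side, times, window=(0.200, 0.300)):
    """Mean contralateral-minus-ipsilateral amplitude at PO7/PO8.

    po7, po8  : (n_trials, n_samples) voltage arrays for the left/right channel
    item_side : (n_trials,) array of 'L'/'R' giving the lateral item's hemifield
    times     : (n_samples,) sample times in seconds, relative to display onset
    """
    is_left = (item_side == "L")[:, None]
    contra = np.where(is_left, po8, po7)   # channel opposite the lateral item
    ipsi = np.where(is_left, po7, po8)     # channel on the same side as the item
    diff_wave = (contra - ipsi).mean(axis=0)
    in_window = (times >= window[0]) & (times <= window[1])
    return diff_wave[in_window].mean()     # mean amplitude in the N2pc window
```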
... These data suggest that good-enough guidance not only occurs because of fixed limits in visual processing but also because guidance is optimized to be fast, even when inaccurate: errors typically result in low costs because post-selection decision processes can quickly identify them and trigger a new saccade to a different potential target [55,59,60]. Interestingly, although the measured precision of target decisions is greater than guidance, memory representations ... [Box 2, "Brain networks for attentional guidance": Attentional guidance relies on a network of regions that have frequently been described in terms of 'sources' and 'sites'.] ...
... For example, looking for an orange target among yellower distractors results in attentional capture by objects that are 'redder' than the true target. Similar effects have been reported for targets defined by shape, size, line orientation, direction of motion, and even for facial expression search [4,60,69–73]. The size of target shifting within visual search contexts is greater than that of simultaneous stimulus contrast alone [73,74]. ...
Article
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
... In line with this prediction, several visual search studies showed that when an orange target is presented among mostly yellow(er) items, a red irrelevant distractor was more likely to be attended first than the target, even when it was quite dissimilar from the target, suggesting that attention was biased to all redder items, or the reddest item (e.g., Becker, 2010; Becker et al., 2013; Hamblin-Frohman & Becker, 2021; York & Becker, 2020). ...
... Selection of the distractor was reflected both in higher response times (RTs) to the target and in a high proportion of first eye movements to the red distractor. Several studies also included a visually salient distractor with a dissimilar colour (e.g., blue), but found no or only very weak effects of saliency, ruling out that selection of the relationally matching (e.g., red) distractor was mediated by bottom-up, stimulus-driven processes (e.g., Martin & Becker, 2018; York & Becker, 2020). ...
... Third, previous work on characterising the attentional tuning function is also limited in that the studies typically used fairly sparse displays of four to eight items (e.g., Becker, 2010; Hamblin-Frohman & Becker, 2021; Navalpakkam & Itti, 2007; Yu et al., 2022). The results typically showed that visually salient items (e.g., a red item among blue-green stimuli) did not attract attention, or attracted attention only very weakly (e.g., Gaspelin et al., 2015; Martin & Becker, 2018; York & Becker, 2020). These results were taken to show that strong capture by relational items was not mediated by saliency, and that bottom-up saliency may modulate attention to a lesser extent than assumed in current models (e.g., Theeuwes, 2013; Wolfe, 1994). ...
Preprint
Full-text available
It is well-known that visual attention can be tuned in a context-dependent manner to elementary features, such as searching for all redder items or the reddest item, supporting a relational theory of visual attention. However, in previous studies, the conditions were often conducive to relational search, allowing the target to be selected relationally on 50% of trials or more. Moreover, the search displays were often only sparsely populated and presented repeatedly, rendering it possible that relational search was based on context learning and was not spontaneous. The present study tested the shape of the attentional tuning function in 36-item search displays, when the target never had a maximal feature value (e.g., was never the reddest or yellowest item), and when only the target colour but not the context colour was known. The first fixations on a trial showed that these displays still reliably evoked relational search, even when participants had no advance information about the context and no on-task training. Context learning further strengthened relational tuning on subsequent trials, but was not necessary for relational search. Analysing the progression of visual search within a single trial showed that attention is first guided to the relationally maximal item (e.g., reddest), then the next-maximal (e.g., next-reddest) item, and so forth, before attention can home in on target-matching features. In sum, the results support two tenets of the relational account: that information about the dominant feature in a display can be rapidly extracted, and that this information can be used to guide attention to the relatively best-matching features.
... For instance, for orange targets among yellow nontargets, the template facilitates values even redder than the target, which still aids target selection while making nontarget selection less likely. Becker (2010; Martin & Becker, 2018; York & Becker, 2020) put forward a relational account to explain such shifts, in which attention is guided toward the feature-space direction in which the target differs from the nontargets. In their experiments, they showed that observers are most distracted not by the elements most similar to the target but by those that differ most strongly from the nontargets in the same direction as the target. ...
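To make the contrast between feature-specific and relational guidance concrete, here is a toy priority computation on a made-up one-dimensional colour axis (illustrative values only; this is not code from any of the cited studies):

```python
import numpy as np

# Toy 1-D colour axis: higher values = "redder".
items = np.array([0.30, 0.35, 0.55, 0.90])   # yellowish nontargets, a target-coloured
                                             # item (0.55), and a red distractor (0.90)
target, nontarget_mean = 0.55, 0.32

# Feature-specific guidance: priority falls off with distance from the target value.
feature_priority = -np.abs(items - target)

# Relational guidance: priority increases along the direction in which the
# target differs from the nontargets (here "redder"), so the reddest item
# outranks the item with the exact target colour.
direction = np.sign(target - nontarget_mean)
relational_priority = direction * items

print(items[np.argmax(feature_priority)])     # 0.55 -> the target-coloured item wins
print(items[np.argmax(relational_priority)])  # 0.90 -> the reddest item captures first
```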
Article
Full-text available
In real-world tasks visual attention is rarely aimed at a single object. Humans rather "forage" the visual scene for information, dynamically switching attentional templates. Several visual search studies have found that observers often use suboptimal attentional control strategies, possibly to avoid effort. Here, we investigated with a foraging paradigm if observers' reluctance to switch between attentional templates increases with template specificity. To that end, we manipulated the feature context of displays in which participants "foraged" moving stimuli on a tablet-PC. Experiment 1 (N = 35) revealed a decline in switching tendency and foraging efficiency with increasing feature-space distance between target alternatives. Experiment 2 (N = 36) found even lower flexibility with distractor color close to target colors and strongest impairments with distractor color in between target colors. Our results demonstrate that visual information sampling is most flexible when broad (instead of very specific) templates and relational search strategies are possible (e.g., attending to "redder" objects), with implications for both attention research and applications, especially in visual-foraging-like tasks, such as baggage screening or medical image assessment. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
... The finding that memory for Similar and Dissimilar colours is indistinguishable once changes in the relative colours are controlled for suggests that there may not be a genuine similarity effect. Similar results were also obtained in the domain of attention research, where it was shown that the similarity effect (e.g., Duncan & Humphreys, 1989; Folk & Remington, 1998) was in fact due to top-down tuning to relative features (e.g., Becker, 2010; Becker, Folk, & Remington, 2013; York & Becker, 2020). However, further research is required to establish whether the Relational Account can explain all the occurrences of similarity effects reported in previous VSTM studies (e.g., Jiang et al., 2016; Yang & Mo, 2017). ...
Article
Visual short-term memory (VSTM) is an important resource that allows temporarily storing visual information. Current theories posit that elementary features (e.g., red, green) are encoded and stored independently of each other in VSTM. However, they have difficulty explaining the similarity effect, that similar items can be remembered better than dissimilar items. In Experiment 1 (N = 20), we tested whether the similarity effect may be due to storing items in a context-dependent manner in VSTM (e.g., as the reddest/yellowest item). In line with a relational account of VSTM, we found that the similarity effect is not due to feature similarity, but to an enhanced sensitivity for detecting changes when the relative colour of a to-be-memorised item changes (e.g., from reddest to not-reddest item), compared with when an item undergoes the same change but retains its relative colour (e.g., remains the reddest). Experiment 2 (N = 20) showed that VSTM load, as indexed by the CDA amplitude in the EEG, was smaller when the colours were ordered so that they all had the same relationship than when the same colours were out of order, requiring encoding of different relative colours. With this, we report two new effects in VSTM: a relational detection advantage, which describes an enhanced sensitivity to relative changes in change detection, and a relational CDA effect, which reflects that VSTM load, as indexed by the CDA, scales with the number of (different) relative features between the memory items. These findings support a relational account of VSTM and question the view that VSTM stores features such as colours independently of each other.
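For context, the contralateral delay activity (CDA) mentioned here is conventionally quantified as the mean contralateral-minus-ipsilateral voltage difference at posterior electrodes over the retention interval (the exact electrodes and time window are standard practice, not stated in the abstract):

```latex
\mathrm{CDA} \;=\; \frac{1}{t_2 - t_1}\int_{t_1}^{t_2}
\bigl[\,V_{\mathrm{contra}}(t) - V_{\mathrm{ipsi}}(t)\,\bigr]\,dt
```

where [t1, t2] is the retention window.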
... Salient, target-dissimilar distractors produced only weak and inconsistent effects, with a salient green distractor producing a significant AB in the redder target condition but not in the yellower target condition (and both being significantly weaker than the AB of the relatively matching and target-similar distractors). These findings closely match the results usually found in spatial attention, when observers have to search for a salient target with a particular color: In these experiments, too, we typically find that relatively matching distractors attract attention and the gaze most strongly, closely followed by target-similar distractors, and significantly less or no capture by salient items (e.g., Martin & Becker, 2018; York & Becker, 2020). This correspondence indicates that attentional orienting and attentional engagement share important characteristics, and that they are partly based on the same (relational) processes and mechanisms (see also Visser, Zuvic, Bischof, & Di Lollo, 1999). ...
Article
Visual attention allows selecting relevant information from cluttered visual scenes and is largely determined by our ability to tune or bias visual attention to goal-relevant objects. Originally, it was believed that this top-down bias operates on the specific feature values of objects (e.g., tuning attention to orange). However, subsequent studies showed that attention is tuned in a context-dependent manner to the relative feature of a sought-after object (e.g., the reddest or yellowest item), which drives covert attention and eye movements in visual search. However, the evidence for the corresponding relational account is still limited to the orienting of spatial attention. The present study tested whether the relational account can be extended to explain attentional engagement and, specifically, the attentional blink (AB) in a rapid serial visual presentation (RSVP) task. In two blocked conditions, observers had to identify an orange target letter that could be either redder or yellower than the other letters in the stream. In line with previous work, a target-matching (orange) distractor presented prior to the target produced a robust AB. Extending on prior work, we found an equally large AB in response to relatively matching distractors that matched only the relative color of the target (i.e., red or yellow, depending on whether the target was redder or yellower). Unrelated distractors mostly failed to produce a significant AB. These results closely match previous findings assessing spatial attention and show that the relational account can be extended to attentional engagement and selection of continuously attended objects in time.
... The hallmark of such a relational search strategy is that all distractors that are redder than the other items (e.g., red-orange or red) can strongly attract attention and the gaze, even when they are very dissimilar to the target and could not be confused with the target (e.g., Becker, 2010a; Becker, Folk, & Remington, 2010, 2013; Becker, Harris, Venini, & Retell, 2014). Moreover, in visual search, all distractors that are redder than the target itself (e.g., red-orange, red) will attract attention and the gaze more strongly than a target-similar distractor with the same color as the target (orange; e.g., Becker, Harris, et al., 2014; Martin & Becker, 2018; York & Becker, 2020). When the orange target is presented among red nontarget items in a different block (yellower target), yellow-orange and yellow distractors attract attention and the gaze more strongly than target-similar orange distractors, showing that attention has now been tuned to yellower, or the yellowest item in the visual field (Becker, 2010a; Becker, Harris, et al., 2014). ...
... The finding of stronger capture by redder or yellower items is also consistent with broad top-down tuning to a color category (e.g., red; Guided Search 2.0; Wolfe, 1994), or with tuning attention to a shifted, exaggerated target feature value, as proposed in optimal tuning accounts (e.g., Navalpakkam & Itti, 2007; Scolari & Serences, 2009). However, multiple studies showed that attention is indeed genuinely tuned to the relative target feature in a context-dependent manner, not to a broadly defined feature category (Becker, 2010a; Becker, Valuch, & Ansorge, 2014; Becker, Folk, & Remington, 2013), or to a feature that is shifted away from the nontarget feature to a more extreme feature value (e.g., York & Becker, 2020). For instance, in a variant of the spatial cueing paradigm, Becker et al. (2013) showed that relatively matching distractors can attract attention even when they have the same color as the nontargets, which is inconsistent with broad categorical tuning or optimal tuning (see Becker, Harris, York, & Choi, 2017 for similar findings with conjunction cues and targets). ...
... For instance, in a variant of the spatial cueing paradigm, Becker et al. (2013) showed that relatively matching distractors can attract attention even when they have the same color as the nontargets, which is inconsistent with broad categorical tuning or optimal tuning (see Becker, Harris, York, & Choi, 2017 for similar findings with conjunction cues and targets). Moreover, in a visual search task, York and Becker (2020) showed that distractors can still attract attention and the gaze when they are very dissimilar from the target and outside the area of optimal tuning. Collectively, these results show that attention is genuinely tuned to the relative feature of the target, and suggest that previous results attributed to broad categorical or optimal tuning were in fact due to relational tuning (e.g., Becker, 2010a, 2013a, 2013b; Becker et al., 2013). ...
Article
Full-text available
Current models of attention propose that we can tune attention in a top-down controlled manner to a specific feature value (e.g., shape, color) to find specific items (e.g., a red car; feature-specific search). However, subsequent research has shown that attention is often tuned in a context-dependent manner to the relative features that distinguish a sought-after target from other surrounding nontarget items (e.g., larger, bluer, and faster; relational search). Currently, it is unknown whether search will be feature-specific or relational in search for multiple targets with different attributes. In the present study, observers had to search for 2 targets that differed either across 2 stimulus dimensions (color, motion; Experiment 1) or within the same stimulus dimension (color; Experiment 2: orange/redder or aqua/bluer). We distinguished between feature-specific and relational search by measuring eye movements to different types of irrelevant distractors (e.g., relatively matching vs. feature-matching). The results showed that attention was biased to the 2 relative features of the targets, both across different feature dimensions (i.e., motion and color) and within a single dimension (i.e., 2 colors; bluer and redder). The results were not due to automatic intertrial effects (dimension weighting or feature priming), and we found only small effects for valid precueing of the target feature, indicating that relational search for two targets was conducted with relative ease. This is the first demonstration that attention is top-down biased to the relative target features in dual target search, which shows that the relational account generalizes to multiple target search. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
... Additionally, the Kerzel study specifically avoided relational guidance of search (Becker, 2010; Becker, Folk & Remington, 2013), whereas we encouraged it in this study. This difference could impact the results, because recent research has questioned whether relational guidance is the same as the shifting of a search template (York & Becker, 2020). Finally, the tasks measured bias in distinct ways, and the differences in methodology could explain the discrepant results. ...
Article
Full-text available
When searching for a specific object, we often form an image of the target, which we use as a search template. This template is thought to be maintained in working memory, primarily because of evidence that the contents of working memory influence search behavior. However, it is unknown whether this interaction applies in both directions. Here, we show that changes in search templates influence working memory. Participants were asked to remember the orientation of a line that changed every trial, and on some trials (75%) to search for that orientation, but on the remaining trials to recall the orientation. Critically, we manipulated the target template by introducing a predictable context: distractors in the visual search task were always counterclockwise (or clockwise) from the search target. The predictable context produced a large bias in search. Importantly, we also found a similar bias in orientation memory reports, demonstrating that working memory and target templates were not held as completely separate, isolated representations. However, the memory bias was considerably smaller than the search bias, suggesting that, although there is a common source, the two may not be driven by a single, shared process.