Searching for unknown feature targets on more than one dimension: Investigating a “dimension-weighting” account

Birkbeck College, University of London, England.
Perception & Psychophysics (Impact Factor: 2.22). 01/1996; 58(1):88-101. DOI: 10.3758/BF03205479
Source: PubMed


Search for odd-one-out feature targets takes longer when the target can be present in one of several dimensions as opposed to only one dimension (Müller, Heller, & Ziegler, 1995; Treisman, 1988). Müller et al. attributed this cost to the need to discern the target dimension. They proposed a dimension-weighting account, in which master map units compute, in parallel, the weighted sum of dimension-specific saliency signals. If the target dimension is known in advance, signals from that dimension are amplified. But if the target dimension is unknown, it is determined in a process that shifts weight from the nontarget to the target dimension. The weight pattern thus generated persists across trials, producing intertrial facilitation for a target (trial n + 1) dimensionally identical to the preceding target (trial n). In the present study, we employed a set of new tasks in order to reexamine and extend this account. Targets were defined along two possible dimensions (color or orientation) and could take on one of two feature values (e.g., red or blue). Experiments 1 and 2 required absent/present and color/orientation discrimination of a single target, respectively. They showed that (1) both tasks involve weight shifting, though (explicitly) discerning the dimension of a target requires some process additional to simply detecting its presence; and (2) the intertrial facilitation is indeed (largely) dimension specific rather than feature specific in nature. In Experiment 3, the task was to count the number of targets in a display (either three or four), which could be either dimensionally the same (all color or all orientation) or mixed (some color and some orientation). As predicted by the dimension-weighting account, enumerating four targets all defined within the same dimension was faster than counting three such targets or mixed targets defined in two dimensions.
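The dimension-weighting account described above can be sketched computationally. The following is an illustrative toy model, not the authors' implementation: master map units take a weighted sum of dimension-specific saliency signals, and after each trial weight shifts from the nontarget to the target dimension, so a dimensionally identical target on trial n + 1 produces a stronger signal (intertrial facilitation). The class name, the `shift_rate` parameter, and the update rule are assumptions for illustration only.

```python
# Toy sketch of a dimension-weighting model (hypothetical parameters).

DIMENSIONS = ("color", "orientation")

class DimensionWeighting:
    def __init__(self, shift_rate=0.3):
        # Equal weights when the target dimension is unknown in advance.
        self.weights = {d: 0.5 for d in DIMENSIONS}
        self.shift_rate = shift_rate  # assumed free parameter

    def master_map_signal(self, saliency):
        # Master map units compute the weighted sum of
        # dimension-specific saliency signals.
        return sum(self.weights[d] * saliency[d] for d in DIMENSIONS)

    def update(self, target_dimension):
        # Shift weight from the nontarget to the target dimension;
        # the resulting weight pattern persists into the next trial.
        for d in DIMENSIONS:
            goal = 1.0 if d == target_dimension else 0.0
            self.weights[d] += self.shift_rate * (goal - self.weights[d])
        total = sum(self.weights.values())
        for d in DIMENSIONS:
            self.weights[d] /= total  # keep weights summing to 1

model = DimensionWeighting()
pop_out = {"color": 1.0, "orientation": 0.0}  # a color-defined target

baseline = model.master_map_signal(pop_out)  # trial n, dimension unknown
model.update("color")                        # trial n was a color target
repeated = model.master_map_signal(pop_out)  # trial n + 1, same dimension

assert repeated > baseline  # dimension repetition boosts the signal
```

On this sketch, a dimension change on trial n + 1 (e.g., an orientation target after a color target) would meet a lowered weight and thus a weaker master map signal, which is the account's explanation of the dimension-specific intertrial cost.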

Available from: Hermann J Müller, Apr 05, 2014
  • Source
    • "Effects at the level of the stimulus dimension have been demonstrated previously in the visual search paradigm (Found & Müller, 1996; Müller, Heller, & Ziegler, 1995; Müller, Reimann, & Krummenacher, 2003). For example Müller, Reimann, and Krummenacher (2003) had participants report the presence or absence of a pop-out target that on each trial could be one of two colours (red or blue) or one of two oblique orientations, presented with a number of vertical green distractors. "
    ABSTRACT: Previous work on attentional capture has shown the attentional system to be quite flexible in the stimulus properties it can be set to respond to. Several different attentional "modes" have been identified. Feature search mode allows attention to be set for specific features of a target (e.g., red). Singleton detection mode sets attention to respond to any discrepant item ("singleton") in the display. Relational search sets attention for the relative properties of the target in relation to the distractors (e.g., redder, larger). Recently, a new attentional mode was proposed that sets attention to respond to any singleton within a particular feature dimension (e.g., colour; Folk & Anderson, 2010). We tested this proposal against the predictions of previously established attentional modes. In a spatial cueing paradigm, participants searched for a colour target that was randomly either red or green. The nature of the attentional control setting was probed by presenting an irrelevant singleton cue prior to the target display and assessing whether it attracted attention. In all experiments, the cues were red, green, blue, or a white stimulus rapidly rotated (motion cue). The results of three experiments support the existence of a "colour singleton set," finding that all colour cues captured attention strongly, while motion cues captured attention only weakly or not at all. Notably, we also found that capture by motion cues in search for colour targets was moderated by their frequency; rare motion cues captured attention (weakly), while frequent motion cues did not.
    Attention, Perception, & Psychophysics 05/2015; 77(7). DOI:10.3758/s13414-015-0927-0 · 2.17 Impact Factor
  • Source
    • "On the other hand, probability cueing might also involve a feature- or dimension-based component, that is, selectively influencing the processing of certain features or feature dimensions (at certain locations). The latter is a central component of Guided-Search-type models of visual attention (e.g., Wolfe et al., 1989; Wolfe, 1994; Müller et al., 1995; Found and Müller, 1996), which assume a processing architecture in which local feature contrast signals are first calculated in parallel (within separate dimensions). These signals can then be top–down modulated, or "weighted", prior to their integration into a master salience map, which guides the deployment of attention. "
    ABSTRACT: Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed “probability cueing.” The present study investigated whether probability cueing can not only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor (“additional singleton”) were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.
    Frontiers in Psychology 11/2014; 5. DOI:10.3389/fpsyg.2014.01195 · 2.80 Impact Factor
  • Source
    • "Traditionally, facilitation of low-level perceptual skills has been primarily attributed to two mechanisms: attention and visual perceptual learning. For example, previous psychophysical studies of vision showed that selective attention [1], [2] and feature-based attention [3], [4] can generate perceptual improvements. In addition, previous visual perceptual learning studies demonstrated perceptual improvements specific to the stimulus attributes used in training (e.g. "
    ABSTRACT: Can subjective belief about one's own perceptual competence change one's perception? To address this question, we investigated the influence of self-efficacy on sensory discrimination in two low-level visual tasks: contrast and orientation discrimination. We utilised a pre-post manipulation approach whereby two experimental groups (high and low self-efficacy) and a control group made objective perceptual judgments on the contrast or the orientation of the visual stimuli. High and low self-efficacy were induced by the provision of fake social-comparative performance feedback and fictional research findings. Subsequently, the post-manipulation phase was performed to assess changes in visual discrimination thresholds as a function of the self-efficacy manipulations. The results showed that the high self-efficacy group demonstrated greater improvement in visual discrimination sensitivity compared to both the low self-efficacy and control groups. These findings suggest that subjective beliefs about one's own perceptual competence can affect low-level visual processing.
    PLoS ONE 10/2014; 9(10):e109392. DOI:10.1371/journal.pone.0109392 · 3.23 Impact Factor