Brightness and darkness as perceptual dimensions.

Laboratory of Experimental Ophthalmology & BCN NeuroImaging Centre, School of Behavioural and Cognitive Neurosciences, University Medical Centre Groningen, University of Groningen, Groningen, The Netherlands.
PLoS Computational Biology 11/2007; 3(10):e179. DOI: 10.1371/journal.pcbi.0030179
Source: PubMed

ABSTRACT: A common-sense assumption concerning visual perception states that brightness and darkness cannot coexist at a given spatial location. One corollary of this assumption is that achromatic colors, or perceived grey shades, are contained in a one-dimensional (1-D) space varying from bright to dark. The results of many previous psychophysical studies suggest, by contrast, that achromatic colors are represented as points in a color space composed of two or more perceptual dimensions. The nature of these perceptual dimensions, however, presently remains unclear. Here we provide direct evidence that brightness and darkness form the dimensions of a two-dimensional (2-D) achromatic color space. This color space may play a role in the representation of object surfaces viewed against natural backgrounds, which simultaneously induce both brightness and darkness signals. Our 2-D model generalizes to the chromatic dimensions of color perception, indicating that redness and greenness (blueness and yellowness) also form perceptual dimensions. Collectively, these findings suggest that human color space is composed of six dimensions, rather than the conventional three.
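The central claim above can be illustrated with a minimal sketch: if achromatic colors are points in a 2-D (brightness, darkness) space, then two stimuli that a 1-D model would treat as the same grey (equal net brightness-minus-darkness) can still be distinct. The representation, the Euclidean metric, and the numeric values below are illustrative assumptions, not quantities from the paper.

```python
from math import hypot

def achromatic_color(brightness, darkness):
    """An achromatic color as a point in a 2-D (brightness, darkness)
    space, per the paper's central claim. Units and ranges are
    illustrative, not taken from the paper."""
    return (brightness, darkness)

def perceptual_distance(c1, c2):
    # Euclidean distance in the 2-D space -- an assumption; the
    # abstract does not commit to a specific metric.
    return hypot(c1[0] - c2[0], c1[1] - c2[1])

# Two stimuli with the same brightness-minus-darkness difference
# (identical greys under a 1-D model) remain distinct points here:
a = achromatic_color(0.8, 0.2)
b = achromatic_color(0.6, 0.0)
print(perceptual_distance(a, b))  # nonzero: distinct in 2-D
```

Under this sketch, a 1-D grey scale corresponds to a single line through the 2-D space, which is exactly the restriction the paper argues against.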

    ABSTRACT: Simultaneous contrast refers to the whitening or blackening of physically identical image regions surrounded by regions of low or high luminance, respectively. A common method of measuring the strength of this effect is achromatic color matching, in which subjects adjust the luminance of a target region to achieve an achromatic color match with another region. Here I present psychophysical data questioning the assumption, built into many models of achromatic color perception, that achromatic colors are represented as points in a one-dimensional (1D) perceptual space, or an absolute achromatic color gamut. I present an alternative model in which the achromatic color gamut corresponding to a target region is defined relatively, with respect to surround luminance. Different achromatic color gamuts in this model correspond to different 1D lines through a 2D perceptual space composed of blackness and whiteness dimensions. Each such line represents a unique gamut of achromatic colors ranging from black to white. I term this concept gamut relativity. Achromatic color matches made between targets surrounded by regions of different luminance are shown to reflect the relative perceptual distances between points lying on different gamut lines. The model suggests a novel geometrical approach to simultaneous contrast and achromatic color matching in terms of the vector summation of local luminance and contrast components, and sets the stage for a unified computational theory of achromatic color perception.
    Vision Research 08/2012; 69:49-63.
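The geometry described in the abstract above can be sketched concretely: each surround luminance picks out a different 1D gamut line through a 2D (blackness, whiteness) space, and a "match" between targets on different surrounds is a nearest point on another line rather than an identical 1D grey value. The specific line parameterization below is an invented illustration, not the fitted model from the paper.

```python
import numpy as np

def gamut_line(surround_luminance, n=101):
    """Hypothetical 1D achromatic gamut as a line through a 2D
    (blackness, whiteness) space. The geometry is an illustrative
    assumption: higher surround luminance tilts the line toward
    blackness and away from whiteness."""
    t = np.linspace(0.0, 1.0, n)  # position along the gamut, black -> white
    whiteness = t * (1.0 - 0.5 * surround_luminance)
    blackness = (1.0 - t) * (0.5 + 0.5 * surround_luminance)
    return np.column_stack([blackness, whiteness])

def closest_match(point, line):
    # An achromatic "match" as the point on another gamut line at
    # minimum 2D distance from the target point.
    d = np.linalg.norm(line - point, axis=1)
    return line[np.argmin(d)]

# Different surrounds give different lines through the same 2D space:
low = gamut_line(0.1)    # dark surround
high = gamut_line(0.9)   # bright surround
target = low[70]         # a light grey on the dark surround
match = closest_match(target, high)
```

The point of the sketch is only structural: because the two lines are not identical, the matched point generally differs from the target point in the 2D space, which is the relativity the abstract describes.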
    ABSTRACT: Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.
    Frontiers in Human Neuroscience 01/2014; 8:640.
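The core computation named in the abstract above, edge integration, can be sketched in a few lines: take directed steps in log luminance along a path from background to target, weight each step by a per-edge gain (the quantity the grouping and attention stages would set in the full model), and sum. This is a schematic reading of the abstract, not code from the paper; the example luminances and weights are invented.

```python
import numpy as np

def edge_integrated_lightness(luminances, weights=None):
    """Sketch of edge integration: the lightness signal at the end of a
    path is the weighted sum of directed log-luminance steps along it.
    `luminances` lists region luminances along a path from background
    to target; `weights` are per-edge gains (set by grouping/attention
    in the full model -- here just supplied as numbers)."""
    lum = np.asarray(luminances, dtype=float)
    steps = np.diff(np.log(lum))           # local, directed log-luminance steps
    if weights is None:
        weights = np.ones_like(steps)      # default: unit gain per edge
    return float(np.dot(weights, steps))   # integrated lightness signal

# Path from a background (lum 10) across a frame (lum 40) to a target (lum 20).
# With unit weights the steps telescope to log(target/background):
print(edge_integrated_lightness([10.0, 40.0, 20.0]))

# With unequal edge gains the sum no longer telescopes, so the computed
# lightness depends on the intervening edges -- the model's key property:
print(edge_integrated_lightness([10.0, 40.0, 20.0], weights=[1.0, 0.5]))
```

The second call shows why edge weighting matters: once gains differ across edges, two paths ending at the same target luminance can yield different lightness signals.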
    ABSTRACT: The visual system's computation of lightness (perceived reflectance) leads to contrast effects in which a gray target region appears lighter on a black background than on a white one. Here we show a paradoxical contrast effect in which targets look lighter after adding regions that increase the scene's average luminance, and darker after adding regions that decrease this luminance. The paradoxical effect emerges if the target sits either on a black local background surrounded by a white remote background, or on a white local background surrounded by a black remote background. It does not occur if both backgrounds have the same luminance. The effect is consistent with Bressan's double-anchoring theory, and likely also with those edge-integration theories that assume gain control, but differs from previously reported effects of assimilation, articulation, reverse contrast, and remote contrast.
    Vision Research 11/2009; 50(2):144-8.
