Brightness and darkness as perceptual dimensions.

Laboratory of Experimental Ophthalmology & BCN NeuroImaging Centre, School of Behavioural and Cognitive Neurosciences, University Medical Centre Groningen, University of Groningen, Groningen, The Netherlands.
PLoS Computational Biology (Impact Factor: 4.83). 11/2007; 3(10):e179. DOI: 10.1371/journal.pcbi.0030179
Source: PubMed

ABSTRACT: A common-sense assumption concerning visual perception states that brightness and darkness cannot coexist at a given spatial location. One corollary of this assumption is that achromatic colors, or perceived grey shades, are contained in a one-dimensional (1-D) space varying from bright to dark. The results of many previous psychophysical studies suggest, by contrast, that achromatic colors are represented as points in a color space composed of two or more perceptual dimensions. The nature of these perceptual dimensions, however, presently remains unclear. Here we provide direct evidence that brightness and darkness form the dimensions of a two-dimensional (2-D) achromatic color space. This color space may play a role in the representation of object surfaces viewed against natural backgrounds, which simultaneously induce both brightness and darkness signals. Our 2-D model generalizes to the chromatic dimensions of color perception, indicating that redness and greenness (blueness and yellowness) also form perceptual dimensions. Collectively, these findings suggest that human color space is composed of six dimensions, rather than the conventional three.
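The representational claim above — that an achromatic color is a point in a 2-D space with independent brightness and darkness coordinates, not a single grey value — can be sketched as a data structure. This is a minimal illustration, not the paper's quantitative model; the class name, field scales, and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AchromaticColor:
    """An achromatic color as a point in a 2-D space.

    On the conventional 1-D account these two fields would collapse
    into a single grey value on a bright-to-dark axis; on the 2-D
    account a surface can carry both signals at once, as when it is
    viewed against a natural background that simultaneously induces
    brightness and darkness. Ranges here (0.0-1.0) are illustrative.
    """
    brightness: float  # induced brightness signal
    darkness: float    # induced darkness signal

# Two surfaces a 1-D model could conflate but a 2-D model separates:
high_bright_low_dark = AchromaticColor(brightness=0.8, darkness=0.1)
mixed_signal_grey = AchromaticColor(brightness=0.8, darkness=0.7)
```

The same two-coordinate scheme would extend to each chromatic opponent pair (redness/greenness, blueness/yellowness), yielding the six-dimensional space the abstract proposes.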

  •
    ABSTRACT: A series of experiments was conducted to assess how the reflectance properties and the complexity of surface "mesostructure" (small-scale 3-D relief) influence perceived lightness. Experiment 1 evaluated the role of surface relief and gloss on perceived lightness. For surfaces with visible mesostructure, lightness constancy was better for targets embedded in glossy than matte surfaces. The results for surfaces that lacked surface relief were qualitatively different from those for the 3-D surrounds, exhibiting abrupt steps in perceived lightness at points at which the targets transition from being increments to decrements. Experiments 2 and 4 compared the matte and glossy 3-D surrounds to two control displays, which matched either pixel histograms or a phase-scrambled power spectrum, respectively. Although some improved lightness constancy was observed for the 3-D gloss display over the histogram-matched display, this benefit was not observed for phase-scrambled variants of these images with equated power spectra. These results suggest that the improved lightness constancy observed with 3-D surfaces can be well explained by the distribution of contrast across space and scale, independently of explicit information about surface shading or specularity, whereas the putatively "simpler" flat displays may evoke more complex midlevel representations similar to those evoked in conditions of transparency.
    Journal of Vision 07/2014; 14(8). DOI:10.1167/14.8.24 · 2.73 Impact Factor
  • Source
    ABSTRACT: Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.
    Frontiers in Human Neuroscience 08/2014; 8:640. DOI:10.3389/fnhum.2014.00640 · 2.90 Impact Factor
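The core edge-integration computation described in the abstract above — extract directed steps in log luminance at borders, then sum them along a path from the background to the target — can be sketched in a few lines. This is a simplified illustration: it applies a unit weight to every edge (the model discussed above modulates edge weights by border ownership and attention), assumes a single pre-selected path of regions, and the function name is hypothetical.

```python
import math

def edge_integrated_lightness(path_luminances):
    """Sum directed log-luminance steps along a path of regions.

    `path_luminances` lists the luminance of each region along a path
    starting at a common background region and ending at the target.
    Each border contributes the local step log(L_next) - log(L_prev);
    lightness is the spatial integral (here, the sum) of those steps.
    All edge weights are set to 1, a deliberate simplification.
    """
    steps = [math.log(nxt) - math.log(prev)  # local directed edge signal
             for prev, nxt in zip(path_luminances, path_luminances[1:])]
    return sum(steps)  # spatial integration of the edge steps

# With unit weights the steps telescope, so a target twice as luminant
# as the background integrates to log(2) regardless of the luminance
# of the intermediate region along the path:
print(edge_integrated_lightness([10.0, 40.0, 20.0]))  # prints log(2) ≈ 0.693
```

The telescoping behavior in the example is exactly why the non-unit perceptual weights matter in the actual model: with unequal weights on different edges, the intermediate regions along the path do influence the computed lightness.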
  • Source
    ABSTRACT: When we look at the world, or a graphical depiction of the world, we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance, based on a broader theoretical framework called gamut relativity, that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.
    PLoS ONE 11/2014; 9(11):e113159. DOI:10.1371/journal.pone.0113159 · 3.53 Impact Factor
