Article

On pictures and stuff: Image quality and material appearance


Abstract

Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image, we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.
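
To make the two manipulations concrete, the following sketch shows how stimulus images of varying contrast and sharpness might be generated. This is illustrative only: the paper does not specify its image-processing pipeline, so the PIL-based functions, the parameter ranges, and the filename are assumptions.

```python
from PIL import Image, ImageEnhance, ImageFilter

def degrade_contrast(image, factor):
    """Scale image contrast about the mean; factor 1.0 leaves the image
    unchanged, values below 1.0 reduce contrast."""
    return ImageEnhance.Contrast(image).enhance(factor)

def degrade_sharpness(image, radius):
    """Reduce sharpness with a Gaussian blur of the given radius (pixels)."""
    return image.filter(ImageFilter.GaussianBlur(radius))

# Example: ladders of degraded stimuli from one rendered image of a glossy
# object (hypothetical filename).
stimulus = Image.open("glossy_object.png")
contrast_series = [degrade_contrast(stimulus, f) for f in (1.0, 0.8, 0.6, 0.4)]
sharpness_series = [degrade_sharpness(stimulus, r) for r in (0.0, 1.0, 2.0, 4.0)]
```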


... In addition, some studies explored how to describe the visual appearance of different materials using color appearance along with texture [15] or gloss [16]. Color is observed in two forms: related colors and unrelated colors [2]. Most color appearance models are based on related colors, which are perceived to belong to an area or object seen in relation to other colors, while Withouck et al. [17][18][19] carried out a series of studies on unrelated colors, which are perceived to belong to an area or object seen in isolation from other colors. They proposed a method for predicting the brightness of unrelated self-luminous stimuli and a new model named CAM15u. ...
Article
This study aims to expand the applications of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, the CIELAB model and CIECAM02, were used to develop algorithms that predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color content. Moreover, extensive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception. © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
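
The basic idea of pooling per-pixel appearance correlates into image-level predictors can be illustrated with a short sketch. The code below is a minimal illustration, not the study's implementation: it uses CIELAB lightness L* and chroma C*ab (rather than the CIECAM02 correlates the study found most accurate) and assumes sRGB input under a D65 white point.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIELAB (D65)."""
    # Undo the sRGB transfer function (linearize).
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # sRGB -> XYZ matrix (D65 white point).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # Normalize by the D65 white and apply the CIELAB nonlinearity.
    xyz /= np.array([0.9505, 1.0, 1.089])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return L, a, b

def image_brightness_colorfulness(rgb):
    """Predict image brightness and colorfulness as mean L* and mean chroma."""
    L, a, b = srgb_to_lab(rgb)
    chroma = np.hypot(a, b)
    return L.mean(), chroma.mean()
```

Averaging over all pixels is the simplest of the pixel-handling strategies one could design; the study compared three such methods.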
Conference Paper
Full-text available
In this paper we introduce a new light reflection model for image synthesis based on experimental studies of surface gloss perception. To develop the model, we've conducted two experiments that explore the relationships between the physical parameters used to describe the reflectance properties of glossy surfaces and the perceptual dimensions of glossy appearance. In the first experiment we use multidimensional scaling techniques to reveal the dimensionality of gloss perception for simulated painted surfaces. In the second experiment we use magnitude estimation methods to place metrics on these dimensions that relate changes in apparent gloss to variations in surface reflectance properties. We use the results of these experiments to rewrite the parameters of a physically-based light reflection model in perceptual terms. The result is a new psychophysically-based light reflection model where the dimensions of the model are perceptually meaningful, and variations along the dimensions are perceptually uniform. We demonstrate that the model can facilitate describing surface gloss in graphics rendering applications. This work represents a new methodology for developing light reflection models for image synthesis.
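
As an illustration of what such a perceptual reparameterization looks like, the sketch below implements the mapping reported in this line of work between Ward model parameters and two perceptual gloss dimensions, contrast gloss c and distinctness-of-image gloss d. This is a minimal sketch assuming the published Ward-model form (c = ∛(ρs + ρd/2) − ∛(ρd/2), d = 1 − α); the function names are illustrative, not the paper's code.

```python
import numpy as np

def ward_to_perceptual(rho_d, rho_s, alpha):
    """Map Ward model parameters to perceptual gloss dimensions.

    rho_d: diffuse reflectance; rho_s: specular reflectance;
    alpha: specular roughness (lobe spread).
    Returns (c, d): contrast gloss and distinctness-of-image gloss.
    """
    c = np.cbrt(rho_s + rho_d / 2.0) - np.cbrt(rho_d / 2.0)  # contrast gloss
    d = 1.0 - alpha                                          # distinctness-of-image gloss
    return c, d

def perceptual_to_ward(c, d, rho_d):
    """Invert the mapping for a fixed diffuse reflectance."""
    alpha = 1.0 - d
    rho_s = (c + np.cbrt(rho_d / 2.0)) ** 3 - rho_d / 2.0
    return rho_s, alpha
```

The inverse mapping is what makes the model useful in rendering applications: an artist can specify gloss in perceptually uniform (c, d) coordinates and recover physical parameters for the renderer.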
Conference Paper
Full-text available
In this paper we introduce functional difference predictors (FDPs), a new class of perceptually-based image difference metrics that predict how image errors affect the ability to perform visual tasks using the images. To define the properties of FDPs, we conduct a psychophysical experiment that focuses on two visual tasks: spatial layout and material estimation. In the experiment we introduce errors in the positions and contrasts of objects reflected in glossy surfaces and ask subjects to make layout and material judgments. The results indicate that layout estimation depends only on positional errors in the reflections and material estimation depends only on contrast errors. These results suggest that in many task contexts, large visible image errors may be tolerated without loss in task performance, and that FDPs may be better predictors of the relationship between errors and performance than current visible difference predictors (VDPs).
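
A functional difference predictor of this kind would need separate measures of positional and contrast error in the reflections. The sketch below shows one plausible pair of such measures; the specific metrics (RMS contrast change, cross-correlation peak shift) and function names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast of a grayscale image (floats in [0, 1])."""
    return img.std() / max(img.mean(), 1e-8)

def contrast_error(reference, distorted):
    """Relative change in reflection contrast: the cue material
    estimation was found to depend on."""
    return abs(rms_contrast(distorted) - rms_contrast(reference)) / rms_contrast(reference)

def positional_error(reference, distorted):
    """Shift (in pixels) between reflections, estimated from the
    cross-correlation peak: the cue layout estimation depends on."""
    ref = reference - reference.mean()
    dis = distorted - distorted.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(dis))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return float(np.hypot(*shifts))
```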
Article
Full-text available
Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
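
The index itself is compact enough to sketch. Below is a minimal single-scale Python version using the published formula and constants (C1 = (0.01L)², C2 = (0.03L)²); the reference MATLAB implementation at the URL above uses an 11x11 Gaussian window, which is approximated here with scipy's gaussian_filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim(x, y, data_range=1.0, sigma=1.5):
    """Mean Structural Similarity (SSIM) index between two grayscale images.

    Local means, variances, and covariance are estimated with a Gaussian
    window, then combined into the luminance/contrast/structure product.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x = gaussian_filter(x, sigma)
    mu_y = gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()
```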
Article
Efficient, realistic rendering of complex scenes is one of the grand challenges in computer graphics. Perceptually based rendering addresses this challenge by taking advantage of the limits of human vision. However, existing methods, based on predicting visible image differences, are too conservative because some kinds of image differences do not matter to human observers. In this paper, we introduce the concept of visual equivalence, a new standard for image fidelity in graphics. Images are visually equivalent if they convey the same impressions of scene appearance, even if they are visibly different. To understand this phenomenon, we conduct a series of experiments that explore how object geometry, material, and illumination interact to provide information about appearance, and we characterize how two kinds of transformations on illumination maps (blurring and warping) affect these appearance attributes. We then derive visual equivalence predictors (VEPs): metrics for predicting when images rendered with transformed illumination maps will be visually equivalent to images rendered with reference maps. We also run a confirmatory study to validate the effectiveness of these VEPs for general scenes. Finally, we show how VEPs can be used to improve the efficiency of two rendering algorithms: Lightcuts and precomputed radiance transfer. This work represents some promising first steps towards developing perceptual metrics based on higher-order aspects of visual coding.
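
Of the two illumination-map transformations studied, blurring is the simpler to sketch. The code below is an illustrative simplification, not the paper's method: it blurs an equirectangular map in image space, wrapping horizontally (longitude) and clamping vertically (latitude), whereas a full implementation would blur on the sphere.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_illumination_map(env_map, sigma):
    """Blur an equirectangular HxWx3 illumination map, one of the two
    transformations (blurring, warping) whose appearance effects the
    paper characterizes."""
    return np.stack([gaussian_filter(env_map[..., c], sigma,
                                     mode=('nearest', 'wrap'))
                     for c in range(env_map.shape[-1])], axis=-1)
```

A VEP for this transformation would then predict, from sigma and scene attributes such as surface roughness, whether renderings lit by the blurred map remain visually equivalent to the reference renderings.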
Article
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
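
The information-ratio idea at the core of the measure can be sketched compactly. The code below is a heavily simplified single-scale, pixel-domain approximation: the published VIF operates on wavelet subbands with a Gaussian scale mixture source model, so this is only an illustration of comparing the information the distorted image preserves about the scene against the information in the reference. The window size and the neural-noise variance are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vif_pixel(reference, distorted, sigma=1.5, sigma_n2=2.0, eps=1e-10):
    """Simplified single-scale, pixel-domain approximation of the Visual
    Information Fidelity (VIF) measure. sigma_n2 models HVS 'neural
    noise' variance; local statistics use a Gaussian window."""
    mu_r = gaussian_filter(reference, sigma)
    mu_d = gaussian_filter(distorted, sigma)
    var_r = np.maximum(gaussian_filter(reference ** 2, sigma) - mu_r ** 2, 0)
    var_d = np.maximum(gaussian_filter(distorted ** 2, sigma) - mu_d ** 2, 0)
    cov = gaussian_filter(reference * distorted, sigma) - mu_r * mu_d
    g = cov / (var_r + eps)                  # local gain of the distortion channel
    var_v = np.maximum(var_d - g * cov, 0)   # local additive-noise variance
    num = np.log2(1 + g ** 2 * var_r / (var_v + sigma_n2)).sum()
    den = np.log2(1 + var_r / sigma_n2).sum()
    return num / den
```

A value near 1 indicates the distorted image carries nearly all of the reference information; values above 1 can occur for distortions (such as contrast enhancement) that the measure treats as improving visual information.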