James A. Ferwerda

Rochester Institute of Technology, Rochester, New York, United States

Publications (45) · 5.82 Total Impact Points

  • James A. Ferwerda
    ABSTRACT: In this paper we describe our efforts to create a surface display system that produces realistic representations of real-world surfaces. Our system, based on a desktop PC with GPU hardware, LCD display, light and position sensors, and custom graphics software, supports the photometrically accurate and visually realistic real-time simulation and display of surfaces with complex color, gloss, and textural properties in real-world lighting environments, and allows users to interact with and manipulate these virtual surfaces as naturally as if they were real ones. We explain the system design, illustrate its capabilities, describe experiments we conducted to validate its fidelity, and discuss potential applications in material appearance research and computer-aided appearance design.
    Vision Research. 11/2014;
  • ABSTRACT: In this paper, we describe a series of psychophysical experiments to quantify the relationships between anti-glare (AG) glass treatments and perceived sparkle in emissive displays. The experiments show the following: (1) a new measure, pixel power deviation, correlates well with perceived sparkle; (2) for a given AG treatment, sparkle is worse on high-pitch displays; (3) tests of sparkle using small samples provide a conservative bound on perceived sparkle in display-sized samples; and (4) sparkle visibility is affected by the content of displayed images. The goal of these efforts is to enable the development of AG glass treatments for emissive displays that effectively reduce front surface reflections while preserving high image quality.
    Journal of the Society for Information Display. 07/2014;
  • ABSTRACT: Relief printing technology developed by Océ allows the superposition of several layers of colorant on different types of media, creating a variation in surface height defined by the input to the printer. Evaluating the reproduction accuracy of distinct surface characteristics is of great importance to the application of the relief printing system, so it is necessary to develop quality metrics to evaluate the relief process. In this paper, we focus on the third dimension of relief printing, i.e., height information. To achieve this goal, we define metrics and develop models that evaluate relief prints in two respects: overall fidelity and surface finish. To characterize overall fidelity, three metrics are calculated: the modulation transfer function (MTF), the difference and root-mean-squared error (RMSE) between the input height map and the scanned height map, and print surface angle accuracy. For the surface finish property, we measure surface roughness, generate surface normal maps, and develop a light reflection model that simulates the differences between ideal prints and real prints that may be perceived by human observers. Three sets of test targets are designed and printed by the Océ relief printer prototypes for the calculation of the above metrics: (i) a twisted target, (ii) a sinusoidal wave target, and (iii) a ramp target. The results provide quantitative evaluations of printing quality in the third dimension and demonstrate that the height of relief prints is reproduced accurately with respect to the input design. The factors that affect printing quality include the printing direction, the frequency and amplitude of the input signal, and the shape of the relief prints. Two additional aspects influence the viewing experience of relief prints: lighting conditions and viewing angle. (A sketch of the height-map metrics follows this entry.)
    02/2014;
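    A minimal sketch of two of the height-map fidelity measures described above: RMSE between input and scanned height maps, and a surface normal map derived from the scanned heights. The array layout, units, and pixel pitch are illustrative assumptions, not values from the paper:

        import numpy as np

        def height_rmse(input_height, scanned_height):
            # Root-mean-squared error between two same-shape height maps (mm).
            diff = scanned_height - input_height
            return np.sqrt(np.mean(diff ** 2))

        def normal_map(height, pixel_pitch_mm=0.1):
            # Per-pixel unit surface normals via finite differences.
            gy, gx = np.gradient(height, pixel_pitch_mm)
            n = np.dstack((-gx, -gy, np.ones_like(height)))
            return n / np.linalg.norm(n, axis=2, keepdims=True)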
  • James A. Ferwerda
    ABSTRACT: Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image-plane properties and more accurately match the gloss of the objects represented by the different-quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.
    01/2014;
  • ABSTRACT: A new method for quantifying "sparkle" uses a simple measurement setup that includes a pixelated source, a test sample, and an eye simulator. The degree of sparkle is calculated from the standard deviation of the pixel powers across a portion of the display. Measurements show excellent correlation with human response studies. (A sketch of this computation follows this entry.)
    SID Symposium Digest of Technical Papers. 06/2013; 44(1).
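    An illustrative computation of the pixel-power-deviation measure described above: the standard deviation of per-pixel powers over a measured region, here normalized by the mean so the value is independent of overall luminance. The normalization and the input format (a flat array of integrated per-pixel powers from a calibrated imaging photometer) are assumptions, not the authors' specification:

        import numpy as np

        def pixel_power_deviation(pixel_powers):
            # Standard deviation of pixel powers, normalized by their mean.
            p = np.asarray(pixel_powers, dtype=float)
            return p.std() / p.mean()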
  • ABSTRACT: We describe a series of psychophysical experiments to quantify the relationships between anti-glare glass treatments and perceived sparkle in emissive displays. The experiments show that: (1) a new measure, pixel-power-deviation (PPD), correlates well with perceived sparkle; (2) tests of sparkle using small samples provide a conservative bound on perceived sparkle in display-sized samples; and (3) sparkle visibility is affected by the content of displayed images.
    SID Symposium Digest of Technical Papers. 06/2013; 44(1).
  • James A. Ferwerda, Benjamin A. Darling
    ABSTRACT: In this paper we describe our efforts to create tangible imaging systems that provide rich virtual representations of real-world surfaces. Tangible imaging systems have three main properties: (1) the images produced must be visually realistic; (2) the images must be responsive to user interaction; and (3) the images must be situated, appearing to be integrated with their environments. Our current system, based on a computer, an LCD display, light and position sensors, and graphics rendering tools, meets all these requirements, supporting the accurate simulation of the appearances of surfaces with complex textures and material properties and allowing users to interact with and experience these virtual surfaces as if they were real ones. We first describe the components of our current system and its implementation. We then illustrate the system's capabilities for simulating the appearances and behaviors of real-world surfaces. Finally, we describe some potential applications of tangible imaging systems and discuss limitations and future work.
    Proceedings of the 4th international conference on Computational Color Imaging; 03/2013
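    A minimal sketch of the core view-dependent step a system like this needs: given the tracked display surface normal and the viewer's position, compute the mirror reflection direction used to look up the lighting environment. The vector conventions and function names are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def reflection_direction(surface_point, surface_normal, eye_position):
            # R = V - 2(V.N)N, with V the unit view vector toward the surface.
            n = surface_normal / np.linalg.norm(surface_normal)
            v = surface_point - eye_position
            v = v / np.linalg.norm(v)
            return v - 2.0 * np.dot(v, n) * n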
  • Jaroslav Krivánek, James A. Ferwerda, Kavita Bala
    ABSTRACT: Rendering applications in design, manufacturing, e-commerce, and other fields are used to simulate the appearance of objects and scenes. Fidelity with respect to appearance is often critical, and calculating global illumination (GI) is an important contributor to image fidelity, but it is expensive to compute. GI approximation methods, such as virtual point light (VPL) algorithms, are efficient, but they can induce image artifacts and distortions of object appearance. In this paper we systematically study the perceptual effects that the global illumination approximations made by VPL algorithms have on image quality and material appearance. In a series of psychophysical experiments we investigate the relationships between rendering parameters, object properties, and image fidelity in a VPL renderer. Using the results of these experiments we analyze how VPL counts and energy clamping levels affect the visibility of image artifacts and distortions of material appearance, and show how object geometry and material properties modulate these effects. We find the ranges of these parameters that produce VPL renderings that are visually equivalent to reference renderings. Further, we identify classes of shapes and materials that cannot be accurately rendered using VPL methods with limited resources. Using these findings we propose simple heuristics to guide visually equivalent and efficient rendering, and present a method for correcting energy losses in VPL renderings. This work provides a strong perceptual foundation for a popular and efficient class of GI algorithms.
    ACM Transactions on Graphics 07/2010; 29. · 3.36 Impact Factor
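    A hedged sketch of the VPL mechanism the study analyzes: indirect light at a shading point is gathered from a set of virtual point lights, and each light's geometry term is clamped to suppress the bright spikes that appear near VPLs (the source of the energy loss the paper proposes correcting). This is a generic diffuse-only formulation with illustrative names, not the renderer used in the paper:

        import numpy as np

        def gather_vpls(x, n, albedo, vpls, clamp):
            # Diffuse radiance at point x (unit normal n) from VPLs given as
            # (position, unit normal, RGB flux) tuples; `clamp` bounds the
            # geometry term to tame the 1/d^2 singularity near each VPL.
            total = np.zeros(3)
            for p, pn, flux in vpls:
                w = p - x
                d2 = np.dot(w, w)
                w = w / np.sqrt(d2)
                g = max(np.dot(n, w), 0.0) * max(np.dot(pn, -w), 0.0) / d2
                total += flux * min(g, clamp)
            return albedo / np.pi * total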
  • James A. Ferwerda
    ABSTRACT: In his book Understanding Media, social theorist Marshall McLuhan declared: "The medium is the message." The thesis of this paper is that, with respect to image quality, imaging system developers have taken McLuhan's dictum too much to heart. Efforts focus on improving the technical specifications of the media (e.g., dynamic range, color gamut, resolution, temporal response) with little regard for the visual messages the media will be used to communicate. We present a series of psychophysical studies that investigate the visual system's ability to "see through" the limitations of imaging media to perceive the messages (object and scene properties) the images represent. The purpose of these studies is to understand the relationships between the signal characteristics of an image and the fidelity of the visual information the image conveys. The results of these studies provide a new perspective on image quality, showing that images that may be very different in "quality" can be visually equivalent as realistic representations of objects and scenes.
    Human Vision and Electronic Imaging XV - part of the IS&T-SPIE Electronic Imaging Symposium, San Jose, CA, USA, January 18-21, 2010, Proceedings; 01/2010
  • Jonathan B. Phillips, James A. Ferwerda, Ann Nunziata
    ABSTRACT: Human observers are able to make fine discriminations of surface gloss. What cues are they using to perform this task? In previous studies, we identified two reflection-related cues, the contrast of the reflected image (c, contrast gloss) and the sharpness of the reflected image (d, distinctness-of-image gloss), but these were for objects rendered in standard dynamic range (SDR) images with compressed highlights. In ongoing work, we are studying the effects of image dynamic range on perceived gloss, comparing high dynamic range (HDR) images with accurate reflections and SDR images with compressed reflections. In this paper, we first present the basic findings of this gloss discrimination study, then present an analysis of eye movement recordings that show where observers were looking during the gloss discrimination task. The results indicate that: (1) image dynamic range has a significant influence on perceived gloss, with surfaces presented in HDR images being seen as glossier and more discriminable than their SDR counterparts; (2) observers look at both light source highlights and environmental interreflections when judging gloss; and (3) both of these results are modulated by surface geometry and scene illumination.
    Human Vision and Electronic Imaging XV - part of the IS&T-SPIE Electronic Imaging Symposium, San Jose, CA, USA, January 18-21, 2010, Proceedings; 01/2010
  • Benjamin A. Darling, James A. Ferwerda
    ABSTRACT: When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
    Human Vision and Electronic Imaging XV - part of the IS&T-SPIE Electronic Imaging Symposium, San Jose, CA, USA, January 18-21, 2010, Proceedings; 01/2010
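    A sketch of the orientation step described above: with the device approximately at rest, the gravity vector measured by the accelerometer yields the display's pitch and roll. The axis conventions are a common choice and an assumption here, as is ignoring dynamic acceleration:

        import math

        def tilt_from_accelerometer(ax, ay, az):
            # Pitch and roll (radians) from a static accelerometer reading.
            pitch = math.atan2(-ax, math.hypot(ay, az))
            roll = math.atan2(ay, az)
            return pitch, roll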
  • Stefan Luka, James A. Ferwerda
    ABSTRACT: In dual layer high dynamic range displays, images are formed through the optical modulation of a front panel image by a backlight image. We present a colorimetric method for calculating the backlight image that maximizes gamut utilization while minimizing quantization errors, producing displayed images with deeper, more saturated colors.
    SID Symposium Digest of Technical Papers. 06/2009; 40(1).
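    An illustrative baseline (not the authors' colorimetric method) for the dual-layer factorization described above: the displayed image is the product of the backlight and front-panel images, so a common starting point sets the backlight to the square root of the target luminance and lets the front panel supply the residual, keeping both layers mid-range to limit quantization error:

        import numpy as np

        def factorize(target_luminance, eps=1e-6):
            # Split a target luminance image (normalized to 0..1) into
            # backlight and front-panel images whose product reproduces it.
            backlight = np.sqrt(target_luminance)
            front = target_luminance / np.maximum(backlight, eps)
            return backlight, front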
  • ABSTRACT: How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment in which subjects judged the relative complexity of 21 high-resolution scenes rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the pooled subject responses using multidimensional scaling. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material/lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and found that they correlated only weakly with the perceptual ordering. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
    Proc SPIE 01/2008;
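    A sketch of the analysis path described above: pooled pairwise complexity judgments are converted into a dissimilarity matrix and embedded in two dimensions with multidimensional scaling. The random matrix below is a stand-in for the experimental data:

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        d = rng.random((21, 21))          # stand-in dissimilarities, 21 scenes
        d = (d + d.T) / 2.0               # symmetrize
        np.fill_diagonal(d, 0.0)

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        embedding = mds.fit_transform(d)  # (21, 2): one point per scene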
  • Ganesh Ramanarayanan, Kavita Bala, James A. Ferwerda
    ABSTRACT: Aggregates of individual objects, such as forests, crowds, and piles of fruit, are a common source of complexity in computer graphics scenes. When viewing an aggregate, observers attend less to individual objects and focus more on overall properties such as numerosity, variety, and arrangement. Paradoxically, rendering and modeling costs increase with aggregate complexity, exactly when observers are attending less to individual objects. In this paper we take some first steps to characterize the limits of visual coding of aggregates to efficiently represent their appearance in scenes. We describe psychophysical experiments that explore the roles played by the geometric and material properties of individual objects in observers' abilities to discriminate different aggregate collections. Based on these experiments we derive metrics to predict when two aggregates have the same appearance, even when composed of different objects. In a follow-up experiment we confirm that these metrics can be used to predict the appearance of a range of realistic aggregates. Finally, as a proof-of-concept we show how these new aggregate perception metrics can be applied to simplify scenes by allowing substitution of geometrically simpler aggregates for more complex ones without changing appearance.
    ACM Trans. Graph. 01/2008; 27.
  • ABSTRACT: Efficient, realistic rendering of complex scenes is one of the grand challenges in computer graphics. Perceptually based rendering addresses this challenge by taking advantage of the limits of human vision. However, existing methods, based on predicting visible image differences, are too conservative because some kinds of image differences do not matter to human observers. In this paper, we introduce the concept of visual equivalence, a new standard for image fidelity in graphics. Images are visually equivalent if they convey the same impressions of scene appearance, even if they are visibly different. To understand this phenomenon, we conduct a series of experiments that explore how object geometry, material, and illumination interact to provide information about appearance, and we characterize how two kinds of transformations on illumination maps (blurring and warping) affect these appearance attributes. We then derive visual equivalence predictors (VEPs): metrics for predicting when images rendered with transformed illumination maps will be visually equivalent to images rendered with reference maps. We also run a confirmatory study to validate the effectiveness of these VEPs for general scenes. Finally, we show how VEPs can be used to improve the efficiency of two rendering algorithms: Lightcuts and precomputed radiance transfer. This work represents some promising first steps toward developing perceptual metrics based on higher-order aspects of visual coding.
    ACM Trans. Graph. 01/2007; 26:76.
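    One of the two illumination-map transformations studied above, blurring, sketched as a Gaussian filter over an equirectangular environment map. The wrap mode and sigma are illustrative choices, not the paper's parameters:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blur_illumination_map(env_map, sigma=8.0):
            # Blur an HxWx3 map; leave channels (axis 2) unfiltered and wrap
            # at the borders (exact only in longitude for equirectangular maps).
            return gaussian_filter(env_map, sigma=(sigma, sigma, 0), mode="wrap")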
  • Piti Irawan, James A. Ferwerda, Stephen R. Marschner
    ABSTRACT: This paper presents a new perceptually based tone mapping operator that represents scene visibility under time-varying, high dynamic range conditions. The operator is based on a new generalized threshold model that extends the conventional threshold-versus-intensity (TVI) function to account for the viewer's adaptation state, and a new temporal adaptation model that includes fast and slow neural mechanisms as well as photopigment bleaching. These new visual models allow the operator to produce tone-mapped image streams that represent the loss of visibility experienced under changing illumination conditions and in high dynamic range scenes. By varying the psychophysical data that the models use, we simulate the differences in scene visibility experienced by normal and visually impaired observers.
    Proceedings of the Eurographics Symposium on Rendering Techniques, Konstanz, Germany, June 29 - July 1, 2005; 01/2005
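    A heavily simplified stand-in for the operator's core idea (not the paper's generalized threshold model): display luminances are scaled so that world-space just-noticeable differences, given by a TVI function at the viewer's adaptation luminance, map to display JNDs. The toy TVI below assumes simple Weber-like behavior with a dark-adapted floor; all constants are illustrative:

        import numpy as np

        def tvi(l_adapt):
            # Toy TVI: detection threshold proportional to adaptation
            # luminance at photopic levels, flattening in the dark.
            return np.maximum(0.01, 0.02 * l_adapt)

        def tone_map(world_lum, world_adapt, display_adapt=50.0, display_max=100.0):
            # Linear scale factor matching world JNDs to display JNDs
            # (a Ward-style contrast-preserving mapping), in cd/m^2.
            m = tvi(display_adapt) / tvi(world_adapt)
            return np.clip(m * world_lum, 0.0, display_max)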
  • ABSTRACT: The goal of this project was to determine whether advanced rendering methods such as global illumination allow more accurate discrimination of shape differences than standard rendering methods such as OpenGL. To address this question, we conducted two psychophysical experiments to measure observers' sensitivity to shape differences between a physical model and rendered images of the model. Two results stand out: (1) the rendering method used has a significant effect on the ability to discriminate shape; in particular, under the conditions tested, global illumination rendering improves sensitivity to shape differences; and (2) viewpoint appears to have an effect on the ability to discriminate shape; in most of the cases studied, sensitivity to small shape variations was poorer when the rendering and model viewpoints were different. The results of this work have important implications for our understanding of human shape perception and for the development of rendering tools for computer-aided design.
    Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization, APGV 2004, Los Angeles, California, USA, August 7-8, 2004; 01/2004
  • ABSTRACT: In this paper we introduce a new perceptual metric for efficient, high-quality, global illumination rendering. The metric is based on a rendering-by-components framework in which the direct and indirect diffuse, glossy, and specular light transport paths are separately computed and then composited to produce an image. The metric predicts the perceptual importances of the computationally expensive indirect illumination components with respect to image quality. To develop the metric we conducted a series of psychophysical experiments in which we measured and modeled the perceptual importances of the components. An important property of this new metric is that it predicts component importances from inexpensive estimates of the reflectance properties of a scene, and therefore adds negligible overhead to the rendering process. This perceptual metric should enable the development of an important new class of efficient global-illumination rendering systems that can intelligently allocate limited computational resources to provide high-quality images at interactive rates.
    ACM Trans. Graph. 01/2004; 23:742-749.
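    A sketch of the compositing step in the rendering-by-components framework described above, plus a toy illustration of using predicted component importances to allocate a fixed sample budget. The weights and function names are placeholders, not the paper's fitted model:

        import numpy as np

        def composite(direct, ind_diffuse, ind_glossy, ind_specular):
            # Final image = direct + indirect components (HxWx3 arrays).
            return direct + ind_diffuse + ind_glossy + ind_specular

        def allocate_budget(importances, total_samples):
            # Split a sample budget across components by predicted importance.
            w = np.asarray(importances, dtype=float)
            return np.round(total_samples * w / w.sum()).astype(int)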
  • James A. Ferwerda, Fabio Pellacini
    ABSTRACT: In this paper we introduce Functional Difference Predictors (FDPs), a new class of perceptually based image difference metrics that predict how image errors affect the ability to perform visual tasks using the images. To define the properties of FDPs, we conduct a psychophysical experiment that focuses on two visual tasks: spatial layout estimation and material estimation. In the experiment we introduce errors in the positions and contrasts of objects reflected in glossy surfaces and ask subjects to make layout and material judgments. The results indicate that layout estimation depends only on positional errors in the reflections and material estimation depends only on contrast errors. These results suggest that in many task contexts, large visible image errors may be tolerated without loss in task performance, and that FDPs may be better predictors of the relationship between errors and performance than current Visible Difference Predictors (VDPs).
    12/2003;

Publication Stats

2k Citations
5.82 Total Impact Points

Institutions

  • 2008–2014
    • Rochester Institute of Technology
      • Chester F. Carlson Center for Imaging Science
      Rochester, New York, United States
  • 1996–2007
    • Cornell University
      • Department of Computer Science
      Ithaca, New York, United States