William B. Thompson’s research while affiliated with University of Utah and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (102)


Workflow of the DeVAS software
A: the original image of a step rendered by the Radiance software. B: the image filtered to simulate severe low vision (VA 1.55 logMAR, CS 0.6 Pelli-Robson). C: luminance boundaries extracted from the filtered image. D: pixels representing geometric edges inferred from the 3D data map of the space. E: estimation of hazard visibility, based on a match between the luminance contours in C and the geometric edges in D; color coding represents the closeness of the match, ranging from red (poor match) to green (good match). F: a manually defined Region of Interest (ROI). G: the conjunction of E and F, which specifies the hazard region of primary focus and is used to generate the final Hazard Visibility Score (HVS).
Geometry, lighting, and viewpoint variation of stimuli
The top row used the lighting setting “spotlight 1” and the viewpoint setting “center” to demonstrate the five target types: flat, big step-up, big step-down, small step-up, and small step-down. The middle row used big step-down and the center viewpoint to show the five lighting variations: overhead, far panel, near panel, spotlight 1, and spotlight 2. The bottom row used big step-down and spotlight 1 to show the five viewpoints: center, pivot left, pivot right, rotate down, and rotate up.
Distribution of trials in ten HVS bins, each 0.1 wide over the zero-to-one range
The upper panel shows the trial distribution for moderate blur (1.2 logMAR) and the lower panel shows the distribution for severe blur (1.6 logMAR).
Logistic regression model of aggregated data from subjects viewing with artificial blur
Top: moderate blur (seven subjects), mean acuity 1.2 logMAR. Bottom: severe blur (seven subjects), mean acuity 1.6 logMAR. The red line shows the logistic regression function, transformed as shown in Eq 2 of the paper (an illustrative form appears after these captions); the gray area represents the 95% confidence intervals.
Histograms presenting correct and incorrect trials in each 0.1-wide bin of the HVS, accumulated across the seven subjects in each blur group
The upper panel shows the distribution for moderate blur group trials, and the lower panel shows the distribution for severe blur group trials.
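The caption above refers to a logistic regression function "transformed as shown in Eq 2"; that equation is not reproduced on this listing. Purely as a hedged illustration, a chance-corrected logistic function for a five-alternative identification task, with guessing rate γ and the Hazard Visibility Score (HVS) as the predictor, could take a form such as:

```latex
% Hypothetical form for illustration only; not the paper's Eq 2.
P(\text{correct} \mid \mathrm{HVS}) = \gamma + (1 - \gamma)\,
    \frac{1}{1 + e^{-(\beta_0 + \beta_1 \cdot \mathrm{HVS})}},
\qquad \gamma = 0.2 \ \text{for five alternatives}.
```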


Validating a model of architectural hazard visibility with low-vision observers
  • Article
  • Full-text available

November 2021 · 58 Reads · 2 Citations · Yichen Liu · [...]
Pedestrians with low vision are at risk of injury when hazards, such as steps and posts, have low visibility. This study aims to validate the software implementation of a computational model that estimates hazard visibility. The model takes as input a photorealistic 3D rendering of an architectural space, and the acuity and contrast sensitivity of a low-vision observer, and outputs estimates of the visibility of hazards in the space. Our experiments explored whether the model could predict the likelihood of observers correctly identifying hazards. In Experiment 1, we tested fourteen normally sighted subjects with blur goggles that simulated moderate or severe acuity reduction. In Experiment 2, we tested ten low-vision subjects with moderate to severe acuity reduction. Subjects viewed computer-generated images of a walkway containing five possible targets ahead—big step-up, big step-down, small step-up, small step-down, or a flat continuation. Each subject saw these stimuli with variations of lighting and viewpoint in 250 trials and indicated which of the five targets was present. The model generated a score on each trial that estimated the visibility of the target. If the model is valid, the scores should be predictive of how accurately the subjects identified the targets. We used logistic regression to examine the correlation between the scores and the participants’ responses. For twelve of the fourteen normally sighted subjects with artificial acuity reduction and all ten low-vision subjects, there was a significant relationship between the scores and each participant’s probability of correct identification. These experiments provide evidence for the validity of a computational model that predicts the visibility of architectural hazards. This work lays the foundation for future validation of this hazard evaluation tool, which may be useful for architects to assess the visibility of hazards in their designs, thereby enhancing the accessibility of spaces for people with low vision.
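As a minimal sketch of the kind of trial-level analysis described above (not the authors' code; the data below are synthetic placeholders), a logistic regression of correct/incorrect responses on the per-trial Hazard Visibility Score can be fit as follows:

```python
# Minimal sketch of the validation analysis described above; not the authors' code.
# Each trial has a Hazard Visibility Score (HVS) in [0, 1] and a binary
# correct/incorrect response; we test whether HVS predicts correct identification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hvs = rng.uniform(0.0, 1.0, 250)                 # 250 trials (synthetic scores)
p_correct = 0.2 + 0.75 * hvs                     # 0.2 = chance for five alternatives
correct = (rng.uniform(size=hvs.size) < p_correct).astype(int)

X = sm.add_constant(hvs)                         # intercept + HVS predictor
fit = sm.Logit(correct, X).fit(disp=False)
print(fit.summary())                             # a slope reliably > 0 supports the model
```

A significantly positive slope on HVS corresponds to the per-subject result reported above.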


Shedding Light on Cast Shadows: An Investigation of Perceived Ground Contact in AR and VR

July 2021 · 357 Reads · 23 Citations

IEEE Transactions on Visualization and Computer Graphics

Virtual objects in augmented reality (AR) often appear to float atop real-world surfaces, which makes it difficult to determine where they are positioned in space. This is problematic as many applications for AR require accurate spatial perception. In the current study, we examine how the way we render cast shadows, which act as an important monocular depth cue for creating a sense of contact between an object and the surface beneath it, impacts spatial perception. Over two experiments, we evaluate people's sense of surface contact given both traditional and non-traditional shadow shading methods in optical see-through augmented reality (OST AR), video see-through augmented reality (VST AR), and virtual reality (VR) head-mounted displays. Our results provide evidence that non-traditional shading techniques for rendering shadows in AR displays may enhance the accuracy of one's perception of surface contact. This finding implies a possible tradeoff between photorealism and accuracy of depth perception, especially in OST AR displays. However, it also supports the use of more stylized graphics, such as non-traditional cast shadows, to improve perception and interaction in AR applications.


Evaluating the Visibility of Architectural Features for People with Low Vision – A Quantitative Approach

April 2021 · 34 Reads · 2 Citations

LEUKOS: The Journal of the Illuminating Engineering Society of North America

Most people with low vision rely on their remaining functional vision for mobility. Our goal is to provide tools to help design architectural spaces in which safe and effective mobility is possible for those with low vision – spaces that we refer to as visually accessible. We describe an approach that starts with a 3D CAD model of a planned space and produces labeled images indicating whether or not structures that are potential mobility hazards are visible at a particular level of low vision. There are two main parts to the analysis. The first, previously described, represents low-vision status by filtering a calibrated luminance image generated from the CAD model and associated lighting and materials information, producing a new image with unseen detail removed. The second part, described in this paper, uses these filtered images together with information about the geometry of the space obtained from the CAD model and the related lighting and surface-material specifications to produce a quantitative estimate of the likelihood of particular hazards being visible. We provide examples of the workflow required, a discussion of the novelty and implications of the approach, and a short discussion of needed future work.
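A minimal sketch, under assumed inputs, of the kind of quantitative visibility estimate described above: geometric edges derived from the CAD model are compared against luminance boundaries that survive the low-vision filtering, and the fraction of hazard-edge pixels with a nearby luminance boundary yields a score. The function name, matching tolerance, and scoring rule are illustrative assumptions, not the DeVAS implementation.

```python
# Illustrative sketch only; not the DeVAS implementation described in the paper.
import numpy as np
from scipy.ndimage import distance_transform_edt

def hazard_visibility_score(luminance_edges, geometric_edges, roi, max_dist=3.0):
    """luminance_edges, geometric_edges, roi: boolean 2D arrays of equal shape.
    max_dist is an assumed matching tolerance in pixels, not a value from the paper."""
    # Distance from every pixel to the nearest luminance-boundary pixel.
    dist_to_lum = distance_transform_edt(~luminance_edges)
    # Geometric (hazard) edge pixels restricted to the region of interest.
    hazard_pixels = geometric_edges & roi
    if not hazard_pixels.any():
        return 0.0
    # Fraction of hazard-edge pixels matched by a nearby luminance boundary.
    return float((dist_to_lum[hazard_pixels] <= max_dist).mean())
```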


Going the distance and beyond: simulated low vision increases perception of distance traveled during locomotion

October 2019 · 125 Reads · 15 Citations

Psychological Research

In a series of experiments, we tested the hypothesis that severely degraded viewing conditions during locomotion distort the perception of distance traveled. Some research suggests that there is little-to-no systematic error in perceiving closer distances from a static viewpoint with severely degraded acuity and contrast sensitivity (which we will refer to as blur). However, several related areas of research—extending across domains of perception, attention, and spatial learning—suggest that degraded acuity and contrast sensitivity would affect estimates of distance traveled during locomotion. In a first experiment, we measured estimations of distance traveled in a real-world locomotion task and found that distances were overestimated with blur compared to normal vision using two measures: verbal reports and visual matching (Experiments 1a, 1b, and 1c). In Experiment 2, participants indicated their estimate of the length of a previously traveled path by actively walking an equivalent distance in a viewing condition that either matched their initial path (e.g., blur/blur) or did not match (e.g., blur/normal). Overestimation in blur was found only when participants learned the path in blur and made estimates in normal vision (not in matched blur learning/judgment trials), further suggesting a reliance on dynamic visual information in estimates of distance traveled. In Experiment 3, we found evidence that perception of speed is similarly affected by the blur vision condition, showing an overestimation in perception of speed experienced in wheelchair locomotion during blur compared to normal vision. Taken together, our results demonstrate that severely degraded acuity and contrast sensitivity may increase people’s tendency to overestimate perception of distance traveled, perhaps because of an increased perception of speed of self-motion.


The Powerful Influence of Marks: Visual and Knowledge-Driven Processing in Hurricane Track Displays

September 2019 · 30 Reads · 33 Citations

Journal of Experimental Psychology: Applied

Given the widespread use of visualizations to communicate hazard risks, forecast visualizations must be as easy to interpret as possible. However, despite incorporating best practices, visualizations can influence viewer judgments in ways that the designers did not anticipate. Visualization designers should understand the full implications of visualization techniques and seek to develop visualizations that account for the complexities in decision-making. The current study explores the influence of visualizations of uncertainty by examining a case in which ensemble hurricane forecast visualizations produce unintended interpretations. We show that people estimate more damage to a location that is overlapped by a track in an ensemble hurricane forecast visualization compared to a location that does not coincide with a track. We find that this effect can be partially reduced by manipulating the number of hurricane paths displayed, suggesting the influence of a display's visual features on decision-making. Providing instructions about the information conveyed in the ensemble display also reduced the effect, but importantly, did not eliminate it. These findings illustrate the powerful influence of marks and their encodings on decision-making with visualizations. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Perceived distance to augmented reality images is influenced by ground-contact

September 2019 · 107 Reads

Journal of Vision

Recent advancements in augmented reality (AR) have led to the development of several applications in domains such as architecture, engineering, and medical training. Typically, these applications present users with 3D virtual images in real environments that would not easily be portrayed otherwise (e.g., floor plans, arteries, etc.). The way users perceive the scale (i.e., size, distance, etc.) of such displays is important for decision making and learning outcomes. The current study aimed to assess users’ perception of distance to AR images, which has previously been shown to be underestimated in other virtual technologies. We focused our investigation on the influence of ground contact, which is an important cue for distance perception that many AR images lack because they are presented above the ground surface. Furthermore, binocular cues should be particularly important for users to overcome the lack of ground contact in many AR images. To test both the influence of ground contact and the importance of binocular cues, we conducted a study where participants were asked to blind walk to AR cubes presented at 3 m, 4.5 m, and 6 m. Participants completed this task with cubes rendered on the ground surface or 0.2 m above the ground surface. Additionally, we had each participant perform this task under monocular and binocular viewing conditions. We found that participants blind walked farther to AR cubes presented above the ground surface and that this effect was exaggerated under monocular viewing conditions. However, we found that participants blind walked shorter distances to AR cubes presented on the ground, which was not expected. We also found underestimation of cube distance, regardless of where the cubes were presented or the viewing condition. Our results suggest that distance in AR environments is generally underestimated and that a lack of ground contact influences users’ perception of distance to AR images.



The Powerful Influence of Marks: Visual and Knowledge-Driven Processing in Hurricane Track Displays

July 2019 · 151 Reads · 3 Citations

Given the widespread use of visualizations to communicate hazard risks, forecast visualizations must be as easy to interpret as possible. However, despite incorporating best practices, visualizations can influence viewer judgments in ways that the designers did not anticipate. Visualization designers should understand the full implications of visualization techniques and seek to develop visualizations that account for the complexities in decision-making. The current study explores the influence of visualizations of uncertainty by examining a case in which ensemble hurricane forecast visualizations produce unintended interpretations. We show that people estimate more damage to a location that is overlapped by a track in an ensemble hurricane forecast visualization compared to a location that does not coincide with a track. We find that this effect can be partially reduced by manipulating the number of hurricane paths displayed, suggesting the influence of a display's visual features on decision-making. Providing instructions about the information conveyed in the ensemble display also reduced the effect, but importantly, did not eliminate it. These findings illustrate the powerful influence of marks and their encodings on decision-making with visualizations.


Figure 1: An example of an AR cube trial as it was run in the real-world laboratory space.
Figure 2: A participant viewing an AR cube with the HoloLens in the real-world laboratory space.
Mean blind-walked distances for each condition
Distance Judgments to On- and Off-Ground Objects in Augmented Reality

March 2019 · 729 Reads · 41 Citations

Augmented reality (AR) technologies have the potential to provide individuals with unique training and visualizations, but the effectiveness of these applications may be influenced by users' perceptions of the distance to AR objects. Perceived distances to AR objects may be biased if these objects do not appear to make contact with the ground plane. The current work compared distance judgments of AR targets presented on the ground versus off the ground when no additional AR depth cues, such as shadows, were available to denote ground contact. We predicted that without additional information for height off the ground, observers would perceive the off-ground objects as placed on the ground, but at farther distances. Furthermore, this bias should be exaggerated when targets were viewed with one eye rather than two. In our experiment, participants judged the absolute egocentric distance to various cubes presented on or off the ground with an action-based measure, blind walking. We found that observers walked farther for off-ground AR objects and that this effect was exaggerated when participants viewed off-ground objects with monocular vision compared to binocular vision. However, we also found that the restriction of binocular cues influenced participants' distance judgments for on-ground AR objects. Our results suggest that distances to off-ground AR objects are perceived differently than on-ground AR objects and that the elimination of binocular cues further influences how users perceive these distances.


Simulating visibility under reduced acuity and contrast sensitivity

March 2017 · 1,815 Reads · 19 Citations

Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.
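As a rough illustration of how such a simulation might be parameterized by a clinical acuity measure (not the paper's calibrated method, and ignoring contrast sensitivity), one could low-pass filter a luminance image with a cutoff derived from the logMAR value. The 30 cycles-per-degree normal-acuity reference and the 50%-attenuation cutoff criterion below are assumptions made for this sketch.

```python
# Rough illustration only; not the calibrated simulation described in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_reduced_acuity(luminance, logmar, pixels_per_degree):
    """luminance: 2D image array; logmar: clinical acuity; pixels_per_degree: viewing geometry.
    Assumes normal vision resolves ~30 cycles/degree, scaled down by 10**logmar."""
    cutoff_cpd = 30.0 / (10.0 ** logmar)          # assumed cutoff, cycles per degree
    cutoff_cpp = cutoff_cpd / pixels_per_degree   # cycles per pixel
    # A Gaussian with std sigma has frequency response exp(-2*pi^2*sigma^2*f^2);
    # pick sigma so the response falls to 50% at the cutoff frequency.
    sigma = np.sqrt(np.log(2.0) / 2.0) / (np.pi * cutoff_cpp)
    return gaussian_filter(luminance, sigma=sigma)
```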


Citations (85)


... The design should take into account visibility to ensure that the area is safe and free of hazards. [46,47] Adequate lighting and protection should be provided to ensure the safety of users. ...

Reference:

Exploring the Application of Neurostructural Principles to the Design of Public Spaces on University Campuses
Validating a model of architectural hazard visibility with low-vision observers

... Use the drag-and-drop interface of World Cast to build interactive AR content. The platform has several capabilities, including the ability to import 3D models, animation tools, picture recognition, and spatial mapping [27,31,32]. In addition, the AR authoring tools from World Cast allow users to place virtual objects, films, photos, and other digital content in the physical world [33,34]. ...

Shedding Light on Cast Shadows: An Investigation of Perceived Ground Contact in AR and VR

IEEE Transactions on Visualization and Computer Graphics

... The regression model's slope varied from individual to individual, yet all low-vision subjects and 12 out of 14 normally sighted subjects with artificially reduced acuity had slopes significantly larger than zero. These findings provide a first step in validating the approach of assessing architectural feature visibility using the computational model implemented in the DeVAS software and described by Thompson and colleagues [5]. ...

Evaluating the Visibility of Architectural Features for People with Low Vision – A Quantitative Approach
  • Citing Article
  • April 2021

LEUKOS: The Journal of the Illuminating Engineering Society of North America

... Hollerbach et al. at the University of Utah developed a 'haptic display' for grasping and manipulating virtual mechanisms (e.g., linkages and chains) [246,273] using an exoskeleton haptic device, the Sarcos Dextrous Arm Master (later upgraded to the Sarcos DTS Master Exoskeleton for subsequent work [117]). They also integrated the haptic interface with Utah's Alpha-1 geometric modeling system to enable manipulation of both polygonal (i.e., mesh) and freeform (i.e., parametric) surfaces, particularly using direct parametric tracing (DPT) [348] for tracing untrimmed and trimmed NURBS surfaces [346,347], and physics-based models (e.g., stick-slip friction [310] and nonlinear viscosity [248]) for rapid virtual prototyping [167,168]. Among other related works of the group is nonlinear device modeling for VR applications [83,84]. ...

Haptic Interfacing for Virtual Prototyping of Mechanical CAD Designs
  • Citing Conference Paper
  • September 1997

... Some previous research has made contributions towards incorporating force feedback into traditional CAD systems. For example, [1] describes a haptic interface coupled with CAD software, allowing the operator to see and feel both geometrical shapes and dynamic forces. In the work described by [2], the author formulated inverse kinematic and inverse dynamic equations involved in simulating open chain mechanisms and single closed chain mechanisms. ...

Haptic Interfacing for Virtual Prototyping of Mechanical CAD Designs
  • Citing Conference Paper
  • September 1998

... Additionally, the studies have proposed basic guidance on the optimal number of forecasts to display simultaneously to balance user prediction efficacy and trust in the forecasts. These findings are exciting, given the mixed results of prior attempts to improve decision-making using uncertainty visualizations [2], [3], [10], [11], [5]. Nevertheless, MFVs are an emerging approach to uncertainty communication, and there are numerous open research challenges concerning implementation and visualization design choices that best support decision-making. ...

The Powerful Influence of Marks: Visual and Knowledge-Driven Processing in Hurricane Track Displays

Journal of Experimental Psychology: Applied

... In the distance estimation task environment, participants estimated the egocentric distance of a virtual traffic cone target object placed at 4 m, 4.75 m, 5.5 m, 6.25 m, and 7 m in each trial. These distances were chosen because the majority of previous studies found people underestimate distances in action space [Adams et al. 2022; Buck et al. 2018; Creem-Regehr et al. 2023; Kelly 2022; Rosales et al. 2019]. A horizontal guideline was also rendered on the ground to represent the starting location of distance estimation. ...

Distance Judgments to On- and Off-Ground Objects in Augmented Reality

... This process is referred to as "material perception" in this study. Unlike simple visual attributes such as shape and color, material perception involves the complex interactions between an object's material, shape, and lighting, which generates diverse patterns in the retinal image [1]. Therefore, as with surface color perception, it is impossible to uniquely determine a material's properties from the retinal image alone. ...

Visual Perception from a Computer Graphics Perspective
  • Citing Book
  • April 2016

... indicated, the information regarding the distance traveled within a space plays a supportive role in spatial learning of new environments (Rand et al., 2019). Besides, understanding is facilitated by the sequencing and integration of a series of landmarks and their locations, thereby forming the pathways constituting the surroundings (Van Der Kuil et al., 2021). ...

Going the distance and beyond: simulated low vision increases perception of distance traveled during locomotion

Psychological Research