August 2023 · 18 Reads · Journal of Vision
December 2022 · 2 Reads · Journal of Vision
June 2022 · 50 Reads · 13 Citations
Virtual Reality (VR) technology has advanced to include eye-tracking, allowing novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near to far depth significantly decreased EVA and increased IPD, whereas the opposite pattern was observed when shifting from far to near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, supporting the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could be used as a new tool to adaptively render information based on depth and improve the VR user experience.
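As a minimal sketch of the two derived variables (not the authors' implementation; the function and variable names are hypothetical, and the eye tracker is assumed to report a 3D gaze origin and a gaze direction vector per eye), EVA and IPD can be computed like this:

```python
import numpy as np

def eye_metrics(left_origin, left_dir, right_origin, right_dir):
    """Derive eye vergence angle (EVA, degrees) and interpupillary
    distance (IPD, same units as the origins) from per-eye gaze data."""
    left_dir = np.asarray(left_dir) / np.linalg.norm(left_dir)
    right_dir = np.asarray(right_dir) / np.linalg.norm(right_dir)
    # EVA: angle between the two gaze rays; it grows as the eyes
    # converge on a nearer fixation point and shrinks for far fixation.
    cos_a = np.clip(np.dot(left_dir, right_dir), -1.0, 1.0)
    eva_deg = np.degrees(np.arccos(cos_a))
    # IPD: Euclidean distance between the two gaze origins (pupils).
    ipd = np.linalg.norm(np.asarray(right_origin) - np.asarray(left_origin))
    return eva_deg, ipd

# Example: eyes 63 mm apart, both fixating a point 0.5 m straight ahead.
left = np.array([-0.0315, 0.0, 0.0])
right = np.array([0.0315, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
eva, ipd = eye_metrics(left, target - left, right, target - right)
# eva ≈ 7.2 degrees, ipd = 0.063 m
```

Moving the fixation point farther away in this sketch reduces `eva` toward zero, matching the near-to-far pattern reported in the abstract.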
March 2022 · 107 Reads · 3 Citations
An important research question in optical see-through (OST) augmented reality (AR) is: how accurately and precisely can a virtual object's real-world location be perceived? Previously, a method was developed to measure the perceived three-dimensional location of virtual objects in OST AR. In this research, a replication study is reported, which examined whether the perceived locations of virtual objects are biased in the direction of the dominant eye. The successful replication analysis suggests that perceptual accuracy is not biased in the direction of the dominant eye. Compared to the previous study's findings, overall perceptual accuracy increased, and precision was similar.
March 2022 · 93 Reads · 1 Citation
Figure 1: Experimental methodology. The triangulation-by-walking task (a) is performed for each condition: opaque wall (b), virtual window (c), and virtual window and background (d).

Abstract: Accurate and usable x-ray vision is a significant goal in augmented reality (AR) development. X-ray vision, the ability to comprehend location and object information when it is presented through an opaque barrier, must successfully convey scene information to be a viable use case for AR. Further, this investigation should be performed in an ecologically valid context in order to best test x-ray vision. This research experimentally evaluates the perceived location of stimuli presented with x-ray vision, as compared to the perceived location of real-world stimuli viewed through a window, at action-space distances of 1.5 to 15 meters.
February 2022 · 88 Reads · 26 Citations · IEEE Transactions on Visualization and Computer Graphics
In optical see-through augmented reality (AR), information is often distributed between real and virtual contexts, and often appears at different distances from the user. To integrate information, users must repeatedly switch context and change focal distance. If the user's task is conducted under time pressure, they may attempt to integrate information while their eye is still changing focal distance, a phenomenon we term transient focal blur. Previously, Gabbard, Mehra, and Swan (2018) examined these issues, using a text-based visual search task on a one-eye optical see-through AR display. This paper reports an experiment that partially replicates and extends this task on a custom-built AR Haploscope. The experiment examined the effects of context switching, focal switching distance, binocular and monocular viewing, and transient focal blur on task performance and eye fatigue. Context switching increased eye fatigue but did not decrease performance. Increasing focal switching distance increased eye fatigue and decreased performance. Monocular viewing also increased eye fatigue and decreased performance. The transient focal blur effect resulted in additional performance decrements, and adds to knowledge about AR user interface design issues.
October 2021 · 157 Reads · 12 Citations
For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object's location relative to both a vertical and horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st-generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st-generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object with an accuracy of better than 1 mm.
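To illustrate the accuracy and precision measures discussed above, here is a minimal sketch (hypothetical data and variable names, not the authors' analysis code), assuming repeated (x, y, z) location reports in centimeters for one virtual object with a known true location:

```python
import numpy as np

# Hypothetical repeated location reports (cm) for one virtual object,
# read off the vertical and horizontal grids across trials.
reports = np.array([
    [50.2, 0.3, 10.1],
    [49.9, 0.4, 10.0],
    [50.1, 0.2, 10.2],
    [50.0, 0.5, 10.1],
])
true_location = np.array([50.0, 0.0, 10.0])  # cm

errors = reports - true_location
# Accuracy: mean signed error per axis (systematic bias, e.g. a
# rightward bias would appear as a nonzero lateral component).
accuracy = errors.mean(axis=0)
# Precision: sample standard deviation of the reports per axis.
precision = reports.std(axis=0, ddof=1)
```

In this toy data set, a consistent positive error on one axis shows up in `accuracy` as a bias, while the spread of repeated reports determines `precision`.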
July 2020 · 6 Reads · Lecture Notes in Computer Science
The original version of this chapter was revised: an acknowledgement was inadvertently omitted and has now been added.
July 2020 · 45 Reads · 3 Citations · Lecture Notes in Computer Science
In room-clearing tasks, SWAT team members suffer from a lack of initial environmental information: knowledge about what is in a room and what relevance or threat level it represents for mission parameters. Normally this gap in situation awareness is rectified only upon room entry, forcing SWAT team members to rely on quick responses and near-instinctual reactions. This can lead to dangerously escalating situations or important missed information which, in turn, can increase the likelihood of injury and even mortality. Thus, we present an x-ray vision system for the dynamic scanning and display of room content, using a robotic platform to mitigate operator risk. This system maps a room using a robot-mounted stereo depth camera and, using an augmented reality (AR) system, presents the resulting spatial information according to the perspective of each officer. This intervention has the potential to notably lower risk and increase officer situation awareness, all while team members are in the relative safety of cover. Given these stakes, it is important to test the viability of this system both on its own and in an operational SWAT team context.
March 2020 · 100 Reads · 18 Citations
In augmented reality (AR) environments, information is often distributed between real and virtual contexts, and often appears at different distances from the user. Therefore, to integrate the information, users must repeatedly switch context and refocus the eyes. Previously, Gabbard, Mehra, and Swan (2018) examined these issues, using a text-based visual search task and a monocular optical see-through AR display. In this work, the authors report a replication of this earlier experiment, using a custom-built AR haploscope. The successful replication, on a very different display, is consistent with the hypothesis that the findings are a general property of AR.
... Incorrect inter-pupillary distance (IPD) can result in incorrect depth estimation, since stereoscopic vision is affected. A smaller IPD can lead to an overestimation of depth in comparison with a larger IPD [2,44]. ...
June 2022
... In particular, VST displays have been reported to cause distance underestimation, which can be greater than with OST displays [24], [25]. In addition, looking downwards at a horizontal plane caused a systematic perceptual bias [26]. Lighting misalignment between real and virtual lights negatively affected distance perception [27]. ...
March 2022
... Reduced accommodation responses might be an objective indicator of the impact of the VAC from AR on the oculomotor system, and one possible contributor to effects such as increased time to focus [17] in fixed focal plane AR. Consequently, AR-HMD users may experience visuo-oculomotor dysfunction, especially when reduced accommodation presents in tasks that demand focal distance switching and context switching [57,58], such as visually and cognitively demanding industrial and educational tasks. ...
February 2022
IEEE Transactions on Visualization and Computer Graphics
... The influence of eye dominance on stereo acuity remains uncertain. However, there is evidence of a bias in the 3D location of objects, with eye dominance being considered a contributing factor (Khan et al., 2021). Future work should investigate the relationship between eye dominance and stereo acuity, simultaneously examining the participants' FOV to understand the impact on the virtual experience. ...
October 2021
... Huckauf et al. investigated the costs of context switching between a monocular OST HMD and a CRT placed at the same focus distance [50]. Gabbard et al. [51], [52], [53], [54] examined context switching and differing focal distances between a panel display and a monocular OST HMD (or a haploscope) using a text-based visual search task. Both context switching and focal distance switching resulted in significantly reduced performance. ...
March 2020
... First, AR users retain their proprioception of self even while immersed in a virtual environment. Fully virtual environments often neglect users' physical space in the real world (Smink et al., 2019), hindering their capacity to match the virtual experience to the real world, as required for external representation of the mental image in a drawing course. Second, the retention of sensorimotor function in AR allows the user to combine 3D objects in the virtual or physical environment without losing the advantage of either object or individual movement. ...
June 2019
Lecture Notes in Computer Science
... Drivers receive AR information from heads-up displays without diverting their gaze from the road, making use of the proximity-compatibility-principle [47], a crucial aspect in in-vehicle display design [48]. However, even if the AR graphics are presented at the same focal depth as real-world references, there is a cognitive cost to switching between the two [49], potentially leading to inattentional blindness. The size and prominence of the display imagery can also hinder drivers' perception of the far domain [50], but the exact impact of display imagery saliency on drivers' detection capabilities is unknown. ...
May 2018
IEEE Transactions on Visualization and Computer Graphics
... There are different sampling methods like Node Sampling, Edge Sampling, Traversal-based sampling (Zhang et al., 2017) and Sampling with Neighborhood (VSN) (Hu & Lau, 2013). Our intention is not to give a detailed review of graph sampling strategies and methods, but we would like to mention the ones used in the context of frequent subgraph discovery by static and dynamic frequent subgraph mining algorithms. ...
January 2017
Electronic Imaging
... However, Novick and Rodriguez [28] observed that people tend to position themselves closer to virtual agents in AR compared to real-world F2F interactions. In addition, the type of device used plays a role in distance estimation, for instance, smartphones often produce accurate distance estimates [6], whereas tablets show distortions, with observers expanding midpoint intervals at 15 m but compressing them at 30 m [35]. These mixed findings underscore the need for deeper investigation into proxemic behavior and spatial interaction patterns in AR. ...
December 2016
... Despite the potential advantages, X-ray-based AR content overlaid on the real environment will also present new challenges to human visual perception. On the one hand, the AR content must be able to accurately represent spatial relationships such as occlusion layers (Livingston et al. 2003) or depth cues (Vaziri et al. 2021); on the other hand, users with AR head-worn displays (HWDs) will need to balance the allocation of their visual resources (i.e. attention) between virtual and real elements to avoid missing key information in the physical environment. ...
October 2003