Figure 2 - uploaded by Haley Adams
A participant viewing an AR cube with the HoloLens in a real-world laboratory space.

Source publication
Conference Paper
Full-text available
Augmented reality (AR) technologies have the potential to provide individuals with unique training and visualizations, but the effectiveness of these applications may be influenced by users' perceptions of the distance to AR objects. Perceived distances to AR objects may be biased if these objects do not appear to make contact with the ground plane...

Similar publications

Article
Full-text available
We assessed the contribution of binocular disparity and the pictorial cues of linear perspective, texture, and scene clutter to the perception of distance in consumer virtual reality. As additional cues are made available, distance perception is predicted to improve, as measured by a reduction in systematic bias, and an increase in precision. We as...

Citations

... In the distance estimation task environment, participants estimated the egocentric distance of a virtual traffic cone target object placed at 4 m, 4.75 m, 5.5 m, 6.25 m, and 7 m in each trial. These distances were chosen because the majority of previous studies found people underestimate distances in action space [Adams et al. 2022; Buck et al. 2018; Creem-Regehr et al. 2023; Kelly 2022; Rosales et al. 2019]. A horizontal guideline was also rendered on the ground to represent the starting location of distance estimation. ...
Conference Paper
Full-text available
Omnidirectional treadmills provide one solution for locomoting through large virtual environments in confined physical spaces. Through two experiments, this paper evaluated locomotion on an omnidirectional treadmill (Cyberith Virtualizer Elite 2) by comparing it to natural walking in an open physical space. In Experiment 1, participants judged distances and completed a path integration task using the treadmill and natural walking. Participants walked further on the treadmill but had larger angular errors during path integration, potentially due to increased cybersickness. Experiment 2 varied path lengths during path integration and found that longer paths led to higher cybersickness scores but did not affect performance. The paper offers interpretations and suggestions for using omnidirectional treadmills in virtual reality.
... In VEs presented by HMDs, distances are generally underestimated in action space (2 – 30 m) [Cutting and Vishton 1995] and beyond [Adams et al. 2022; Buck et al. 2018, 2021; Creem-Regehr et al. 2023; Kelly 2023; Rosales et al. 2019]. A recent meta-analysis [Kelly 2023] found that field-of-view (FOV), weight, and pixel density were technical factors of HMDs that contributed significantly to this underestimation, but there were still unexplained sources of variance. ...
Conference Paper
Full-text available
Most modern head-mounted displays (HMDs) do not support the full range of adult inter-pupillary distances (IPDs) (i.e., 45 – 80 mm) due to technological limitations. Prior work indicates that the mismatch between a user’s actual IPD and the IPD set in the HMD (“IPD mismatch”) can affect distance and size judgments in near space (0 – 2 m). Therefore, users with IPDs outside of the supported HMD IPD range may not perceive virtual environments (VEs) accurately. Across three experiments, we investigated whether IPD mismatch significantly affects people’s distance judgments at longer distances (4 – 7 m). In two of the experiments, we recruited participants with IPDs smaller than the minimum supported IPD of the HTC Vive Pro HMD. They estimated distances in action space using verbal estimation (Experiment 1) and blind walking (Experiment 2) measures in indoor VEs. We found that: (i) distances were underestimated in action space, and (ii) IPD mismatch had minimal to no effect on their distance judgments. In a third experiment, we investigated whether we could generalize our findings to participants with an IPD within the supported HMD IPD range. We were able to replicate our previous findings. Overall, our findings suggest that IPD mismatch in an HMD may not be a major factor in distance underestimation in action space in VEs.
... This indicates an overestimation of target distance relative to the reference line, which may reflect a partial misinterpretation of its greater height-in-the-field as a cue to distance 1,2 . This effect has been found for similar contexts in augmented reality 31 . ...
Article
Full-text available
Shadows are copious in physical space, yet the impact of specific shadow placement and abundance in virtual environments has yet to be determined. This experiment aimed to identify whether a target’s shadow was used as a distance indicator in the presence of binocular distance cues. Six lighting conditions were created and presented in virtual reality for participants to perform a perceptual matching task. The task was repeated in a cluttered and a sparse environment, where the number of cast shadows (and their placement) varied. Performance in this task was measured by the directional bias of distance estimates and the variability of responses. No significant difference was found between the sparse and cluttered environments; however, given the large amount of variance, one explanation is that some participants utilised the clutter objects as anchors to aid them, while others found them distracting. Under-setting of distances was found in all conditions and environments, as predicted. An ambient light source produced the most variable and inaccurate estimates of distance, whereas lighting positioned above the target reduced the misestimation of perceived distances.
... The results showed that there is a difference between both perceptions. However, it was also shown that this perception depends on whether the vision is monocular or binocular [217]. Plenty of research has been done in outdoor navigation and indoor navigation areas with AR [214]. ...
Article
Full-text available
Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines different immersive experiences and solutions that serve as integrated components of a workable, adaptive whole across many realms. These solutions include tracking, which maintains a point of reference so that virtual objects remain correctly placed in a real scene. Similarly, display technologies combine the virtual and real world with the user's eye. Authoring tools provide platforms for developing AR applications by giving access to low-level libraries, which can in turn interact with the hardware of tracking sensors, cameras, and other technologies. In addition, advances in distributed computing and collaborative augmented reality also need stable solutions so that multiple participants can collaborate in an AR setting. The authors of this research have explored many solutions in this regard and present a comprehensive review to aid research and to improve different business transformations. However, during the course of this study, we identified a lack of security solutions in various areas of collaborative AR (CAR), specifically in distributed trust management for CAR. This study therefore also proposes a trusted CAR architecture, with a tourism use case, that can serve as a model for researchers interested in securing AR-based remote communication sessions.
... Because the human visual system treats floating objects as though they are located on the ground plane (in the absence of information specifying otherwise), floating targets are typically perceived as farther away [29,31]. In augmented reality, as well, the influence of optical contact on distance perception has been demonstrated in the Microsoft HoloLens 1 [81]. Specifically, Salas-Rosales and colleagues demonstrated that floating virtual targets were perceived as on the ground but farther away in AR when no surface contact information, like cast shadows, was present. ...
... When cues that link objects to nearby surfaces are absent, individuals judge distance based on optical contact-or the location at which the projected image of an object contacts the image of the ground beneath it-to determine position in space [64,67,70]. As a result, in the absence of cues specifying that a target is above the ground, distance judgments to targets positioned above the ground are perceived as on the ground but farther away [67,81]. This phenomenon has been demonstrated in both real [77,78] and virtual environments [13,65] within action space, which ranges between 2 m and 30 m [20]. ...
... This phenomenon has been demonstrated in both real [77,78] and virtual environments [13,65] within action space, which ranges between 2 m and 30 m [20]. More recently, Salas-Rosales et al. [81] have confirmed this effect in augmented reality. Specifically, Salas-Rosales and colleagues demonstrated that floating targets are perceived as farther away than grounded ones in an optical see-through augmented reality (OST AR) display, the Microsoft HoloLens 1. ...
Conference Paper
Full-text available
Although it is commonly accepted that depth perception in augmented reality (AR) displays is distorted, we have yet to isolate which properties of AR affect people’s ability to correctly perceive virtual objects in real spaces. From prior research on depth perception in commercial virtual reality, it is likely that ergonomic properties and graphical limitations impact visual perception in head-mounted displays (HMDs). However, an insufficient amount of research has been conducted in augmented reality HMDs for us to begin isolating pertinent factors in this family of displays. To this end, in the current research, we evaluate absolute measures of distance perception in the Microsoft HoloLens 2, an optical see-through AR display, and the Varjo XR-3, a video see-through AR display. The current work is the first to evaluate either device using absolute distance perception as a measure. For each display, we asked participants to verbally report distance judgments to both grounded and floating targets that were rendered either with or without a cast shadow along the ground. Our findings suggest that currently available video see-through displays may induce more distance underestimation than their optical see-through counterparts. We also find that the vertical position of an object and the presence of a cast shadow influence depth perception.
... In future research, it may be worthwhile to evaluate how object shape and cast shadow shading manipulations affect more direct measures of depth perception. Rosales et al. [51] demonstrated that, in the absence of cast shadows, people perceive an object that is placed above the ground incorrectly as farther away. This may help explain some of the effects of overestimation found in prior AR depth perception research, especially given that many studies use floating objects in their assessments [46,55,61]. ...
Preprint
Full-text available
The information provided to a person's visual system by extended reality (XR) displays is not a veridical match to the information provided by the real world. Due in part to graphical limitations in XR head-mounted displays (HMDs), which vary by device, our perception of space may be altered. However, we do not yet know which properties of virtual objects rendered by HMDs -- particularly augmented reality displays -- influence our ability to understand space. In the current research, we evaluate how immersive graphics affect spatial perception across three unique XR displays: virtual reality (VR), video see-through augmented reality (VST AR), and optical see-through augmented reality (OST AR). We manipulated the geometry of the presented objects as well as the shading techniques for objects' cast shadows. Shape and shadow were selected for evaluation as they play an important role in determining where an object is in space by providing points of contact between an object and its environment -- be it real or virtual. Our results suggest that a non-photorealistic (NPR) shading technique, in this case for cast shadows, may be used to improve depth perception by enhancing perceived surface contact in XR. Further, the benefit of NPR graphics is more pronounced in AR than in VR displays. One's perception of ground contact is influenced by an object's shape, as well. However, the relationship between shape and surface contact perception is more complicated.
... The perception of spatial relations refers to the perception of absolute position, as well as the perception of spatial relations between objects, whether real or virtual. These perceptions are clearly linked, as shown by the work of Rosales et al. [82] on underground objects: they demonstrate that it is essential for users to understand that objects are below the ground in order to see them in the right place, and thus to perceive absolute positions correctly. It is therefore necessary to understand how humans perceive depth, and distances in particular, in order to influence this perception when it does not convey the right information. ...
... Occlusion: the order in which objects appear, one in front of another, conveys their spatial relations. As [82] demonstrated in their work, a cube above the ground tends to be perceived as on the ground, but farther away. ...
... Their study nevertheless has limitations, notably that the visualizations were only tested with objects up to 6 m from the user. We know from Cutting ... Rosales et al. [82] study distance perception with virtual objects above the ground and on the ground. They find that objects above the ground are perceived as farther away and on the ground. ...
Thesis
This doctoral thesis presents the work that led us to an effective and pleasant Augmented Reality system for managing maps of buried utility networks. More and more institutions require the underground to be mapped in order to avoid damaging existing infrastructure. Such damage is very dangerous, as these networks carry elements such as gas or electricity. Every year, many accidents injure people because of poor knowledge of the underground. Although safety procedures save lives, interrupting traffic remains inconvenient and costly. Beyond the fact that these maps are sometimes incorrect or even nonexistent, they are above all difficult to interpret in the field. Moreover, maintaining them requires halting work on site, causing delays that are harmful both to the operator managing the network and to the citizen who lives on the street where the work takes place and wishes to return home. This is why we propose a complete system, from visualization to modification of buried network maps in Augmented Reality. We conducted studies on four visualizations and their influence on the perception of virtual objects. We also studied two selection modes and two annotation modes in order to understand the potential benefits of a system composed of two devices: an AR headset and a smartphone. We thus proposed a system enabling visualization, selection, and modification of textual attribute data and of the position of buried networks, in a clear, effective, and low-fatigue manner, directly in situ and in real time.
... Thus, if a systematic bias in distance perception exists between virtual objects and real objects presented at the same distances, then the use of AR applications on mobile devices that require accurate distance perception could be limited. Evidence exists that distance judgments in AR are distorted in head-mounted displays (HMDs) [4,14,17], so the question for this paper is whether biases are observed also in AR for mobile devices. Some work has been done to investigate distance perception in AR displayed through tablets and smartphones, but the results have been mixed [1,6,11,13,20]. ...
... Although a significant amount of work has been done on distance estimation in AR [4][5][6][7][8][9][10][11][12][14][15][16][17][18][19][20][21], there is no consensus on the accuracy of perceived distance using various AR platforms. There could be various reasons for the lack of a consensus, including (1) environmental contexts and available cues, (2) methodologies and designs employed, (3) distances considered, and (4) the AR devices themselves. ...
... They found that people underestimated the distances in the indoor environment but overestimated the distances in the outdoor environment. However, other work has shown that distances estimated in personal (up to 2 m) to action spaces (2-30 m) using various optical see-through AR displays (such as the Microsoft HoloLens) are consistently underestimated [8,17,19]. ...
Conference Paper
Full-text available
Although Augmented Reality (AR) can be easily implemented with most smartphones and tablets today, the investigation of distance perception with these types of devices has been limited. In this paper, we question whether the distance of a virtual human, e.g., avatar, seen through a smartphone or tablet display is perceived accurately. We also investigate, due to the Covid-19 pandemic and increased sensitivity to distances to others, whether a coughing avatar that either does or does not have a mask on affects distance estimates compared to a static avatar. We performed an experiment in which all participants estimated the distances to avatars that were either static or coughing, with and without masks on. Avatars were placed at a range of distances that would be typical for interaction, i.e., action space. Data on judgments of distance to the varying avatars was collected in a distributed manner by deploying an app for smartphones. Results showed that participants were fairly accurate in estimating the distance to all avatars, regardless of coughing condition or mask condition. Such findings suggest that mobile AR applications can be used to obtain accurate estimations of distances to virtual others "in the wild," which is promising for using AR for simulations and training applications that require precise distance estimates.
... The method reported here operates in the range of 50 to 80 cm. Overall, measured depth perception accuracy in AR has been variable, with a generally established finding that the depth of AR virtual objects is usually underestimated [18,26,28,29]. ...
... The method reported here was tested on a Microsoft HoloLens 1st generation AR display. Recent work has also reported using HoloLens 1st generation displays to estimate the perceived depth of AR objects, and again found underestimation [9,26]. However, Fischer et al. [7] used a HoloLens 1st generation display to investigate perceptually aligning real and virtual information in a reaching space medical context, and found overestimation. ...
... While depth perception represents a single scalar quantity, perceived 3D location is more complex, consisting not only of depth (z-axis) but also of abscissa (x-axis) and ordinate (y-axis) information. The proposed method measures all three dimensions, and thus promises to enable better ways of measuring the effects and interactions of additional cues on perceived location, such as shadows [4,26], ground cues [4], and familiar size [19]. In addition, instead of focusing on perception, many researchers have investigated related challenges associated with object registration and tracking in AR [2,16,17,21,31,32]. ...
Conference Paper
For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object's location relative to both a vertical and horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object within an accuracy of less than 1 mm.
... We used two interaction tasks in our experiments: the lamp brightness adjustment task and the cube manipulation task, as shown in Fig. 4. The lamp task [64] and cube task [5], [65] are commonly used AR interaction tasks that represent a series of control tasks in desktop applications. These two tasks allow us to compare five modalities across different backgrounds, magnitudes, and frameworks. ...
Article
Multimodal interaction has become a recent research focus since it offers a better user experience in augmented reality (AR) systems. However, most existing works only combine two modalities at a time, e.g., gesture and speech. Multimodal interactive systems integrating gaze cues have rarely been investigated. In this article, we propose a multimodal interactive system that integrates gaze, gesture, and speech in a flexibly configurable AR system. Our lightweight head-mounted device supports accurate gaze tracking, hand gesture recognition, and speech recognition simultaneously. The system can be easily configured into various modality combinations, which enables us to investigate the effects of different interaction techniques. We evaluate the efficiency of these modalities using two tasks: the lamp brightness adjustment task and the cube manipulation task. We also collect subjective feedback when using such systems. The experimental results demonstrate that the Gaze+Gesture+Speech modality is superior in terms of efficiency, while the Gesture+Speech modality is more preferred by users. Our system opens a pathway toward a multimodal interactive AR system that enables flexible configuration.