Conference Paper

[POSTER] Halo3D: A Technique for Visualizing Off-Screen Points of Interest in Mobile Augmented Reality

... From these scenarios, we derive two challenges for handheld AR industrial maintenance guidance systems: (1) manage POI-dense environments and (2) keep intrusion minimal and constant along the screen's edges to avoid overloading the focus area, which is used to consult data of selected POIs. Perea et al. introduced Halo3D ([17], [18]), an AR technique for visualizing off-screen POIs on handheld devices. Halo3D is adapted from Halo [1], a 2D visualization technique. ...
... Perea et al. ([17], [18]) provided a first attempt to adapt Halo [1] to 3D Augmented Reality (AR) environments using a handheld device. The radius of the halo is computed by first projecting the POI onto the device's screen plane, and then considering the distance between the projected POI and the edge of the display (see Figure 2a). ...
... The implementation of Halo3D follows the design described by Perea et al. ([17], [18]) with two extensions. First, we extend Halo3D to guarantee constant visual intrusion on screen. ...
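The radius computation described in the preceding excerpts (project the POI onto the device's screen plane, then take its distance to the display edge) can be sketched as follows. This is a minimal illustration under assumed conventions (column-vector view-projection matrix with perspective divide, pixel-space screen coordinates, and a hypothetical `margin` for border intrusion), not the authors' actual implementation:

```python
import numpy as np

def halo3d_radius(poi_world, view_proj, screen_w, screen_h, margin=20.0):
    """Project a 3D POI onto the screen plane and derive a halo radius
    from its distance to the nearest display edge (sketch only)."""
    # Homogeneous projection followed by perspective divide (assumed
    # column-vector convention).
    p = view_proj @ np.append(poi_world, 1.0)
    ndc = p[:2] / p[3]
    # Map normalized device coordinates [-1, 1] to pixel coordinates.
    x = (ndc[0] * 0.5 + 0.5) * screen_w
    y = (ndc[1] * 0.5 + 0.5) * screen_h
    # How far the projected POI lies outside the screen rectangle
    # (zero on both axes means the POI is on-screen).
    dx = max(-x, x - screen_w, 0.0)
    dy = max(-y, y - screen_h, 0.0)
    # The halo radius reaches from the POI just past the nearest edge;
    # `margin` is a hypothetical border-intrusion constant in pixels.
    return float(np.hypot(dx, dy) + margin)
```

Halos are only drawn for off-screen POIs, so in practice such a function would be invoked only when `dx` or `dy` is positive.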
Conference Paper
Navigating Augmented Reality (AR) environments with a handheld device often requires users to access digital content (i.e., Points of Interest, POIs) associated with physical objects outside the field of view of the device's camera. Halo3D is a technique that displays the location of off-screen POIs as halos (arcs) along the edges of the screen. Halo3D reduces clutter by aggregating POIs but has not been evaluated. The results of a first experiment show that an enhanced version of Halo3D was 18% faster than the focus+context technique AroundPlot* for pointing at a POI, and was perceived as 34% less intrusive than the arrow-based technique Arrow2D. The results of a second experiment in more realistic settings reveal that two variants of Halo3D that show the spatial distribution of POIs in clusters (1) enable an effective understanding of the off-screen environment and (2) require less effort than AroundPlot* to find POIs in the environment.
... More recent work has looked into transferring existing off-screen visualization techniques to 3D space, for example Halo3D by Perea et al. [18]. They developed 3D halos for mobile Augmented Reality applications, but their approach does not consider head-mounted devices. ...
Conference Paper
Full-text available
Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out of view objects (e.g., locating points-of-interest during sightseeing). To address this, we developed and evaluated in two studies HaloVR, WedgeVR, HaloAR and WedgeAR, which are inspired by usable 2D off-screen object visualization techniques (Halo, Wedge). While our techniques resulted in overall high usability, we found the choice of AR or VR impacts mean search time (VR: 2.25s, AR: 3.92s) and mean direction estimation error (VR: 21.85°, AR: 32.91°). Moreover, while adding more out-of-view objects significantly affects search time across VR and AR, direction estimation performance remains unaffected. We provide implications and discuss the challenges of designing for VR and AR HMDs.
Article
Navigation and situation awareness in 3D environments are often required by emergency responders such as firefighters, police, and military soldiers. This paper investigates a collaborative, cross-platform immersive system to improve team navigation through effective real-time communication. We explore a set of essential building navigation, visualization, and interaction methods to support joint tasks in physical building environments by leveraging device sensors and image markers. Our platform also supports efficient exchange of useful visual information to improve coordination and situation awareness for multiple users performing real-time operations in smart buildings. We have performed a user study to evaluate different devices used in coordination tasks. Our results demonstrate the effects of immersive visualization in improving 3D navigation and coordination for on-site collaboration in a real physical environment.
Article
Search tasks in production environments have been shown to be significantly improved by attention-guiding techniques in Augmented Reality (AR). In mobile use cases, head-mounted displays support context-sensitive and flexible search tasks. To improve search time and navigation efficiency, first concepts like the attention funnel and spherical wave-based guidance techniques have been tested successfully in comparison to simple arrow navigation. Existing research indicates that for two-dimensional interfaces, a differentiation into coarse and fine navigation results in further performance improvements. In this paper, a laboratory study with a stereoscopic head-mounted display is presented that examines whether an additional arrow-based technique for coarse navigation could improve both the search time and the efficiency of existing techniques. Results show objectively superior performance for one of the combined techniques in comparison to the unimodal attention-guiding techniques, and subjective preference by the participants. Further research should investigate performance in a real environment.
Article
Full-text available
This research explores the design and evaluation of visualization techniques of targets that reside outside of users' view or are occluded by other elements within a virtual reality environment (VE). We first compare four techniques (3DWedge, 3DArrow, 3DMinimap, and Radar) that can provide direction and distance information of targets. To give structure to their evaluation, we also develop a framework of four tasks (one for direction and three for distance) and their assessment criteria. The results show that 3DWedge is the best-performing and most usable technique. However, all techniques, including 3DWedge, have poor performance in dense scenarios with a large number of targets. To improve their support in dense scenarios, a fifth technique, 3DWedge+, is developed by using 3DWedge as its foundation and including the visual elements of the other three techniques that are useful. A second study is conducted to evaluate the performance of 3DWedge+ in relation to the other techniques. The results show that both 3DWedge and 3DWedge+ are significantly better in distinguishing user-to-target distance and that 3DWedge+ is particularly suitable for dense scenarios. Based on these results, we provide a set of recommendations for the design of visualization techniques of off-screen and occluded targets in 3D VE.
Conference Paper
Full-text available
Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field-of-view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy of multi-colored cues presented on one versus two eyes. We then evaluated direction estimation accuracy and search time performance for locating out-of-view objects in two representative 360° video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, participants estimated LED cue direction within a maximum 11.8° average deviation, and out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral HMDs.
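A radial layout of 18 LEDs implies a straightforward quantization from target direction to LED index. A minimal sketch follows; the function name and the degrees-based angle convention are assumptions for illustration, not details from the paper:

```python
def led_for_direction(angle_deg, num_leds=18):
    """Map a target direction in degrees to the index of the nearest
    of `num_leds` evenly spaced radial LEDs (sketch only)."""
    step = 360.0 / num_leds  # 20 degrees per LED when num_leds == 18
    # Wrap the angle into [0, 360) and round to the nearest LED slot.
    return int(round((angle_deg % 360.0) / step)) % num_leds
```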
Presentation
Full-text available
Presentation of Research Paper "Strategies for Visualizing Points-of-Interest of 3D Virtual Environments on Mobile Devices"
Article
Full-text available
Overview+Detail visualization is one of the major approaches to the display of large information spaces on a computer screen. Widely used in desktop applications, its feasibility on mobile devices has been scarcely investigated. This paper first provides a detailed analysis of the literature on Overview+Detail visualization, discussing and comparing the results of desktop and mobile studies to highlight strengths and weaknesses of the approach. The analysis reveals open issues worthy of additional investigation and can provide useful indications to interface designers. Then, the paper presents an experiment that studies unexplored aspects of the design space for mobile interfaces based on the Overview+Detail approach, investigating the effect of letting users manipulate the overview to navigate maps and the effect of highlighting possible objects of interest in the overview to support search tasks. Results of the experiment suggest that both direct manipulation of the overview and highlighting objects of interest in the overview have a positive effect on user performance in terms of the time to complete search tasks on mobile devices, but do not provide specific advantages in terms of recall of the spatial configuration of targets.
Article
Full-text available
The literature on information visualizations establishes the usability of overview+detail interfaces, but for zoomable user interfaces, results are mixed. We compare overview+detail and zoomable user interfaces to understand the navigation patterns and usability of these interfaces. Thirty-two subjects solved navigation and browsing tasks on maps organized in one or multiple levels. We find no difference between interfaces in subjects' ability to solve tasks correctly. Eighty percent of the subjects prefer the overview+detail interface, stating that it supports navigation and helps keep track of their position on the map. However, subjects are faster using the zoomable user interface, especially in combination with the multi-level map and when solving navigation tasks. The combination of the zoomable user interface and the multi-level map also improves subjects' recall of objects on the map. Switching between overview and detail windows was correlated with higher task completion time, suggesting that integration of overview and detail windows require mental and motor effort. We found large individual differences in navigation patterns and usability, but subjects' visualization ability influenced usability similarly between interfaces.
Article
Full-text available
Usability does not exist in any absolute sense; it can only be defined with reference to particular contexts. This, in turn, means that there are no absolute measures of usability, since, if the usability of an artefact is defined by the context in which that artefact is used, measures of usability must of necessity be defined by that context too. Despite this, there is a need for broad general measures which can be used to compare usability across a range of contexts. In addition, there is a need for "quick and dirty" methods to allow low-cost assessments of usability in industrial systems evaluation. This chapter describes the System Usability Scale (SUS), a reliable, low-cost usability scale that can be used for global assessments of systems usability.
Conference Paper
Full-text available
3D virtual environments are increasingly used as a general-purpose medium for communicating spatial information. In particular, virtual 3D city models have numerous applications such as car navigation, city marketing, tourism, and gaming. In these applications, points-of-interest (POIs) play a major role since they typically represent features relevant for specific user tasks and facilitate effective user orientation and navigation through the 3D virtual environment. In this paper, we present strategies that aim at effectively visualizing points-of-interest in a 3D virtual environment used on mobile devices. Here, we additionally have to face the "keyhole" situation, i.e., users can perceive only a small part of the environment due to the limited view space and resolution. For the effective visualization of points-of-interest in 3D virtual environments we propose to combine specialized occlusion management for 3D scenes with visual cues that handle out-of-frame points-of-interest. We also discuss general aspects and definitions of points-of-interest in the scope of 3D models and outline a prototype implementation of the mobile 3D viewer application based on the presented concepts. In addition, we give a first performance evaluation with respect to rendering speed and power consumption.
Conference Paper
Full-text available
As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.
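The ring geometry described above (a ring around each off-screen object, sized just large enough to reach into the border region of the display) admits a compact sketch. Here `intrusion` is an illustrative parameter for how far the arc enters the screen, not a value taken from the paper:

```python
def halo_ring_radius(obj_x, obj_y, screen_w, screen_h, intrusion=15.0):
    """Radius of a Halo ring centered on an off-screen object, sized so
    the ring just reaches `intrusion` pixels into the display (sketch)."""
    # Nearest point on the screen rectangle to the object.
    nearest_x = min(max(obj_x, 0.0), screen_w)
    nearest_y = min(max(obj_y, 0.0), screen_h)
    # Distance from the object to the screen edge, plus the desired
    # intrusion into the visible border region.
    dist = ((obj_x - nearest_x) ** 2 + (obj_y - nearest_y) ** 2) ** 0.5
    return dist + intrusion
```

From the visible arc, the user can extrapolate the ring's center and thus the off-screen object's position; a larger visible arc curvature implies a nearer object.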
Conference Paper
Full-text available
To overcome display limitations of small-screen devices, researchers have proposed techniques that point users to objects located off-screen. Arrow-based techniques such as City Lights convey only direction. Halo conveys direction and distance, but is susceptible to clutter resulting from overlapping halos. We present Wedge, a visualization technique that conveys direction and distance, yet avoids overlap and clutter. Wedge represents each off-screen location using an acute isosceles triangle: the tip coincides with the off-screen location, and the two corners are located on-screen. A wedge conveys location awareness primarily by means of its two legs pointing towards the target. Wedges avoid overlap programmatically by repelling each other, causing them to rotate until overlap is resolved. As a result, wedges can be applied to numbers and configurations of targets that would lead to clutter if visualized using halos. We report on a user study comparing Wedge and Halo for three off-screen tasks. Participants were significantly more accurate when using Wedge than when using Halo.
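The wedge geometry above (tip at the off-screen target, two base corners on-screen) can be sketched by choosing a leg length that carries the base past the screen edge. `intrusion` and `aperture_deg` are illustrative parameters, and the overlap-resolving rotation is omitted:

```python
import math

def wedge_corners(target_x, target_y, screen_w, screen_h,
                  intrusion=10.0, aperture_deg=20.0):
    """Compute the two on-screen base corners of a wedge whose tip sits
    at an off-screen target (sketch; overlap avoidance not included)."""
    # Nearest point on the screen rectangle to the target.
    nx = min(max(target_x, 0.0), screen_w)
    ny = min(max(target_y, 0.0), screen_h)
    # Legs long enough to reach `intrusion` pixels past the edge.
    leg = math.hypot(target_x - nx, target_y - ny) + intrusion
    # Direction from the target toward the screen interior, spread by
    # half the aperture on each side to form the isosceles triangle.
    base_angle = math.atan2(ny - target_y, nx - target_x)
    half = math.radians(aperture_deg) / 2.0
    return [(target_x + leg * math.cos(a), target_y + leg * math.sin(a))
            for a in (base_angle - half, base_angle + half)]
```

Because both legs emanate from the tip, extending them visually converges on the target; the repelling rotation described in the abstract would then perturb `base_angle` per wedge until no two wedges overlap.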
Conference Paper
Full-text available
An emerging technology for tourism information systems is mobile Augmented Reality using the position and orientation sensors of recent smartphones. State-of-the-art mobile Augmented Reality applications accompany the Augmented Reality visualization with a small mini-map to provide an overview of nearby points of interest (POIs). In this paper we develop an alternative visualization for nearby POIs based on off-screen visualization techniques for digital maps. The off-screen visualization uses arrows directly embedded into the Augmented Reality scene which point at the POIs. In the conducted study, 26 participants explored nearby POIs and had to interpret their positions. We show that participants are faster and can interpret the position of POIs more precisely with the developed visualization technique.
Conference Paper
Full-text available
The navigation support provided by user interfaces of Virtual Environments (VEs) is often inadequate and tends to be overly complex, especially in the case of large-scale VEs. In this paper, we propose a novel navigation aid that aims at allowing users to easily locate objects and places inside large-scale VEs. The aid exploits 3D arrows that point towards the objects and places the user is interested in. We illustrate and discuss the experimental evaluation we carried out to assess the usefulness of the proposed solution, contrasting it with more traditional 2D navigation aids. In particular, we compared subjects' performance in four conditions which differ in the type of navigation aid provided: three conditions employed, respectively, the proposed "3D arrows" aid, an aid based on 2D arrows, and a 2D aid based on a radar metaphor; the fourth condition was a control condition with no navigation aids available.
Article
Full-text available
When navigating large information spaces on mobile devices, the small size of the display often causes relevant content to shift off-screen, greatly increasing the difficulty of spatial tasks such as planning routes or finding points of interest on a map. Two possible approaches to mitigate the problem are Contextual Cues, i.e., visualizing abstract shapes in the border region of the view area to function as visual references to off-screen objects of interest, and Overview+Detail, i.e., simultaneously displaying a detail view and a small-scale overview of the information space. In this paper, we compare the effectiveness of two different Contextual Cues techniques, Wedge (Gustafson et al., 2008) and Scaled Arrows (Burigat et al., 2006), and a classical Overview+Detail visualization that highlights the location of objects of interest in the overview. The study involved different spatial tasks and investigated the scalability of the considered visualizations, testing them with two different numbers of off-screen objects. Results were multifaceted. With simple spatial tasks, no differences emerged among the visualizations. With more complex spatial tasks, Wedge had advantages when the task required ordering off-screen objects by their distance from the display window, while Overview+Detail was the best solution when users needed to find the off-screen objects that were closest to each other. Finally, we found that even a small increase in the number of off-screen objects negatively affected user performance in terms of accuracy, especially in the case of Scaled Arrows, while it had a negligible effect on task completion times.
Article
Full-text available
The spatial nature of large-scale virtual worlds introduces wayfinding problems which are often overlooked in the design process. In order to design and build useful virtual worlds in which real work can take place, these issues must be addressed. The research described here is a study of human wayfinding in virtual worlds and how real world solutions can be applied to virtual world design. The objective of this work is to develop design principles which will lead to a design methodology for virtual worlds in which wayfinding problems are alleviated.
Conference Paper
In games, aircraft navigation systems, and control systems, users have to track moving targets around a large workspace that may extend beyond the user's viewport. This paper presents ongoing work that investigates the effectiveness of two different off-screen visualization techniques for accurately tracking off-screen moving targets. We compare the most common off-screen representation, Halo, with a new fisheye-based visualization technique called EdgeRadar. Our initial results show that users can track off-screen moving objects more accurately with EdgeRadar than with Halo. This work presents a preliminary but promising step toward the design of visualization techniques for tracking off-screen moving targets.
Article
There are many interface schemes that allow users to work at, and move between, focused and contextual views of a dataset. We review and categorize these schemes according to the interface mechanisms used to separate and blend views. The four approaches are overview+detail, which uses a spatial separation between focused and contextual views; zooming, which uses a temporal separation; focus+context, which minimizes the seam between views by displaying the focus within the context; and cue-based techniques which selectively highlight or suppress items within the information space. Critical features of these categories, and empirical evidence of their success, are discussed. The aim is to provide a succinct summary of the state-of-the-art, to illuminate both successful and unsuccessful interface strategies, and to identify potentially fruitful areas for further work.