Visualization of Off-Screen Objects in Mobile Augmented Reality
Torben Schinke
University of Oldenburg, Oldenburg, Germany
torben.schinke@uni-oldenburg.de

Niels Henze
University of Oldenburg, Oldenburg, Germany
niels.henze@uni-oldenburg.de

Susanne Boll
University of Oldenburg, Oldenburg, Germany
susanne.boll@uni-oldenburg.de
ABSTRACT
An emerging technology for tourism information systems is mobile Augmented Reality using the position and orientation sensors of recent smartphones. State-of-the-art mobile Augmented Reality applications accompany the Augmented Reality visualization with a small mini-map to provide an overview of nearby points of interest (POIs). In this paper we develop an alternative visualization for nearby POIs based on off-screen visualization techniques for digital maps. The off-screen visualization uses arrows that are directly embedded into the Augmented Reality scene and point at the POIs. In the conducted study 26 participants explored nearby POIs and had to interpret their positions. We show that participants are faster and can interpret the position of POIs more precisely with the developed visualization technique.
Categories and Subject Descriptors
H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities

General Terms
Design, Human Factors, Experimentation

Keywords
Augmented Reality, Mobile Phone, Orientation
1. INTRODUCTION
Despite the worldwide financial crisis, tourism remains one of the largest economic sectors, accounting for about 30% of the worldwide export business [1]. The results of Morrison et al. [8] suggest that digital maps or digital extensions of paper maps are not the perfect companion for tourists; even today the two most important tools for tourists are paper maps and printed tourist guides. Davies et al. [5] show that a system's technical maturity and polish are not the most important aspect; it is the interaction a system provides. Davies' results suggest that tourists either want a list of all sights in their environment ("What's near?") or detailed information about a specific object ("What's that?"). A promising presentation technique to answer these questions is mobile Augmented Reality, which displays information
about points of interest (POIs). Due to recent technical advances, mobile Augmented Reality applications for smartphones using a digital compass and GPS became available to end-users. While the phone's display shows the camera's video, a 3D overlay highlights sights in the physical scene. Applications such as Wikitude, Layar, or Google's Goggles, available for the Android platform, have each been installed several hundred thousand times within a few months.
The augmented scene only provides information about POIs inside the viewport of the camera. It does not provide an overview of so-called off-screen objects, i.e. objects that are outside the viewport because they are beside or behind the user. Current mobile Augmented Reality applications for tourists (e.g. Wikitude or Layar) provide the user with an overview of nearby sights using an additional mini-map, usually centered in the lower half of the display. The augmented environment and the mini-map, however, have different reference systems. Therefore, interpreting the 2D mini-map and aligning it with the augmented environment demands mental effort.
We assume that an off-screen visualization directly embedded into the augmentation reduces this mental effort. In this paper we design a 3D visualization of off-screen objects for mobile augmented reality applications. Our aim is to determine whether mobile augmented reality applications for tourists can be improved by replacing the small mini-map with the developed visualization of nearby objects.
In the remainder of this paper, we present the related work in Section 2. The used visualization techniques are described in Section 3. We present the design of the conducted user study in Section 4. The results are outlined in Section 5, followed by a discussion in Section 6. We close the paper with a conclusion and an outlook on future work in Section 7.
2. RELATED WORK
Providing information about physical objects using mobile devices has received a great share of attention in recent years. Davies et al. [5], for example, studied the difference between two interaction techniques for acquiring information about nearby POIs. They compared an interaction that enables users to take a photo of a sight and receive the according information (the so-called point-and-shoot interaction) with an interactive list of POIs in the surroundings. Davies et al. found that users are happy to use image recognition even when it is a more complex, lengthy, and error-prone process than traditional solutions.
Augmented reality for handheld devices is a similar interaction technique that enables users to aim at a physical object to acquire information about it. Contrary to point-and-shoot, augmented reality provides instant and continuous feedback to the user. Most research on handheld augmented reality has focused on adapting algorithms for mobile devices, which is still an open field (see e.g. [10]). Recently, however, smartphone manufacturers began to include a compass in their phones. Combined with a GPS receiver, augmented reality based on pure sensor data became possible [7].
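To make the sensor-based approach concrete, the following sketch shows one common way to place a POI on screen from GPS and compass readings alone. This is our illustration, not code from [7] or from the paper; all names and the simple linear pinhole mapping are assumptions.

```java
// Minimal sketch (illustrative, not the authors' code): mapping a POI onto
// the screen of a sensor-based mobile AR application using only GPS and
// compass data. Names, coordinates, and the pinhole model are assumptions.
public class SensorArProjection {

    /** Initial bearing from the user to the POI in degrees, clockwise from north. */
    static double bearingDeg(double userLat, double userLon,
                             double poiLat, double poiLon) {
        double dLon = Math.toRadians(poiLon - userLon);
        double lat1 = Math.toRadians(userLat);
        double lat2 = Math.toRadians(poiLat);
        double y = Math.sin(dLon) * Math.cos(lat2);
        double x = Math.cos(lat1) * Math.sin(lat2)
                 - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    /**
     * Horizontal screen position of a POI in pixels, or NaN if it is an
     * off-screen object (outside the camera's horizontal field of view).
     */
    static double screenX(double poiBearingDeg, double compassAzimuthDeg,
                          double horizontalFovDeg, int screenWidthPx) {
        // Signed angle between viewing direction and POI, in (-180, 180].
        double delta = ((poiBearingDeg - compassAzimuthDeg + 540.0) % 360.0) - 180.0;
        if (Math.abs(delta) > horizontalFovDeg / 2.0) return Double.NaN; // off-screen
        // Linear mapping across the viewport (ignores lens distortion).
        return (delta / horizontalFovDeg + 0.5) * screenWidthPx;
    }

    public static void main(String[] args) {
        // Illustrative coordinates only.
        double bearing = bearingDeg(53.143, 8.214, 53.146, 8.219);
        System.out.println("bearing = " + bearing);
        System.out.println("x = " + screenX(bearing, 30.0, 60.0, 800));
    }
}
```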
Besides providing information by augmenting sights, applications for tourists must also provide an overview of the environment. Existing mobile augmented reality applications that are based on GPS and compass data present a small mini-map side-by-side with the augmentation. The mini-map is used to present so-called off-screen objects (i.e. objects that are not visible in the current camera image presented on the screen). Visualizing off-screen objects has received some attention for other use cases. Zellweger et al. [11] introduced City Lights, a principle for visualizing off-screen objects for hypertext. An extension of the City Lights concept to digital maps is Halo [2]. For Halo, circles are drawn around objects located in the virtual off-screen space. Users can interpret the position of a POI by extrapolating the circular arc. Baudisch et al. showed that users achieve better results when using Halo instead of arrows with a labeled distance [2]. Burigat et al. [4] re-examined these results by comparing Halo with different arrow types, e.g. visualizing distance by scaling the arrows. They found that arrow-based visualizations outperform Halo, in particular for complex tasks. Other off-screen visualizations have been developed (e.g. Wedge [6]), but it has not been shown that these outperform existing approaches.
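For readers unfamiliar with Halo, the following sketch illustrates its core geometry as we read it from [2]: the ring around an off-screen object is just large enough to intrude a fixed depth into the display. The names and the fixed intrusion depth are our assumptions, not Baudisch and Rosenholtz's implementation.

```java
// Minimal sketch of the Halo geometry [2] (illustrative, not the original code):
// a ring centered on the off-screen object whose radius makes the visible arc
// reach a fixed depth into the display border.
public class HaloRadius {

    /** Euclidean distance from a point to an axis-aligned screen rectangle. */
    static double distToScreen(double px, double py,
                               double left, double top, double right, double bottom) {
        double dx = Math.max(Math.max(left - px, 0), px - right);
        double dy = Math.max(Math.max(top - py, 0), py - bottom);
        return Math.hypot(dx, dy);
    }

    /** Ring radius so the visible arc reaches 'intrusion' pixels on-screen. */
    static double haloRadius(double px, double py, double intrusion,
                             double left, double top, double right, double bottom) {
        return distToScreen(px, py, left, top, right, bottom) + intrusion;
    }

    public static void main(String[] args) {
        // Object 120 px to the right of an 800x480 screen, 20 px intrusion.
        System.out.println(haloRadius(920, 240, 20, 0, 0, 800, 480)); // 140.0
    }
}
```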
Off-screen visualization techniques have also been applied to virtual environments and augmented reality. Burigat and Chittaro [3] compared 2D and 3D arrows as well as map-like techniques for presenting a single destination in virtual environments. They showed that inexperienced users performed better when using 3D arrows. Tönnis and Klinker [9] developed and evaluated 3D arrows and a map-like visualization technique in an augmented reality application for cars that points out a source of danger. They conclude from their study that 3D arrows hold a significant advantage over the map-like method in terms of reaction times.
3. VISUALIZATION DESIGN
We assume that a 2D map presented beside the augmented scene, as in Figure 1.b, demands effort to interpret. We started with the concept of off-screen objects and transferred it to 3D to embed the visualization into the scene. Arrows that directly point at nearby POIs are arranged around a circle. The centroid of each arrow is located on this circle and thus all arrows lie in the same plane. The center of the circle is moved in front of the user's position so that it is inside the viewport. To reduce occlusion among the arrows, the plane is slightly tilted towards the user. The arrows rotate according to the orientation of the phone, just like a compass with multiple needles. To convey the distance between the viewer and an object, the arrows are scaled in length according to this distance. The scale factor can be adjusted just like the zoom level in digital maps. As shown in Figure 1.c, arrows are not hidden when an off-screen object becomes an on-screen object (i.e. the object is visible inside the camera's video), to avoid confusing the user.
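The following sketch summarizes this layout. It is a minimal reconstruction of the geometry just described; the paper publishes no code, so the parameter names, the tilt model, and the camera-space convention (x right, y up, z forward) are assumptions.

```java
// Minimal sketch (our reconstruction) of the arrow ring: each arrow's
// centroid sits on a circle in front of the viewer, the circle's plane is
// tilted towards the user, arrows rotate with the compass, and arrow length
// encodes distance. All names and constants are assumptions.
public class ArrowRing {

    /** Returns {cx, cy, cz, yawRad, length} for one arrow in camera space. */
    static double[] arrowPose(double poiBearingDeg, double poiDistanceM,
                              double compassAzimuthDeg, double ringRadius,
                              double ringDistance, double tiltDeg,
                              double metersPerUnit) {
        // Arrow direction relative to the current viewing direction.
        double yaw = Math.toRadians(poiBearingDeg - compassAzimuthDeg);
        // Centroid on the ring before tilting (ring center straight ahead).
        double x = ringRadius * Math.sin(yaw);
        double z = ringRadius * Math.cos(yaw);
        // Tilt the ring plane towards the viewer to reduce arrow occlusion.
        double t = Math.toRadians(tiltDeg);
        double cy = -z * Math.sin(t);
        double cz = ringDistance + z * Math.cos(t);
        // Longer arrows for farther POIs; metersPerUnit acts like a zoom level.
        double length = poiDistanceM / metersPerUnit;
        return new double[] { x, cy, cz, yaw, length };
    }

    public static void main(String[] args) {
        // A POI 250 m away at bearing 120 degrees while the phone faces 90.
        double[] pose = arrowPose(120, 250, 90, 0.5, 2.0, 30, 100);
        System.out.printf("centroid=(%.2f, %.2f, %.2f) yaw=%.2f len=%.2f%n",
                pose[0], pose[1], pose[2], pose[3], pose[4]);
    }
}
```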
Figure 1: Conception of the visualizations: a) scene from top, b) mini-map, and c) 3D arrows
To compare the arrow-based visualization with the state of the art, we implemented a mini-map that also rotates with the orientation of the phone. A highlighted cone shows the area of the real world that is visible on the phone's display. To obtain comparable results, the mini-map is centered at the same location and has the same size as the arrow circle. To highlight POIs inside the camera's video we use circles that overlay the objects in the physical scene. If a POI is near the centre of the display, a short description is painted on top of a semi-ellipse connected to the circle by a thin line. The system was implemented for Android smartphones. A screenshot containing a side-by-side comparison of the application's two presentation techniques is shown in Figure 2.
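A rotating mini-map of this kind boils down to projecting each POI's local offset into a map that keeps the viewing direction pointing up. The sketch below is our reconstruction under assumed names; the equirectangular shortcut for local offsets is an assumption that is adequate at city scale.

```java
// Minimal sketch (our reconstruction, not the authors' code) of the rotating
// mini-map used as the control condition: the map turns with the compass so
// that "up" is always the viewing direction; the cone in the paper would
// simply span the camera's field of view around that up direction.
public class MiniMap {

    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** POI position in mini-map pixels relative to the map center. */
    static double[] toMiniMap(double userLat, double userLon,
                              double poiLat, double poiLon,
                              double compassAzimuthDeg,
                              double mapRadiusPx, double rangeM) {
        // Local east/north offsets in meters (equirectangular approximation).
        double north = Math.toRadians(poiLat - userLat) * EARTH_RADIUS_M;
        double east = Math.toRadians(poiLon - userLon) * EARTH_RADIUS_M
                    * Math.cos(Math.toRadians(userLat));
        // Rotate by the compass azimuth so the viewing direction points up.
        double a = Math.toRadians(compassAzimuthDeg);
        double x = east * Math.cos(a) - north * Math.sin(a);
        double y = east * Math.sin(a) + north * Math.cos(a);
        // Scale meters to pixels; map "up" is negative y in screen space.
        double s = mapRadiusPx / rangeM;
        return new double[] { x * s, -y * s };
    }

    public static void main(String[] args) {
        // POI roughly 60 m north-east of the user while the phone faces north.
        double[] p = toMiniMap(53.1430, 8.2140, 53.1434, 8.2146, 0.0, 100, 200);
        System.out.printf("map position: (%.1f, %.1f) px%n", p[0], p[1]);
    }
}
```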
4. USER STUDY
To compare the arrow-based visualization of nearby POIs with mini-maps, we conducted a user study with the system described above. In the experiment participants performed two tasks using the system. Our assumption was that participants are faster with the arrow-based technique because they do not have to mentally align two different reference systems. Thus, we also expected that participants would perceive it as more intuitive. However, we assumed that participants can localize POIs more precisely with the mini-map because of their experience with map usage and because of the more abstract 2D visualization.

Figure 2: Screenshot with both visualizations for illustration purposes. Only one was used at a time in the evaluation.
4.1 Design
The experiment's independent variable was the visualization technique used to present POIs. In the control condition participants used the mini-map, and in the experimental condition they used the arrow-based visualization instead. The study consisted of two tasks. We used a repeated-measures design for both tasks. The tasks were always performed in the same order, but the order of the conditions was counterbalanced to reduce sequence effects.
4.2 Tasks
For the first task the device displayed four virtual POIs randomly distributed around the user. The participants' task was to read the names of the POIs. In order to do that, they had to search for the POIs by turning around on the spot. A POI's name was written at the top of the screen when the POI was located at the centre of the display. The dependent variables were the time needed to read the names of all POIs and a rating of the visualization technique on a six-point scale.
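The name-display rule can be stated compactly: a POI shows its name while its bearing lies within a small angular threshold of the viewing direction. A minimal sketch of this check; the threshold value and all names are our assumptions.

```java
// Minimal sketch of the task-1 name-display rule (our reading of the paper;
// the 5-degree threshold is an illustrative assumption).
public class CenterLabel {

    /** Signed angular difference in degrees, normalized to (-180, 180]. */
    static double angleDiff(double aDeg, double bDeg) {
        return ((aDeg - bDeg + 540.0) % 360.0) - 180.0;
    }

    /** True if the POI is close enough to the display centre to show its name. */
    static boolean showName(double poiBearingDeg, double compassAzimuthDeg,
                            double centerThresholdDeg) {
        return Math.abs(angleDiff(poiBearingDeg, compassAzimuthDeg))
                <= centerThresholdDeg;
    }

    public static void main(String[] args) {
        System.out.println(showName(42.0, 40.0, 5.0));  // true
        System.out.println(showName(130.0, 40.0, 5.0)); // false
    }
}
```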
In the second task the device showed a set of four POIs, all in sight and randomly selected from 12 nearby POIs (e.g. buildings, shops, and a bus station). Participants were asked to turn in a specific direction before starting the task and to memorize the locations of the POIs without turning around. After they finished memorizing, the device was removed and participants had to tell which POIs had been displayed. The dependent variables were the time needed to memorize the POIs and a rating of the visualization technique on a six-point scale. In addition, the number of correctly identified POIs and the differences between the named POIs and the displayed POIs in meters and in angle were measured.
4.3 Procedure
We set up the evaluation booth in a public square in the city center of a medium-sized European city, shown in Figure 3. The study was conducted on a Saturday from 11:00 to 16:00. Two teams of experimenters guided participants through the tasks simultaneously. Passersby were randomly asked to participate in the study. After a person had agreed to participate in the evaluation, the experimenter familiarized the participant with the concept of presenting POIs using Augmented Reality and with the two visualization techniques. After conducting both tasks, participants were interviewed to collect demographic information. In addition, we asked participants to self-assess their experience with smartphones, their navigation skills, and their experience with virtual environments (VE) on a six-point scale.

Figure 3: Evaluation at a public place
4.4 Participants
We conducted the user study with 26 participants, 13 female and 13 male, aged 21-41 (M=22.4, SD=7.2). The subjects were passersby, so most of them were familiar with the local area. All subjects were volunteers, chosen without any selection by age, nationality, or other criteria. None of them were familiar with the used application.
5. RESULTS
After conducting the experiment we collected and analyzed the data. For the first task we could not identify significant results; therefore only the results of the second task are discussed in the following. In addition to the differences between the visualization techniques, significant effects of gender and of stated experience with virtual environments are reported where applicable. Unless otherwise noted, a t-test is used to derive the p-values.
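The paper does not name the exact test variant; given the repeated-measures design, a paired t-test is a plausible reading. A minimal sketch of the paired t statistic with toy numbers (not study data); the p-value would then come from a t-distribution CDF, e.g. via a statistics library.

```java
// Minimal sketch of a paired t-test (an assumption about the variant used;
// the input values below are toy illustrations, not data from the study).
public class PairedTTest {

    /** t statistic for paired samples; degrees of freedom = n - 1. */
    static double pairedT(double[] a, double[] b) {
        int n = a.length;
        double mean = 0;
        for (int i = 0; i < n; i++) mean += (a[i] - b[i]) / n;
        double var = 0;
        for (int i = 0; i < n; i++) {
            double d = (a[i] - b[i]) - mean;
            var += d * d / (n - 1);
        }
        return mean / Math.sqrt(var / n);
    }

    public static void main(String[] args) {
        double[] arrows  = { 3, 2, 2, 3, 1, 2 }; // toy scores per participant
        double[] miniMap = { 2, 1, 2, 1, 2, 1 };
        System.out.println("t = " + pairedT(arrows, miniMap)
                + ", df = " + (arrows.length - 1));
    }
}
```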
Using the arrow visualization, subjects correctly identified significantly (p=0.023) more POIs (M=2.2) than when using the mini-map (M=1.6). In particular, males were significantly better (p=0.045) when using the arrow-based method (M=2.1) in contrast to the mini-map method (M=1.3). Furthermore, participants who rated themselves as experienced with virtual environments were also significantly better (p=0.013) when using the arrow method (M=2.4 compared to M=1.3).
To compare the positions of the displayed POIs with the positions stated by the participants, the respective geo-coordinates were used. From these geo-coordinates the deviation in meters between the positions was calculated. Using arrows, the distance between a POI's correct position and the guessed position was lower (M=18.0m) than using the mini-map (M=23.3m), but the difference is not significant. The difference between the distance from the user to the correct POI and the distance from the user to the guessed POI was also smaller using arrows (M=29.9m) than using the mini-map (M=38.8m). However, this difference was likewise not significant.
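The paper does not state which distance formula was used; at city scales the standard haversine great-circle distance is a reasonable assumption. A minimal sketch with illustrative coordinates follows.

```java
// Minimal sketch of the deviation-in-meters computation (haversine is an
// assumption; the coordinates below are illustrative, not study data).
public class GeoDeviation {

    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle distance in meters between two geo-coordinates. */
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dp = Math.toRadians(lat2 - lat1), dl = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dp / 2) * Math.sin(dp / 2)
                 + Math.cos(p1) * Math.cos(p2) * Math.sin(dl / 2) * Math.sin(dl / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // Deviation between a displayed POI and the position a participant named.
        System.out.println(distanceM(53.1430, 8.2140, 53.1432, 8.2143) + " m");
    }
}
```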
We also calculated the angle between each displayed POI and the guessed POI. The angular deviation was significantly smaller (p=0.027) using arrows (M=12.4°) than using the mini-map (M=20.2°). In particular, females profited from the arrows (M=9.7°) and were significantly better with arrows (p=0.030) than with the mini-map (M=18.2°). Also, subjects who rated themselves as experienced with virtual environments were significantly better (p<0.001) when using the arrows (M=8.5°) instead of the mini-map (M=25.2°).
On average participants were slightly faster using the arrows (M=21.4s versus M=24.1s), but no significant difference was found. The ratings are nearly equal, with M=2.9 for the arrows and M=2.9 for the mini-map, where 1.0 is the best and 6.0 the worst rating.
We found some general tendencies that we, however, could not show to be significant. Using the arrows, subjects who stated to have experience with virtual environments were on average 3.4s slower but correctly identified 0.3 more objects than subjects who stated to have little experience. In contrast, we found the opposite for the mini-map: subjects who stated to have experience with virtual environments were on average 6.1s faster but correctly identified 0.5 fewer objects than subjects who stated to have little experience. Furthermore, females were on average better than males in all measured values for both visualizations (e.g. 4.9° smaller angular deviation and 0.4 more objects correctly identified) but slower (by 3.3s).
We analyzed our data for anomalies in our sample of the population. Overall, females were 3.8 years younger than males (p=0.017). Furthermore, females rated their navigation skills 0.7 points worse than their male counterparts (p<0.0001), and females rated their experience with virtual environments 1.3 points worse than males (p<0.0001). Younger participants rated their own competence worse in the categories familiarity with the environment (p<0.0001, r=-0.391), navigation skills (p<0.0001, r=-0.417), and experience with virtual environments (p=0.021, r=-0.261). An analysis of covariance (ANCOVA) showed that the younger females are not the reason why younger participants rated themselves worse in general, because gender is not a covariate in this correlation.
6. DISCUSSION
No significant difference between the visualization techniques was found for the first task. We assume that the data is affected by noise induced by the participants' lack of training and the inaccuracy of the used phone's built-in compass. For the second task, however, the arrows clearly outperform the mini-map. Participants were more precise with the 3D arrows and faster on average. In particular, the number of correctly identified objects was higher.

The results support our first assumption, that participants are faster with the arrow-based technique because they do not have to align different reference systems. For the same reason we expected that users would perceive the arrows as more intuitive, but surprisingly all ratings were almost equal. One reason might be that the subjects were naive users who were more interested in mobile augmented reality applications in general than in the compared visualization techniques. Our last assumption, that participants localize POIs more precisely with the mini-map, was contradicted. We assume that these results are mainly caused by the improved visualization of directions that the 3D arrows provide. Because of our experimental design we cannot estimate whether the 3D arrows also visualize distances more effectively.
We asked participants to self-assess their competence with virtual environments because we expected that this competence has a direct effect on participants' skills in navigating and orientating in an augmented reality. On average, females always performed better but gave a lower self-assessment of their experience with virtual environments. This implies that some males overrated their own competence.
We were surprised that, in particular, subjects who stated to have experience with virtual environments were supported by the 3D arrows. This contradicts the results of Burigat and Chittaro [3], who showed that presenting a destination in virtual environments using a 3D arrow is especially suitable for inexperienced users. We identified two potential reasons for this contradiction: letting participants rate their own competence in virtual environments might be too imprecise, especially compared to the preselected experts Burigat and Chittaro used in their experiment. Furthermore, it is unclear whether the results of an experiment on stationary virtual environments are applicable to mobile augmented reality.
7. CONCLUSIONS
In this paper we described an off-screen visualization technique for mobile augmented reality applications using embedded arrows that point at nearby sights. A system was developed for Android smartphones to compare the developed off-screen visualization with a state-of-the-art mini-map in an experiment with 26 passersby in a city centre. The study showed that 3D arrows enable users to estimate the position of objects more precisely than a mini-map.

We conclude that existing mobile augmented reality applications could be improved by using the 3D arrows described in this work. In addition, we suppose that our results can be applied to virtual environments in general, so users, e.g. in games, might benefit from multi-targeting arrows.

It remains to be examined how users' performance changes with an increasing number of displayed objects, which causes self-occluding arrows. Furthermore, other variations of off-screen visualizations should be investigated; in particular, the tested approaches could be unified into a 3D mini-map.
8. ACKNOWLEDGMENTS
This work was supported by the European Community within the InterMedia project (project No. 038419).
9. REFERENCES
[1] UNWTO. World Tourism Barometer, volume 7, pages 1-7. UNWTO, 2009.
[2] P. Baudisch and R. Rosenholtz. Halo: a technique for visualizing off-screen objects. In Proc. of CHI, 2003.
[3] S. Burigat and L. Chittaro. Navigation in 3D virtual environments: Effects of user experience and location-pointing navigation aids. International Journal of Human-Computer Studies, volume 65, 2007.
[4] S. Burigat, L. Chittaro, and S. Gabrielli. Visualizing locations of off-screen objects on mobile devices: a comparative evaluation of three approaches. In Proc. of MobileHCI, 2006.
[5] N. Davies, K. Cheverst, A. Dix, and A. Hesse. Understanding the role of image recognition in mobile tour guides. In Proc. of MobileHCI, 2005.
[6] S. Gustafson, P. Baudisch, C. Gutwin, and P. Irani. Wedge: clutter-free visualization of off-screen locations. In Proc. of CHI, 2008.
[7] S. Karpischek, C. Marforio, M. Godenzi, S. Heuel, and F. Michahelles. Mobile augmented reality to identify mountains. In Adjunct Proc. of AmI, 2009.
[8] A. Morrison, A. Oulasvirta, P. Peltonen, S. Lemmela, G. Jacucci, G. Reitmayr, J. Näsänen, and A. Juustila. Like bees around the hive: a comparative study of a mobile augmented reality map. In Proc. of CHI, 2009.
[9] M. Tönnis and G. Klinker. Effective control of a car driver's attention for visual and acoustic guidance towards the direction of imminent dangers. In Proc. of ISMAR, 2006.
[10] D. Wagner, D. Schmalstieg, and H. Bischof. Multiple target detection and tracking with guaranteed framerates on mobile phones. In Proc. of ISMAR, 2009.
[11] P. T. Zellweger, J. D. Mackinlay, L. Good, M. Stefik, and P. Baudisch. City Lights: contextual views in minimal space. In Proc. of CHI, 2003.