Designing Exocentric Pedestrian
Navigation for AR Head Mounted
Displays
Tram Thi Minh Tran
University of Sydney
Sydney, Australia
ttra6156@uni.sydney.edu.au
Callum Parker
University of Sydney
Sydney, Australia
callum.parker@sydney.edu.au
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
CHI ’20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA.
© 2020 Copyright is held by the author/owner(s).
ACM ISBN 978-1-4503-6819-3/20/04.
http://dx.doi.org/10.1145/3334480.3382868
Abstract
Augmented reality (AR) is increasingly being used for navigation in urban environments, allowing users to see
instructions in their physical environment. However, view-
ing this information through a smartphone’s screen is not
ideal, as it can cause users to become inattentive to their
surroundings. AR head-mounted displays (HMD) have the
potential to overcome this issue by integrating navigational
information into the user’s field of view (FOV). While work
has explored the design of turn-by-turn egocentric AR nav-
igation interfaces, little work has explored the design of ex-
ocentric interfaces, which provide the user with an overview
of their desired route. In response to this, we examined the
impact of three different exocentric AR map displays on
pedestrian navigation performance and user experience.
Our work highlights pedestrian safety concerns and pro-
vides design implications for future AR HMD pedestrian
navigation interfaces.
Author Keywords
Wayfinding; Pedestrian navigation; Exocentric navigation;
Maps; Head mounted display; Augmented reality; Virtual
reality
CCS Concepts
Human-centered computing → Human-computer interaction (HCI);
Introduction
With an ideal technical configuration for running AR [7], a
community of more than five billion users [29] and a grow-
ing number of open-source frameworks, AR-enabled smart-
phones are more than just an interim technology. They
have been widely used as an apparatus for AR pedestrian
navigation applications [6]. However, due to traffic density in
urban areas, pedestrians need to be attentive to their sur-
roundings. Engagement with handheld devices may expose
them to higher risks of road accidents [15]. Therefore, the
handheld-AR view is intended to be used only while station-
ary, as suggested in the instructions for the Google Maps
Live View AR feature [3]. Furthermore, walking around
while viewing an augmented real-world through a smart-
phone’s camera feed may cause social embarrassment and
could make other people feel uncomfortable [6, 8].
In contrast, AR HMDs integrate synthetic imagery into the
user’s FOV, and as a result allow them to retain awareness
of the environment [5]. However, there are still many techni-
cal hurdles to be overcome before AR HMDs can progress
from novelty to ubiquity, such as registration accuracy,
human-like FOV, smaller form factor, and a semantic un-
derstanding of the real world [5, 18]. For those reasons, the
design of navigation interfaces for AR HMDs is relatively
under-explored compared to that of handheld AR.
From a review of literature, we found that research efforts in
AR HMD pedestrian navigation have utilised the AR view to
provide users with egocentric turn-by-turn guidance [23,
4]. The directional cues embedded in the real world reduce the abstraction of the conventional map and help users quickly locate themselves in unfamiliar settings [26].
However, to be effective, a navigation system also needs to provide users with an exocentric spatial layout of the environment, such as a map, because egocentric and exocentric spatial information address different navigation tasks and are useful at different times [30]. Much of the previous work [21, 11, 1] on exocentric navigation interfaces has projected an interface at a distance from the user's eyes in an attempt to maximise the overlay FOV and ensure readability [24]. However, this might divide the user's attention away from the real world and cause inattentional blindness.
In addressing problems related to pedestrians' awareness of their surroundings, we could consider a map overlaid on the street, offering users both local guidance and a bird's-eye view of the environment, as seen in one of the prototypes of Google Live View [12]. Alternatively, a map could take the form of a hand-based interface [17], which is similar to the activity of reading a physical paper map and therefore offers a familiar experience. Moreover, the hand movement could encourage users to reference the map safely on demand, since we hypothesise that pedestrians would not walk while holding an empty hand out in front of them in a seemingly unnatural pose.
To understand the implications of such interfaces on pedes-
trian navigation and user experience, we trialled three dif-
ferent AR map displays with 18 participants. As AR is still
a developing technology, we overcame the technical limita-
tions of current AR HMDs by conducting the study in a VR
environment. This method of VR testing is becoming com-
mon practice for navigation [28, 27] and AR [16] studies, as
it allows researchers and designers to overcome technical
limitations and rapidly test their designs in a controlled en-
vironment. Performance data and user feedback collected
from participants during the study were interpreted along
with observations to provide implications for the design of
future AR HMD navigation interfaces.
VR Prototype
The VR prototype was created using the Unity Game En-
gine. To simulate the urban environment, we constructed a
city out of low-poly 3D models downloaded from the Unity
Asset Store [2]. The virtual environment had all the nec-
essary infrastructure of roads, buildings, and greenery. It
also featured pedestrian and car traffic through the Unity
Navigation System. Idle and animated urban dwellers were
placed around the city along with ambient city sounds (such
as people talking and car horns). Altogether, their presence
was to imitate critical aspects of a living city.
Figure 1: AR navigation displays:
(A) Map Up Front; (B) Map On
Street; and (C) Map On Hand
Interface Design
For reading ergonomics, the map’s design was a simplified
version of the city’s orthographic view. Visual emphasis
was given to the road network and location markers (e.g.
bus stops, car parks, and helicopter pads). A dotted line
represented the travel route for each scenario. The map
resembled a futuristic translucent interface that allowed the
users to see through it. The fidelity of the map decreased at
runtime because of the trade-off between render quality and
performance; therefore, we slightly increased its size.
In terms of map position, the Map Up Front was displayed
vertically in front of the user (Figure 1A). The Map On Street
formed a 12° angle with the street (Figure 1B). Both of them
were placed inside the user’s immersive line of sight but
at a comfortable distance to prevent eye strain. The Map
On Hand was anchored to the right-hand controller, which
means it was only visible when users moved their right
hand into their FOV (Figure 1C).
The current location of the user (“You Are Here”) was rep-
resented as a black arrow, and the facing direction was in-
dicated by the pointer of the arrow. In this study, one of the
map settings was automatic first-person alignment, in which
the map would automatically rotate to align to the user’s for-
ward direction. As pointed out by Levine [22], this alignment facilitates perspective transformation and therefore reduces the user's cognitive workload in matching map information with the real world.
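The forward-up alignment amounts to rotating map content by the user's compass heading so that the travel direction always points up. As a simple two-dimensional illustration (our own sketch, not the prototype's implementation):

```python
import math

def align_map(points, heading_deg):
    """Rotate map points (x = east, y = north, user at the origin)
    counter-clockwise by the compass heading so that the user's forward
    direction always points "up" on the map."""
    theta = math.radians(heading_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t - y * sin_t, x * sin_t + y * cos_t) for x, y in points]
```

With this transform, a landmark directly ahead of the user always appears directly above the "You Are Here" arrow, regardless of which way the user is facing.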
Several studies suggest that the map should be shown
upon request to reduce distraction and improve naviga-
tion performance [11, 21]. Therefore, in addition to the map
view, we developed a minimal arrow view to provide users
with travel directions.
Interaction Design
In our study, we used an Oculus Quest1 for its compact
form factor and high performance. Participants could switch
between the map view and the arrow view by pressing a
button on the right-hand controller. Virtual navigation was a
combination of the first-person perspective and the use of
joysticks to walk and turn. Despite its ease of use, this loco-
motion scheme can cause sensory conflicts which may lead
to simulator sickness [14]. The first participant in our pilot study could not finish one interface trial. To minimise discomfort, we reduced vection by slowing down the user's movement speed and keeping it constant throughout the experiment. We also asked participants to wear acupressure wrist bands2 to help mitigate potentially uncomfortable sensations caused by motion sickness [20]. These were worn five to ten minutes before the start of the first trial. We also instructed participants to sit down rather than stand while using the VR prototype.
Study Design
This study was carried out following the ethical approval
granted by the University of Sydney (ID 2018/125).
1Oculus Quest - https://www.oculus.com/quest
2Sea-Band - https://www.sea-band.com
Preliminary Study
When the prototype was fully functional, a pilot study was
conducted with 5 participants. The feedback we received
from the preliminary study further informed the design of
the virtual environment. At first, our virtual urban environ-
ment contained just the essential infrastructure such as
roads, buildings, and greenery, but it lacked people, traffic,
and urban noises. As a result, some participants exhibited
careless navigation behaviours, such as unlawfully crossing roads or walking on the street instead of the footpath.
Therefore, the virtual environment was updated to include
pedestrians, cars, and ambient noise to make the urban en-
vironment feel populated and to help deter pedestrians from
taking shortcuts.
Experimental Tasks
Informed by the work of Reichenbacher [25] on elementary
spatial user actions, we designed two tasks: (1) Navigating
to a destination and (2) Searching for a particular place.
The first task aimed to test aspects of Orientation, “Which
direction am I heading towards?”, Localisation “Where am
I?”, and Navigation “How do I get there?”. The second task
aimed to test the aspect of Search “Where is the nearest
coffee shop?”. To reduce learning effects, in every interface
trial the participant navigated to different destinations and
looked for different places (Table 1). The three travel routes
were designed to have the same distance and the same
number of turning points and street crossings (Figure 2).
Figure 2: Route design
Procedure
The study took up to 75 minutes in total. Trialling each map interface took about three to five minutes. To minimise ordering effects, we randomised the order of interfaces using a balanced Latin square. Participants were instructed to wear Sea-Bands and to sit on a chair throughout the experiment. At the beginning of the study, participants learned to use the Oculus Quest controllers to navigate and had a short VR practice session. Before every interface trial, participants received verbal and written instructions for the first task. They were also made aware of a second task, for which instructions would be given from the application interface. After every trial, participants filled out two standard questionnaires and a general feedback form. Upon completing all the trials, participants were interviewed about their experiences.

Map Display    Navigate to...    Search for...
(A) Up Front   Nails Salon       Bus Stop
(B) On Street  The Hotel         Helicopter Pad
(C) On Hand    Petrol Station    Car Park

Table 1: Experimental tasks used for each map display
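The balanced Latin square used to counterbalance interface order can be produced with the standard construction; the following Python sketch is our own illustration, not part of the study's materials:

```python
def balanced_latin_square(n):
    """Standard balanced Latin square construction: the first row
    interleaves 0, 1, n-1, 2, n-2, ...; each subsequent row shifts every
    entry by one. For odd n, the reversed rows are appended as well so
    that first-order carry-over effects are balanced."""
    first, lo, hi = [0], 1, n - 1
    take_low = True
    while len(first) < n:
        if take_low:
            first.append(lo)
            lo += 1
        else:
            first.append(hi)
            hi -= 1
        take_low = not take_low
    rows = [[(c + i) % n for c in first] for i in range(n)]
    if n % 2 == 1:
        rows += [list(reversed(r)) for r in rows]
    return rows
```

For three interfaces this yields six orders, so assigning 18 participants gives each order to exactly three participants.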
Measurement
To measure how efficiently the participants used the pro-
totype to complete the tasks, we recorded performance
data including Successful completion, Completion time,
and Errors. For task 1, there was an additional metric of
Map usage time (total time the map was activated during
the journey). The data were extracted from Android De-
bug Bridge logs, screen recordings of the VR scenes, and
video recordings of the study. For overall evaluation of the
prototype, we employed the System Usability Scale [9] and
NASA Task Load Index [13]. Lastly, to gain insights into
user behaviour, we analysed qualitative responses includ-
ing the verbal comments given during the trial, the written
responses in the General Feedback section (post-trial), and
the concluding semi-structured interview (post-study).
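For reference, the SUS [9] used in the overall evaluation converts ten five-point ratings into a 0-100 score; a small Python helper (our own illustration) makes the standard scoring arithmetic explicit:

```python
def sus_score(responses):
    """System Usability Scale scoring: ten items rated 1-5.
    Odd-numbered items contribute (rating - 1), even-numbered items
    contribute (5 - rating); the sum is multiplied by 2.5 to yield a
    score on a 0-100 scale."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i = 0 is item 1 (odd)
    return total * 2.5
```

For example, uniformly neutral ratings (all 3s) yield the midpoint score of 50.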
User Study
There were 18 participants (9 female, 9 male) aged 18 to 35. All participants except one indicated that they had little to no exposure to VR technology before the study. All of them were familiar with digital navigation applications such as Google Maps and had a minimum education level of a high school diploma.
Results
Some participants experienced motion sickness in the first
interface trial, but the discomfort decreased in the subse-
quent trials. All participants completed the tasks success-
fully without any errors. We performed one-way repeated
measures analysis of variance (ANOVA) for all other met-
rics to determine whether different map displays elicited
any statistically significant differences (Table 3). Only Loca-
tion Search time was followed up with a post hoc analysis
with a Bonferroni adjustment to find exactly where the differ-
ences occurred. In post-study interviews, nine participants
expressed their preferences for the prototype of Map On
Street, five participants favoured the Map Up Front, and four
participants chose the Map On Hand (Figure 3).
Figure 3: User preferences for
map displays (N = 18)
Figure 4: Mean location search time (confidence level: 95%)

       Mean Diff.   Sig.
A-B    -20.056      .027
A-C    -24.556      .001

Table 2: Post hoc analysis with a Bonferroni adjustment: (A) Map Up Front, (B) Map On Street, (C) Map On Hand (confidence level: 95%)
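The one-way repeated-measures ANOVA reported in the results partitions the total variance into condition, subject, and error components; with 18 participants and three displays, the degrees of freedom are 2 and 34, matching the reported F(2, 34). The following Python sketch (our own illustration, not the study's analysis script) shows the computation:

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.
    data[s][c] is the measurement for subject s under condition c.
    Returns (F, df_conditions, df_error)."""
    n = len(data)        # number of subjects
    k = len(data[0])     # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(data[s][c] for s in range(n)) / n for c in range(k)]
    subj_means = [sum(row) / k for row in data]
    # Partition the total sum of squares into condition, subject, error.
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_err)
    return f_stat, df_cond, df_err
```

A Bonferroni adjustment then simply multiplies each pairwise p-value by the number of comparisons (three here), capped at 1.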
Map-based Navigation
There was a strong user preference for the Map On Street, even though there were no significant differences in system usability ratings or perceived workload among the three displays (Table 3). The majority of participants liked how this map was positioned in a way that did not obstruct the view (P12: "It was not covering anything that I would want to see") and how they could have it open all the time (P18: "I can see the map without much effort"). The main reason for this preference could be that information displayed in the lower visual field typically does not interfere with the main view.
view. Several projection-based AR studies supporting nav-
igation with an arrow overlaid on the pathway found that
Measures F(2, 34) p-value
Navigation time 0.987 .383
Map usage 3.324 .066
SUS score 2.582 .090
NASA TLX score 1.831 .176
Location search time 6.917 .003
Table 3: Summary of quantitative data analysis (Since the p-value
is heavily dependent on the sample size, in the results we also
reported F value and effect size)
it led to a significantly higher ability of users to observe
real-world points of interest [10, 19]. Nevertheless, some
participants in our study found the map diverting their at-
tention as they had to look down physically. Additionally, the
relatively large map and the amount of spatial information
shown might hinder users from seeing obstacles such as
street potholes.
The Map Up Front, when used en route, drew participants' attention away from the surroundings and obstructed the front view. Though the difference was not statistically significant, it was used less than the other two maps. Many participants pointed out that the size and position of the map made it very distracting (P7: "I couldn't avoid looking at it"), obtrusive (P5: "The map was too big, I couldn't see what exactly is happening in front of me") and inconvenient (P7: "I had to stop or keep walking in an open area"). The semi-transparency helped to some degree; nonetheless, the map still prevented participants from giving adequate attention to the road ahead (P12: "Even though it is transparent, I did not really look past the map"). Only one participant (P1) commented that this upfront display was what he had expected to see.
Regarding the Map On Hand, the main issue experienced by the majority of users was unfamiliarity with this type of display (P6: "I did not know straight away that I could use the controller to bring it up"). Conversely, other participants enjoyed having control over the position of the map and found it safer when the map did not enter their field of vision uninvited (P2: "It did not get into my view until I wanted to look at it"). The satisfaction of these participants suggests that flexible placement of the interface in three-dimensional space is worth exploring, to best accommodate the user's personal preferences, the demands of the surroundings, and the tasks at hand.
Location Searching
We found the Map Up Front ideal for stationary use, as it took participants significantly less time to find a location with it (Figure 4) than with the Map On Street (p = .027) or the Map On Hand (p = .001) (Table 2). The result for the Map On Street could be explained by participants needing to view the display from an oblique angle. As for the Map On Hand, its relatively unfamiliar position seemed to affect participants' ability to browse places of interest.
Acknowledgement
We would like to thank Sydney
Informatics Hub, The University
of Sydney TechLab and the
participants of our study.
Limitations
The travel routes were relatively simple and may not reflect the complexity of real city environments. The long study duration may also have caused participants to experience some fatigue, affecting their performance. Despite this, the majority of participants reported a high level of immersion and exhibited normal navigation behaviours, as they would in real life. However, VR is not a complete replacement for real-world experimental testing, as there is a limit to how closely reality can be mimicked. Two participants compared the prototype to a video game because of the animated graphics of buildings and objects. Therefore, as AR continues to advance, real-world field studies are essential to validate the results.
Implications
This section presents three design implications of the study.
(1) Situational map displays: Different spatial tasks re-
quire different map displays. For tasks that involve reading
or searching for information on a map, an upfront display
has higher readability and will increase user performance.
Meanwhile, if the task is to navigate, displaying spatial infor-
mation on the street may be a better option as it alleviates
the problems of distraction.
(2) Designing for safety: Retaining the pedestrian's situation awareness of the surroundings should be a criterion when evaluating different design choices. Apart from oncoming traffic, obstacles in the street, such as litter, debris, and potholes, are also particular hazards to avoid. Thus, future work should consider this when displaying spatial information in the lower part of a user's visual field.
(3) Avoid over-simplifying the interface: The directional arrow view was designed to reduce users' cognitive load. However, the lack of information may lead users to experience anxiety (e.g. not knowing the upcoming turns or the estimated travel time). Therefore, it is important to find the right balance of information on screen.
Conclusion and Future Work
In this study, we simulated an AR HMD pedestrian naviga-
tion interface in a fully immersive virtual environment. The
system explored the impact of different map displays on
navigation performance and user experience. The research
offers design recommendations for AR HMD pedestrian
navigation interfaces and adds to existing knowledge of us-
ing VR to understand pedestrian behaviours. The results
suggest that an important future direction is towards the
design of adaptive AR navigational interfaces that change
based on individual needs, situation, and capabilities.
REFERENCES
[1] 2013. Directions - Google Glass. (2013). https:
//support.google.com/glass/answer/3086042?hl=en
[2] 2018. 1Texture City. (2018).
https://assetstore.unity.com/packages/3d/
environments/urban/1texture-city-112740
[3] 2019. Use Live View on Google Maps. (2019).
https://support.google.com/maps/answer/9332056?
co=GENIE.Platform
[4] Eswar Anandapadmanaban, Jesslyn Tannady,
Johannes Norheim, Dava Newman, and Jeff Hoffman.
2018. Holo-SEXTANT: an augmented reality planetary
EVA navigation interface. 48th International
Conference on Environmental Systems.
[5] Ronald T. Azuma. 2019. The road to ubiquitous
consumer augmented reality systems. Human
Behavior and Emerging Technologies 1, 1 (jan 2019),
26–32. DOI:http://dx.doi.org/10.1002/hbe2.113
[6] Gaurav Bhorkar. 2017. A survey of augmented reality
navigation. arXiv preprint arXiv:1708.05006 (2017).
[7] Mark Billinghurst, Huidong Bai, Gun Lee, and Robert
Lindeman. 2014. Developing handheld augmented
reality interfaces. In The Oxford Handbook of Virtuality.
[8] Mark Billinghurst, Adrian Clark, Gun Lee, and others.
2015. A survey of augmented reality. Foundations and
Trends® in Human–Computer Interaction 8, 2-3
(2015), 73–272.
[9] John Brooke. 1996. SUS - A quick and dirty usability
scale. Usability evaluation in industry 189, 194 (1996),
4–7.
[10] Jaewoo Chung, Francesco Pagnini, and Ellen Langer.
2016. Mindful navigation for pedestrians: Improving
engagement with augmented reality. Technology in
Society 45 (may 2016), 29–33. DOI:
http://dx.doi.org/10.1016/j.techsoc.2016.02.006
[11] Brian F. Goldiez, Ali M. Ahmad, and Peter A. Hancock.
2007. Effects of augmented reality display settings on
human wayfinding performance. IEEE Transactions on
Systems, Man and Cybernetics Part C: Applications
and Reviews 37, 5 (sep 2007), 839–845. DOI:
http://dx.doi.org/10.1109/TSMCC.2007.900665
[12] Google. 2019. Developing the First AR Experience for
Google Maps (Google I/O’19) . (2019).
https://www.youtube.com/watch?v=14wedZy90Tw
[13] Sandra G. Hart and Lowell E. Staveland. 1988.
Development of NASA-TLX (Task Load Index): Results
of Empirical and Theoretical Research. Advances in
Psychology 52, C (jan 1988), 139–183. DOI:
http://dx.doi.org/10.1016/S0166-4115(08)62386-9
[14] PA Howarth and M Finch. 1999. The nauseogenicity of
two methods of navigating within a virtual environment.
Applied Ergonomics 30, 1 (1999), 39–45.
[15] Ira E. Hyman, S. Matthew Boss, Breanne M. Wise,
Kira E. McKenzie, and Jenna M. Caggiano. 2010. Did
you see the unicycling clown? Inattentional blindness
while walking and talking on a cell phone. Applied
Cognitive Psychology 24, 5 (jul 2010), 597–607. DOI:
http://dx.doi.org/10.1002/acp.1638
[16] Richie Jose, Gun A Lee, and Mark Billinghurst. 2016.
A comparative study of simulated augmented reality
displays for vehicle navigation. In Proceedings of the
28th Australian conference on computer-human
interaction. ACM, 40–48.
[17] Zach Kinstner. 2015. Hovercast VR Menu: Power at
your fingertips. (2015). http://blog.leapmotion.com/
hovercast-vr-menu-power-fingertips/
[18] Kiyoshi Kiyokawa. 2015. Head-mounted display
technologies for augmented reality. Fundamentals of
Wearable Computing and Augmented Reality (2015),
59–84.
[19] Pascal Knierim, Steffen Maurer, Katrin Wolf, and
Markus Funk. 2018. Quadcopter-projected in-situ
navigation cues for improved location awareness. In
Conference on Human Factors in Computing Systems
- Proceedings, Vol. 2018-April. Association for
Computing Machinery. DOI:
http://dx.doi.org/10.1145/3173574.3174007
[20] Eun Jin Lee and Susan K Frazier. 2011. The efficacy
of acupressure for symptom management: a
systematic review. Journal of pain and symptom
management 42, 4 (2011), 589–603.
[21] J. Lehikoinen and R. Suomela. 2002. WalkMap:
Developing an augmented reality map application for
wearable computers. Virtual Reality 6, 1 (2002),
33–44. DOI:http://dx.doi.org/10.1007/BF01408567
[22] Marvin Levine. 1982. You-are-here maps:
Psychological considerations. Environment and
behavior 14, 2 (1982), 221–237.
[23] Yuji Makimura, Aya Shiraiwa, Masashi Nishiyama, and
Yoshio Iwai. 2019. Visual effects of turning point and
travel direction for outdoor navigation using
head-mounted display. In International Conference on
Human-Computer Interaction. Springer, 235–246.
[24] Pete Pachal. 2013. 2D Navigation. (2013). https:
//mashable.com/2013/05/08/google-glass-pov/
[25] Tumasch Reichenbacher. 2004. Mobile Cartography.
Ph.D. Dissertation. Technische Universität München.
[26] Tilman Reinhardt. 2019. Google AI Blog: Using global
localization to improve navigation. (2019).
https://ai.googleblog.com/2019/02/
using-global-localization-to-improve.html
[27] R. Shayna Rosenbaum, Hugo Spiers, and
Véronique Bohbot. 2018. Behavioral studies of human
spatial navigation. In Human Spatial Navigation.
Princeton University Press, 23–44. DOI:
http://dx.doi.org/10.23943/9781400890460-003
[28] Sonja Schneider and Klaus Bengler. 2019. Virtually
the same? Analysing pedestrian behaviour by means
of virtual reality. Transportation Research Part F:
Traffic Psychology and Behaviour (2019).
[29] Jan Stryjak and Mayuran Sivakumaran. 2019. GSMA
Intelligence — The Mobile Economy 2019. (2019).
https://www.gsmaintelligence.com/research/2019/
02/the-mobile-economy-2019/731/
[30] Christopher D Wickens, Justin G Hollands, Simon
Banbury, and Raja Parasuraman. 2015. Engineering
psychology and human performance. Psychology
Press.
... While previous research has used these areas for content placement (e.g., [51,56]), a more profound and systematic treatment of the design dimensions considering AR content placement is still lacking. To fill this research gap, we comprehensively investigated AR content placement on the ceiling and the floor in indoor environments via two user studies. ...
... Another smaller group makes use of mobile devices, such as picoprojectors (e.g., [12]) or HMDs [51,56] for content presentation on the ceiling or the floor. For HMDs, it was shown that interfaces which require users to look at the floor for navigation permanently are not optimal with regards to ergonomics [51]. ...
... For HMDs, it was shown that interfaces which require users to look at the floor for navigation permanently are not optimal with regards to ergonomics [51]. However, it was also found a preference for displaying a map location on the floor in front of users than on their hands or as a floating display [56]. Yet, due to the mobile nature of such devices, it is challenging to optimize the augmentation based on the current environment. ...
... Likewise, Liu et al. [60] designed and implemented virtual semantic landmarks in the HoloLens and found AR landmarks can assist spatial knowledge acquisition during navigation in repetitive indoor environments. Researchers have also evaluated preferences for mixed reality navigation instructions in simulated virtual environments [57,87]. ...
Article
Full-text available
Daily travel usually demands navigation on foot across a variety of different application domains, including tasks like search and rescue or commuting. Head-mounted augmented reality (AR) displays provide a preview of future navigation systems on foot, but designing them is still an open problem. In this paper, we look at two choices that such AR systems can make for navigation: 1) whether to denote landmarks with AR cues and 2) how to convey navigation instructions. Specifically, instructions can be given via a head-referenced display (screen-fixed frame of reference) or by giving directions fixed to global positions in the world (world-fixed frame of reference). Given limitations with the tracking stability, field of view, and brightness of most currently available head-mounted AR displays for lengthy routes outdoors, we decided to simulate these conditions in virtual reality. In the current study, participants navigated an urban virtual environment and their spatial knowledge acquisition was assessed. We experimented with whether or not landmarks in the environment were cued, as well as how navigation instructions were displayed (i.e., via screen-fixed or world-fixed directions). We found that the world-fixed frame of reference resulted in better spatial learning when there were no landmarks cued; adding AR landmark cues marginally improved spatial learning in the screen-fixed condition. These benefits in learning were also correlated with participants' reported sense of direction. Our findings have implications for the design of future cognition-driven navigation systems.
... Table 1 shows prototype characteristics and participants' demographics in each case study. It is worth emphasising that these studies were designed to assess their respective AR concept prototypes, whose results were reported in prior publications [50,51]. In this paper, we analysed the qualitative data obtained from both studies together to demonstrate the efficacy of VR in simulating wearable AR experiences in a generalised manner. ...
Article
Full-text available
Augmented reality (AR) has the potential to fundamentally change how people engage with increasingly interactive urban environments. However, many challenges exist in designing and evaluating these new urban AR experiences, such as technical constraints and safety concerns associated with outdoor AR. We contribute to this domain by assessing the use of virtual reality (VR) for simulating wearable urban AR experiences, allowing participants to interact with future AR interfaces in a realistic, safe and controlled setting. This paper describes two wearable urban AR applications (pedestrian navigation and autonomous mobility) simulated in VR. Based on a thematic analysis of interview data collected across the two studies, we find that the VR simulation successfully elicited feedback on the functional benefits of AR concepts and the potential impact of urban contextual factors, such as safety concerns, attentional capacity, and social considerations. At the same time, we highlight the limitations of this approach in terms of assessing the AR interface’s visual quality and providing exhaustive contextual information. The paper concludes with recommendations for simulating wearable urban AR experiences in VR.
... Some research has focused on technology to resolve the challenges in a physical sense, and physiological safety, such as GPS technology, route planning method, and navigation instruction visualization; in contrast, the study of mental satisfaction during navigation is not rich [18]. For example, an increasing number of studies on human-computer interaction (HCI) have started to pay attention to the fundamental usability and preferences of user-interface designs for augmented and mixed-reality navigation [19][20][21][22][23][24]. However, in [6], Cartwright et al. suggested that mobile maps not only need to be efficient but also should be affective and emotionally pleasing to produce a positive user experience. ...
Article
Full-text available
The development of ubiquitous computing technology and the emergence of XR could provide pedestrian navigation with more options for user interfaces and interactions. In this work, we aim to investigate the role of a mixed-reality map interface in urban exploration to enhance pedestrians’ mental satisfaction. We propose a mixed-reality 3D minimap as a part of the navigation interface which pedestrians can refer to and interact with during urban exploration. To further explore the different levels of detail of the map interface, we conducted a user study (n = 28, two groups with two tasks). We designed two exploratory activities as experimental tasks with two map modes (a normal one and a simplified one) to discuss the detailed design of the minimap interface. The results indicated that participants showed a positive attitude toward our method. The simplified map mode could result in a lower perceived workload in both tasks while enhancing performance in specific navigation tasks, such as wayfinding. However, we also found that pedestrians’ preference for the level of detail of the minimap interface is dynamic in navigation. Thus, we suggest discussing the different levels of detail further in specific scenarios. Finally, we also summarize some findings observed during the user study to inspire the study of virtual map interfaces for future mixed-reality navigation in urban exploration across various scenarios.
... Compared to using these physical props, virtual reality (VR) has recently gained in popularity as a way to simulate scenarios to rapidly test prototypes in a safe and controlled, fully-immersive environment [22,50]. VR has also been used across multiple disciplines from gaming, military training, medical procedure simulation, education, and rehabilitation programs [12,31,[58][59][60][61]. ...
Conference Paper
Full-text available
Despite being situated in public spaces, public interactive displays (PIDs) are not always designed to be fully inclusive, particularly for people with vision impairments. Prototyping physical PIDs with accessible features can be time-consuming, and there are ethical and safety barriers to navigate when recruiting appropriate participants for studies. While it is still important to represent this user group in the design process, empathic modelling can also be used to rapidly simulate some challenges people with disability might face when interacting with a system. Traditionally this method has been performed using physical props; however, virtual reality (VR) is a promising way to help amplify it due to its immersive nature. Despite this, its use as a tool for empathic design remains unexplored. Therefore, our work is aimed towards filling this gap through the evaluation of a VR prototype which simulates the experience of a visually impaired person interacting with a public interactive display. This work contributes design considerations for VR simulations to generate empathy towards people with vision impairment.
Article
Full-text available
This study proposes a combined framework of multicriteria decision methods to describe, prioritize, and group the quality attributes related to the user experience of augmented reality applications. The attributes were identified based on studies of high-impact repositories. A hierarchy of the identified attributes was built through the multicriteria decision methods Fuzzy Cognitive Maps and DEMATEL. Additionally, a statistical analysis of clusters was developed to determine the most relevant attributes and apply these results in academic and industrial contexts. The main contribution of this study was the categorization of user-experience quality attributes in augmented reality applications, as well as the grouping proposal. Usability, Satisfaction, Stimulation, Engagement, and Aesthetics were found to be among the most relevant attributes. After carrying out the multivariate analysis, two clusters were found with the largest grouping of attributes, oriented to security, representation, social interaction, aesthetics, ergonomics of the application, and its relationship with the user’s emotions. In conclusion, the combination of the three methods helped to identify the importance of the attributes in training processes. The holistic and detailed vision of the causal, impact, and similarity relationships between the 87 attributes analyzed were also considered. This framework will allow the generation of a baseline for the use of multicriteria methods in research into relevant aspects of Augmented Reality.
Article
Full-text available
Mixed reality technology has been increasingly used for navigation. While most MR-based navigation systems are currently based on hand-held devices, for example, smartphones, head-mounted MR devices have become more and more popular in navigation. Much research has been conducted to investigate the navigation experience in MR. However, it is still unclear how ordinary users react to the first-person view and FOV (field of view)-limited navigation experience, especially in terms of spatial learning. In our study, we investigate how visualization in MR navigation affects spatial learning. More specifically, we test two related hypotheses: incorrect virtual information can lead users into incorrect spatial learning, and the visualization style of direction can influence users’ spatial learning and experience. We designed a user interface in Microsoft HoloLens 2 and conducted a user study with 40 participants. The user study consists of a walking session in which users wear Microsoft HoloLens 2 to navigate to an unknown destination, pre- and post-walking questionnaires, sketch map drawing, and a semi-structured interview about the user interface design. The results provide preliminary confirmation that users’ spatial learning can be misled by incorrect information, even in a small study area, but this misleading effect can be compensated for by considerate visualization, for example, including lines instead of using only arrows as direction indicators. Arrows with or without lines as two visualization alternatives also influenced the user’s spatial learning and evaluation of the designed elements. In addition, the study shows that users’ preferences for navigation interfaces are diverse, and an adaptable interface should be provided. The results contribute to the design of head-mounted MR-based navigation interfaces and the application of MR in navigation in general.
Article
Full-text available
This study proposes a characterization of quality attributes for applications that use augmented reality. This classification is done from the perspective of the user experience. The attribute identification was based on primary studies of the IEEE Xplore, Scopus, and ACM repositories. From an initial set of 1165 papers, 101 documents were selected. The document proposes two categories: objective and subjective. In the objective category 4 subcategories and 40 attributes were found, and in the subjective one 5 subcategories and 54 attributes were found. This is the first time that all these criteria are presented in a single document, which is the input for designing a comprehensive quality assurance tool for user experience in augmented reality.
Article
Full-text available
Mixed reality (MR) is increasingly applied in indoor navigation. With the development of MR devices and indoor navigation algorithms, special attention has been paid to related cognitive issues and many user studies are being conducted. This paper gives an overview of MR technology, devices, and the design of MR-based indoor navigation systems for user studies. We propose a theoretical framework consisting of spatial mapping, spatial localization, path generation, and instruction visualization. We summarize some critical factors to be considered in the design process. Four approaches to constructing an MR-based indoor navigation system under different conditions are introduced and compared. Our gained insight can be used to help researchers select an optimal design approach of MR-based indoor navigation for their user studies.
Chapter
Full-text available
We investigate the visual effects of superimposing turning points and travel directions within the user’s field of view in a navigation system using a subjective assessment procedure. Existing methods were developed without conducting subjective assessments of the effects of superimposing the turning points and travel directions on the user’s display while walking outdoors. We therefore designed a questionnaire-based subjective assessment of the use of these navigation methods. We developed an outdoor navigation system using a recently launched optical see-through head-mounted display (HMD) product that was compact and lightweight. We demonstrated that the subjective scores for understanding of the turning points and the travel directions were significantly increased by the visual effects of superimposing these cues on the display. We confirmed that the HMD helps to increase user likeability of the navigation system while walking outdoors.
Conference Paper
Full-text available
Every day people rely on navigation systems when exploring unknown urban areas. Many navigation systems use multimodal feedback like visual, auditory or tactile cues. Although other systems exist, users mostly rely on a visual navigation using their smartphone. However, a problem with visual navigation systems is that the users have to shift their attention to the navigation system and then map the instructions to the real world. We suggest using in-situ navigation instructions that are presented directly in the environment by augmenting the reality using a projector-quadcopter. Through a user study with 16 participants, we show that using in-situ instructions for navigation leads to a significantly higher ability to observe real-world points of interest. Further, the participants enjoyed following the projected navigation cues.
Article
Thanks to technological advancements, virtual reality has become increasingly flexible and affordable, resulting in a growing number of user studies conducted in virtual environments. Pedestrian simulators, visualizing traffic scenarios from a pedestrian’s perspective, have thereby emerged as a powerful tool in traffic safety research. However, while both the interest in this technology and the concern for vulnerable road users is high, a systematic overview of research employing pedestrian simulators has not been provided so far. The present literature survey is based on 87 studies published during the past decade, investigating pedestrian behaviour by means of virtual traffic scenarios. Results were categorized according to the research question, technical setup, experimental task, and participant sample. Identifying trends and gaps in knowledge and highlighting differences between methodological approaches, this work serves as an assessment of the current state and a baseline from which to develop future research questions. It aims to demonstrate both opportunities and challenges of this relatively new methodology. Thereby, it is hoped to foster the awareness of existing limitations, support the reasonable interpretation of the available data, and guide pedestrian research towards reliable and generalizable insights enhancing pedestrian mobility, comfort, and safety.
Article
The tremendous rise of interest and hype in augmented reality (AR) is justified by its long‐term potential as the technology with the best chance to supplant smartphones, by becoming the dominant platform and interface for accessing digital information. However, we are still far from achieving an ideal AR system that is ubiquitously accepted by consumers and reaches this potential. This paper focuses on the two fastest paths leading to ubiquitous consumer AR systems: (a) offshoots of enterprise AR systems, and (b) systems that initially target niche consumer AR usages and subsequently expand in capabilities. To justify these two paths, this paper describes some history of AR, recent changes due to new developments, and specific obstacles and challenges. Not all challenges are technical. Social acceptance and usages are also crucial. This paper identifies specific characteristics, approaches, and strategies that are most likely to succeed, and compares these to the characteristics and development of previous successful consumer technologies, including smartphones and devices that improve human eyesight in specific situations.
Conference Paper
In this paper we report on a user study in a simulated environment that compares three types of Augmented Reality (AR) displays for assisting with car navigation: Heads Up Display (HUD), Head Mounted Display (HMD) and Heads Down Display (HDD). The virtual cues shown on each interface were the same, but there was a significant difference in driver behaviour and preference between interfaces. Overall, users performed better and preferred the HUD over the HDD, and the HMD was ranked lowest. These results have implications for people wanting to use AR cues for car navigation.
Article
Navigation systems are often followed mindlessly, as users may focus their attention on the device and not on the path. The risk of errors and bias related to mindless adherence to the instructions is high. We suggested that a mindful walking navigation system could reduce the errors and improve the overall exploration experience. Specifically, we tested the hypothesis with an Augmented Reality system, with information directly projected on the path, in comparison with a standard device. Moreover, we tested the hypothesis that when users feel they are in control of their path, both performance and overall experience would improve. We found that both conditions increased the navigation performance and experience, decreased travel time, errors, and confusion, and increased the number of landmarks noticed.