Designing Exocentric Pedestrian
Navigation for AR Head Mounted
Displays
Tram Thi Minh Tran
University of Sydney
Sydney, Australia
ttra6156@uni.sydney.edu.au
Callum Parker
University of Sydney
Sydney, Australia
callum.parker@sydney.edu.au
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
CHI ’20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA.
© 2020 Copyright is held by the author/owner(s).
ACM ISBN 978-1-4503-6819-3/20/04.
http://dx.doi.org/10.1145/3334480.3382868
Abstract
Augmented reality (AR) is increasingly being used for navigation in urban environments, allowing users to see instructions overlaid on their physical environment. However, viewing this information through a smartphone's screen is not ideal, as it can cause users to become inattentive to their surroundings. AR head-mounted displays (HMDs) have the potential to overcome this issue by integrating navigational information into the user's field of view (FOV). While prior work has explored the design of turn-by-turn egocentric AR navigation interfaces, little work has explored the design of exocentric interfaces, which provide the user with an overview of their desired route. In response, we examined the impact of three different exocentric AR map displays on pedestrian navigation performance and user experience. Our work highlights pedestrian safety concerns and provides design implications for future AR HMD pedestrian navigation interfaces.
Author Keywords
Wayfinding; Pedestrian navigation; Exocentric navigation; Maps; Head-mounted display; Augmented reality; Virtual reality
CCS Concepts
• Human-centered computing → Human-computer interaction (HCI);
Introduction
With an ideal technical configuration for running AR [7], a community of more than five billion users [29], and a growing number of open-source frameworks, AR-enabled smartphones are more than just an interim technology. They have been widely used as an apparatus for AR pedestrian navigation applications [6]. However, due to traffic density in urban areas, pedestrians need to be attentive to their surroundings, and engagement with handheld devices may expose them to a higher risk of road accidents [15]. For this reason, the handheld AR view is intended to be used only while stationary, as suggested in the instructions for the Google Maps Live View AR feature [3]. Furthermore, walking around while viewing an augmented real world through a smartphone's camera feed may cause social embarrassment and could make other people feel uncomfortable [6, 8].
In contrast, AR HMDs integrate synthetic imagery into the user's FOV and, as a result, allow them to retain awareness of the environment [5]. However, there are still many technical hurdles to overcome before AR HMDs can progress from novelty to ubiquity, such as registration accuracy, human-like FOV, smaller form factors, and a semantic understanding of the real world [5, 18]. For these reasons, the design of navigation interfaces for AR HMDs is relatively under-explored compared to that of handheld AR.
From a review of the literature, we found that research efforts in AR HMD pedestrian navigation have utilised the AR view to provide users with egocentric turn-by-turn guidance [23, 4]. Directional cues embedded in the real world reduce the abstraction of the conventional map and help users quickly locate themselves in unfamiliar settings [26]. However, to be effective, a navigation system also needs to provide users with an exocentric spatial layout of the environment, such as a map, because egocentric and exocentric spatial information address different navigation tasks and both are useful at different times [30]. Much of the previous work on exocentric navigation interfaces [21, 11, 1] has projected an interface at a distance from the user's eyes in an attempt to maximise the overlay FOV and ensure readability [24]. However, this might divide the user's attention away from the real world and lead to inattentional blindness.
To address problems related to pedestrians' awareness of their surroundings, we could consider a map overlaid on the street, offering users both local guidance and a bird's-eye view of the environment, as seen in one of the prototypes of Google Live View [12]. Alternatively, a map could take the form of a hand-based interface [17], which resembles the familiar activity of reading a physical paper map. Moreover, the required hand movement could encourage users to reference the map safely on demand, since we hypothesise that pedestrians would not walk while holding an empty hand in front of them in a seemingly unnatural pose.
To understand the implications of such interfaces for pedestrian navigation and user experience, we trialled three different AR map displays with 18 participants. As AR is still a developing technology, we overcame the technical limitations of current AR HMDs by conducting the study in a VR environment. This method of VR testing is becoming common practice for navigation [28, 27] and AR [16] studies, as it allows researchers and designers to overcome technical limitations and rapidly test their designs in a controlled environment. Performance data and user feedback collected from participants during the study were interpreted alongside observations to provide implications for the design of future AR HMD navigation interfaces.
VR Prototype
The VR prototype was created using the Unity game engine. To simulate the urban environment, we constructed a city out of low-poly 3D models downloaded from the Unity Asset Store [2]. The virtual environment had all the necessary infrastructure of roads, buildings, and greenery. It also featured pedestrian and car traffic driven by the Unity Navigation System. Idle and animated urban dwellers were placed around the city, along with ambient city sounds (such as people talking and car horns). Altogether, these elements were intended to imitate critical aspects of a living city.
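For illustration, ambient pedestrian traffic of this kind can be driven by Unity's navigation mesh; the following C# sketch (class and field names such as AmbientPedestrian and wanderRadius are our own, not the project's actual code) makes an agent wander between random points on the baked NavMesh:

    using UnityEngine;
    using UnityEngine.AI;

    // Illustrative sketch: a pedestrian that wanders between random
    // points on the baked NavMesh to make the city feel populated.
    public class AmbientPedestrian : MonoBehaviour
    {
        public float wanderRadius = 30f;   // how far from the spawn point to roam
        private NavMeshAgent agent;

        void Start()
        {
            agent = GetComponent<NavMeshAgent>(); // prefab must carry a NavMeshAgent
            PickNewDestination();
        }

        void Update()
        {
            // When the current walk is (almost) finished, pick a new goal.
            if (!agent.pathPending && agent.remainingDistance < 0.5f)
                PickNewDestination();
        }

        void PickNewDestination()
        {
            Vector3 random = transform.position + Random.insideUnitSphere * wanderRadius;
            // Snap the random point onto the walkable NavMesh before setting it.
            if (NavMesh.SamplePosition(random, out NavMeshHit hit, 5f, NavMesh.AllAreas))
                agent.SetDestination(hit.position);
        }
    }

Attached to each low-poly character prefab alongside a NavMeshAgent component, a script of this kind is enough to keep the streets feeling populated without scripted routes.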
Figure 1: AR navigation displays:
(A) Map Up Front; (B) Map On
Street; and (C) Map On Hand
Interface Design
For reading ergonomics, the map's design was a simplified version of the city's orthographic view. Visual emphasis was given to the road network and location markers (e.g. bus stops, car parks, and helicopter pads). A dotted line represented the travel route for each scenario. The map resembled a futuristic translucent interface that allowed users to see through it. The fidelity of the map decreased at runtime because of the trade-off between render quality and performance; therefore, we slightly increased its size.
In terms of map position, the Map Up Front was displayed vertically in front of the user (Figure 1A). The Map On Street formed a 12° angle with the street (Figure 1B). Both were placed inside the user's direct line of sight but at a comfortable distance to prevent eye strain. The Map On Hand was anchored to the right-hand controller, meaning it was only visible when users moved their right hand into their FOV (Figure 1C).
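As a rough illustration of these three placements, the Unity sketch below positions a map quad relative to the head or the right-hand controller; all offsets, parenting choices, and names (mapPlane, rightController) are illustrative assumptions rather than the study's actual implementation:

    using UnityEngine;

    // Illustrative placement of the three map displays relative to the
    // user's head (Map Up Front, Map On Street) or controller (Map On Hand).
    public class MapPlacement : MonoBehaviour
    {
        public Transform head;            // e.g. the HMD camera transform
        public Transform rightController; // tracked right-hand controller
        public Transform mapPlane;        // the quad showing the map texture

        public void PlaceUpFront()
        {
            // Vertical plane roughly 2 m in front of the user's eyes.
            mapPlane.SetParent(head, false);
            mapPlane.localPosition = new Vector3(0f, 0f, 2f);
            mapPlane.localRotation = Quaternion.identity;
        }

        public void PlaceOnStreet()
        {
            // Laid ahead of the user near ground level, tilted 12° off the street.
            mapPlane.SetParent(head, false);
            mapPlane.localPosition = new Vector3(0f, -1.3f, 2.5f);
            mapPlane.localRotation = Quaternion.Euler(90f - 12f, 0f, 0f);
        }

        public void PlaceOnHand()
        {
            // Anchored to the right controller, so the map is only visible
            // when the hand is raised into the field of view.
            mapPlane.SetParent(rightController, false);
            mapPlane.localPosition = new Vector3(0f, 0.08f, 0.05f);
            mapPlane.localRotation = Quaternion.Euler(45f, 0f, 0f);
        }
    }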
The current location of the user ("You Are Here") was represented as a black arrow, with the facing direction indicated by the arrow's tip. In this study, one of the map settings was automatic first-person alignment, in which the map would automatically rotate to align with the user's forward direction. As pointed out by Levine [22], this alignment facilitates perspective transformation and therefore reduces the user's cognitive workload in matching map information with the real world.
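This forward-up alignment can be sketched as a per-frame counter-rotation of the map content against the head yaw; the axis and sign below are assumptions that depend on how the map quad and its texture are oriented:

    using UnityEngine;

    // Illustrative "forward-up" map alignment: the map content is rotated
    // each frame so that the user's heading always points to map-up.
    public class MapAlignment : MonoBehaviour
    {
        public Transform head;       // the HMD camera
        public Transform mapContent; // roads and markers, child of the map plane

        void LateUpdate()
        {
            float headYaw = head.eulerAngles.y;
            // Rotate the content around the map plane's normal; the sign
            // depends on the map texture's north convention.
            mapContent.localRotation = Quaternion.AngleAxis(headYaw, Vector3.forward);
        }
    }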
Several studies suggest that the map should be shown
upon request to reduce distraction and improve naviga-
tion performance [11, 21]. Therefore, in addition to the map
view, we developed a minimal arrow view to provide users
with travel directions.
Interaction Design
In our study, we used an Oculus Quest¹ for its compact form factor and high performance. Participants could switch between the map view and the arrow view by pressing a button on the right-hand controller. Virtual navigation combined a first-person perspective with the use of joysticks to walk and turn. Despite its ease of use, this locomotion scheme can cause sensory conflicts that may lead to simulator sickness [14]; the first participant in our pilot study could not finish an interface trial. To minimise discomfort, we reduced vection by slowing the user's travel speed and keeping it constant throughout the experiment. We also asked participants to wear acupressure wristbands² to help mitigate potentially uncomfortable sensations caused by motion sickness [20]. These were worn five to ten minutes before the start of the first trial. We also instructed participants to sit down rather than stand while using the VR prototype.
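The view toggle and the constant-speed joystick locomotion might be implemented roughly as follows; the button mapping, speed value, and reliance on the Oculus Integration's OVRInput API are our assumptions for illustration:

    using UnityEngine;

    // Illustrative input handling: toggle between map and arrow views,
    // and move at a constant, vection-reducing speed via the joystick.
    public class NavigationInput : MonoBehaviour
    {
        public GameObject mapView;
        public GameObject arrowView;
        public CharacterController body;
        public float walkSpeed = 1.0f; // kept constant to limit vection

        void Update()
        {
            // A button on the right-hand controller switches views.
            if (OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch))
            {
                bool showMap = !mapView.activeSelf;
                mapView.SetActive(showMap);
                arrowView.SetActive(!showMap);
            }

            // The thumbstick steers, but the travel speed never scales
            // with deflection: constant speed is what limits vection.
            Vector2 stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);
            Vector3 move = transform.forward * stick.y + transform.right * stick.x;
            body.SimpleMove(move.normalized * walkSpeed);
        }
    }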
Study Design
This study was carried out following the ethical approval
granted by the University of Sydney (ID 2018/125).
¹ Oculus Quest - https://www.oculus.com/quest
² Sea-Band - https://www.sea-band.com
Preliminary Study
Once the prototype was fully functional, a pilot study was conducted with five participants. The feedback we received from this preliminary study further informed the design of the virtual environment. At first, our virtual urban environment contained just the essential infrastructure, such as roads, buildings, and greenery, but it lacked people, traffic, and urban noise. As a result, some participants exhibited careless navigation behaviours, such as unlawfully crossing the roads or walking on the street instead of the footpath. Therefore, the virtual environment was updated to include pedestrians, cars, and ambient noise to make it feel populated and to help deter participants from taking shortcuts.
Experimental Tasks
Informed by the work of Reichenbacher [25] on elementary spatial user actions, we designed two tasks: (1) navigating to a destination and (2) searching for a particular place. The first task aimed to test aspects of Orientation ("Which direction am I heading towards?"), Localisation ("Where am I?"), and Navigation ("How do I get there?"). The second task aimed to test the aspect of Search ("Where is the nearest coffee shop?"). To reduce learning effects, in every interface trial the participant navigated to a different destination and looked for a different place (Table 1). The three travel routes were designed to have the same distance and the same number of turning points and street crossings (Figure 2).
Figure 2: Route design
Procedure
The study took up to 75 minutes in total. Trialling each map interface took about three to five minutes. To minimise ordering effects, we randomised the order of interfaces using a balanced Latin Square. Participants were instructed to wear the Sea-Bands and to remain seated throughout the experiment. At the beginning of the study, participants learned to use the Oculus Quest controllers to navigate and had a short VR practice session. Before every interface trial, participants received verbal and written instructions for the first task. They were also made aware of a second task, for which instructions would be given within the application interface. After every trial, participants filled out two standard questionnaires and a general feedback form. Upon completing all the trials, participants were interviewed about their experiences.

Map Display    Navigate to...    Search for...
(A) Up Front   Nails Salon       Bus Stop
(B) On Street  The Hotel         Helicopter Pad
(C) On Hand    Petrol Station    Car Park

Table 1: Experimental tasks used for each map display
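For three interfaces, a balanced Latin Square yields six orderings, so each ordering was used by exactly three of the 18 participants. A generator for such orderings might look like the sketch below (our own illustration, not the study's tooling):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative generator for balanced Latin Square orderings.
    // For odd n (as with our three interfaces), the standard rows are
    // duplicated in reverse so that carry-over effects stay balanced.
    static class LatinSquare
    {
        public static List<int[]> Orderings(int n)
        {
            var rows = new List<int[]>();
            for (int i = 0; i < n; i++)
            {
                var row = new int[n];
                for (int j = 0; j < n; j++)
                {
                    // Classic zig-zag construction: i, i+1, i-1, i+2, ...
                    int offset = (j % 2 == 1) ? (j + 1) / 2 : -(j / 2);
                    row[j] = ((i + offset) % n + n) % n;
                }
                rows.Add(row);
            }
            if (n % 2 == 1) // odd n: append the reversed rows as well
                rows.AddRange(rows.Select(r => r.Reverse().ToArray()).ToList());
            return rows;
        }
    }

With n = 3 this produces the six orderings (0,1,2), (1,2,0), (2,0,1), (2,1,0), (0,2,1), (1,0,2): every condition appears in each position, and follows every other condition, equally often.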
Measurement
To measure how efficiently participants used the prototype to complete the tasks, we recorded performance data including Successful completion, Completion time, and Errors. For Task 1, there was an additional metric of Map usage time (the total time the map was activated during the journey). The data were extracted from Android Debug Bridge logs, screen recordings of the VR scenes, and video recordings of the study. For an overall evaluation of the prototype, we employed the System Usability Scale [9] and the NASA Task Load Index [13]. Lastly, to gain insights into user behaviour, we analysed qualitative responses, including verbal comments given during the trials, written responses in the General Feedback section (post-trial), and the concluding semi-structured interview (post-study).
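Because Unity's Debug.Log output on an Android-based headset such as the Quest is visible through adb logcat, a metric like Map usage time could be instrumented directly in the prototype; a minimal sketch (the log tag and format are our own assumptions):

    using UnityEngine;

    // Illustrative instrumentation: accumulate how long the map view is
    // active and emit it to the Android log, retrievable via adb logcat.
    public class MapUsageLogger : MonoBehaviour
    {
        public GameObject mapView;
        private float mapUsageSeconds;

        void Update()
        {
            if (mapView.activeSelf)
                mapUsageSeconds += Time.deltaTime;
        }

        public void LogTrialEnd(string trialId)
        {
            // Unity routes Debug.Log to logcat on Android builds.
            Debug.Log($"[NAVSTUDY] trial={trialId} mapUsage={mapUsageSeconds:F1}s");
        }
    }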
User Study
There were 18 participants (9 female, 9 male) aged 18 to 35. All participants except one indicated that they had little to no exposure to VR technology before the study. All were familiar with digital navigation applications such as Google Maps and had a minimum education level of a high school diploma.
Results
Some participants experienced motion sickness in the first interface trial, but the discomfort decreased in the subsequent trials. All participants completed the tasks successfully without any errors. We performed a one-way repeated measures analysis of variance (ANOVA) for all other metrics to determine whether the different map displays elicited any statistically significant differences (Table 3). Only Location search time was followed up with a post hoc analysis with a Bonferroni adjustment to find exactly where the differences occurred. In post-study interviews, nine participants expressed a preference for the Map On Street, five participants favoured the Map Up Front, and four participants chose the Map On Hand (Figure 3).

Measures               F(2, 34)   p-value
Navigation time        0.987      .383
Map usage              3.324      .066
SUS score              2.582      .090
NASA TLX score         1.831      .176
Location search time   6.917      .003

Table 3: Summary of quantitative data analysis (since the p-value is heavily dependent on the sample size, we also report the F value and effect size)
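For reference, the one-way repeated measures ANOVA behind Table 3 reduces to a few sums of squares; the sketch below (an illustration, not the analysis software used in the study) reproduces the reported degrees of freedom, F(2, 34), for k = 3 displays and n = 18 participants:

    using System;
    using System.Linq;

    // Illustrative one-way repeated measures ANOVA, matching the study's
    // design: 18 subjects, each measured under 3 map displays.
    static class RmAnova
    {
        // data[s][c]: measurement for subject s under condition c.
        public static (double F, int df1, int df2) Run(double[][] data)
        {
            int n = data.Length;       // subjects
            int k = data[0].Length;    // conditions
            double grand = data.SelectMany(r => r).Average();

            // Variance attributable to the conditions (map displays).
            double ssCond = n * Enumerable.Range(0, k)
                .Sum(c => Math.Pow(data.Average(r => r[c]) - grand, 2));
            // Variance attributable to stable subject differences.
            double ssSubj = k * data.Sum(r => Math.Pow(r.Average() - grand, 2));
            double ssTotal = data.SelectMany(r => r)
                .Sum(x => Math.Pow(x - grand, 2));
            double ssError = ssTotal - ssCond - ssSubj;

            int df1 = k - 1;
            int df2 = (k - 1) * (n - 1);
            double F = (ssCond / df1) / (ssError / df2);
            return (F, df1, df2);
        }
    }

With k = 3 and n = 18, df1 = 2 and df2 = 34, matching the F(2, 34) column of Table 3.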
Figure 3: User preferences for map displays (N = 18)

Figure 4: Mean location search time per map display (confidence level: 95%)
Pair   Mean Diff   Sig.
A-B    -20.056     .027
A-C    -24.556     .001

Table 2: Post hoc analysis with a Bonferroni adjustment: (A) Map Up Front, (B) Map On Street, (C) Map On Hand (confidence level: 95%)
Map-based Navigation
There was a strong user preference for the Map On Street, even though there were no significant differences in system usability ratings or perceived workload among the three displays (Table 3). The majority of participants liked how this map was positioned in a way that did not obstruct the view (P12: "It was not covering anything that I would want to see") and how they could have it open all the time (P18: "I can see the map without much effort"). The main reason for this preference could be that information displayed in the lower visual field typically does not interfere with the main view. Several projection-based AR studies supporting navigation with an arrow overlaid on the pathway found that it led to a significantly higher ability of users to observe real-world points of interest [10, 19]. Nevertheless, some participants in our study found that the map diverted their attention, as they had to physically look down. Additionally, the relatively large map and the amount of spatial information shown might hinder users from seeing obstacles such as street potholes.
The Map Up Front, when used en route, drew participants' attention away from the surroundings and obstructed the front view. Its usage, though the difference was not statistically significant, was lower than that of the other two maps. Many participants pointed out that the size and position of the map made it very distracting (P7: "I couldn't avoid looking at it"), obtrusive (P5: "The map was too big I couldn't see what exactly is happening in front of me"), and inconvenient (P7: "I had to stop or keep walking in an open area"). The semi-transparency was helpful to some degree; nonetheless, the map still prevented participants from giving adequate attention to the road ahead (P12: "Even though it is transparent, I did not really look past the map"). Only one participant (P1) commented that this upfront display was what he had expected to see.
Regarding the Map On Hand, the main issue experienced by the majority of users was unfamiliarity with this type of display (P6: "I did not know straight away that I could use the controller to bring it up"). Conversely, other participants enjoyed having control over the position of the map and found it safer when the map did not get into their field of vision (P2: "It did not get into my view until I wanted to look at it"). The satisfaction of these participants suggests that future work could look into flexible placement of the interface in three-dimensional space to best accommodate the user's personal preferences, the requirements of the surroundings, and the tasks at hand.
Location Searching
We found the Map Up Front ideal for stationary use, as it took participants significantly less time to find a location with it (Figure 4) than with the Map On Street (p = .027) or the Map On Hand (p = .001) (Table 2). The result for the Map On Street could be explained by participants needing to view the display from an oblique angle. As for the Map On Hand, its relatively unfamiliar position seemed to affect participants' ability to browse places of interest.
Acknowledgement
We would like to thank Sydney Informatics Hub, The University of Sydney TechLab, and the participants of our study.
Limitations
The travel routes were relatively simple and may not reflect the complexity of real city environments. The long study time may also have caused participants to experience some fatigue, affecting their performance. Despite this, the majority of participants reported a high level of immersion and exhibited the normal navigation behaviours they would in real life. However, VR is not a complete replacement for real-life experimental testing, as there is a limit to the extent to which reality can be mimicked. Two participants compared the prototype to video games because of the animated graphics of the buildings and objects. Therefore, as AR continues to advance, real-world field studies are essential to validate the results.
Implications
This section presents three design implications of the study.
(1) Situational map displays: Different spatial tasks require different map displays. For tasks that involve reading or searching for information on a map, an upfront display offers higher readability and will increase user performance. Meanwhile, if the task is to navigate, displaying spatial information on the street may be a better option, as it alleviates the problem of distraction.

(2) Designing for safety: Retaining the pedestrian's situation awareness of the surroundings should be a criterion for evaluating different design choices. Apart from oncoming traffic, obstacles in the street, such as litter, debris, and potholes, are also particular hazards to avoid. Thus, future work should consider this when displaying spatial information in the lower part of a user's visual field.

(3) Avoid over-simplifying the interface: The directional arrow view was designed to reduce users' cognitive load. However, the lack of information may leave users anxious (e.g. not knowing the upcoming turns or the estimated travel time). Therefore, it is important to find the right balance of information on screen.
Conclusion and Future Work
In this study, we simulated an AR HMD pedestrian navigation interface in a fully immersive virtual environment. Using this system, we explored the impact of different map displays on navigation performance and user experience. The research offers design recommendations for AR HMD pedestrian navigation interfaces and adds to existing knowledge of using VR to understand pedestrian behaviours. The results suggest that an important future direction is the design of adaptive AR navigation interfaces that change based on individual needs, situations, and capabilities.
REFERENCES
[1] 2013. Directions - Google Glass. (2013). https://support.google.com/glass/answer/3086042?hl=en
[2] 2018. 1Texture City. (2018). https://assetstore.unity.com/packages/3d/environments/urban/1texture-city-112740
[3] 2019. Use Live View on Google Maps. (2019). https://support.google.com/maps/answer/9332056?co=GENIE.Platform
[4] Eswar Anandapadmanaban, Jesslyn Tannady,
Johannes Norheim, Dava Newman, and Jeff Hoffman.
2018. Holo-SEXTANT: an augmented reality planetary
EVA navigation interface. 48th International
Conference on Environmental Systems.
[5] Ronald T. Azuma. 2019. The road to ubiquitous
consumer augmented reality systems. Human
Behavior and Emerging Technologies 1, 1 (jan 2019),
26–32. DOI:http://dx.doi.org/10.1002/hbe2.113
[6] Gaurav Bhorkar. 2017. A survey of augmented reality
navigation. arXiv preprint arXiv:1708.05006 (2017).
[7] Mark Billinghurst, Huidong Bai, Gun Lee, and Robert
Lindeman. 2014. Developing handheld augmented
reality interfaces. In The Oxford Handbook of Virtuality.
[8] Mark Billinghurst, Adrian Clark, Gun Lee, and others.
2015. A survey of augmented reality. Foundations and
Trends® in Human–Computer Interaction 8, 2-3
(2015), 73–272.
[9] John Brooke. 1996. SUS - A quick and dirty usability
scale. Usability evaluation in industry 189, 194 (1996),
4–7.
[10] Jaewoo Chung, Francesco Pagnini, and Ellen Langer.
2016. Mindful navigation for pedestrians: Improving
engagement with augmented reality. Technology in
Society 45 (may 2016), 29–33. DOI:
http://dx.doi.org/10.1016/j.techsoc.2016.02.006
[11] Brian F. Goldiez, Ali M. Ahmad, and Peter A. Hancock.
2007. Effects of augmented reality display settings on
human wayfinding performance. IEEE Transactions on
Systems, Man and Cybernetics Part C: Applications
and Reviews 37, 5 (sep 2007), 839–845. DOI:
http://dx.doi.org/10.1109/TSMCC.2007.900665
[12] Google. 2019. Developing the First AR Experience for Google Maps (Google I/O '19). (2019). https://www.youtube.com/watch?v=14wedZy90Tw
[13] Sandra G. Hart and Lowell E. Staveland. 1988.
Development of NASA-TLX (Task Load Index): Results
of Empirical and Theoretical Research. Advances in
Psychology 52, C (jan 1988), 139–183. DOI:
http://dx.doi.org/10.1016/S0166-4115(08)62386-9
[14] PA Howarth and M Finch. 1999. The nauseogenicity of
two methods of navigating within a virtual environment.
Applied Ergonomics 30, 1 (1999), 39–45.
[15] Ira E. Hyman, S. Matthew Boss, Breanne M. Wise,
Kira E. McKenzie, and Jenna M. Caggiano. 2010. Did
you see the unicycling clown? Inattentional blindness
while walking and talking on a cell phone. Applied
Cognitive Psychology 24, 5 (jul 2010), 597–607. DOI:
http://dx.doi.org/10.1002/acp.1638
[16] Richie Jose, Gun A Lee, and Mark Billinghurst. 2016.
A comparative study of simulated augmented reality
displays for vehicle navigation. In Proceedings of the
28th Australian conference on computer-human
interaction. ACM, 40–48.
[17] Zach Kinstner. 2015. Hovercast VR Menu: Power at your fingertips. (2015). http://blog.leapmotion.com/hovercast-vr-menu-power-fingertips/
[18] Kiyoshi Kiyokawa. 2015. Head-mounted display
technologies for augmented reality. Fundamentals of
Wearable Computing and Augmented Reality (2015),
59–84.
[19] Pascal Knierim, Steffen Maurer, Katrin Wolf, and
Markus Funk. 2018. Quadcopter-projected in-situ
navigation cues for improved location awareness. In
Conference on Human Factors in Computing Systems
- Proceedings, Vol. 2018-April. Association for
Computing Machinery. DOI:
http://dx.doi.org/10.1145/3173574.3174007
[20] Eun Jin Lee and Susan K Frazier. 2011. The efficacy
of acupressure for symptom management: a
systematic review. Journal of pain and symptom
management 42, 4 (2011), 589–603.
[21] J. Lehikoinen and R. Suomela. 2002. WalkMap:
Developing an augmented reality map application for
wearable computers. Virtual Reality 6, 1 (2002),
33–44. DOI:http://dx.doi.org/10.1007/BF01408567
[22] Marvin Levine. 1982. You-are-here maps:
Psychological considerations. Environment and
behavior 14, 2 (1982), 221–237.
[23] Yuji Makimura, Aya Shiraiwa, Masashi Nishiyama, and
Yoshio Iwai. 2019. Visual effects of turning point and
travel direction for outdoor navigation using
head-mounted display. In International Conference on
Human-Computer Interaction. Springer, 235–246.
[24] Pete Pachal. 2013. 2D Navigation. (2013). https://mashable.com/2013/05/08/google-glass-pov/
[25] Tumasch Reichenbacher. 2004. Mobile Cartography.
Ph.D. Dissertation. Technische Universität München.
[26] Tilman Reinhardt. 2019. Google AI Blog: Using global localization to improve navigation. (2019). https://ai.googleblog.com/2019/02/using-global-localization-to-improve.html
[27] R. Shayna Rosenbaum, Hugo Spiers, and Véronique Bohbot. 2018. Behavioral studies of human spatial navigation. In Human Spatial Navigation. Princeton University Press, 23–44. DOI: http://dx.doi.org/10.23943/9781400890460-003
[28] Sonja Schneider and Klaus Bengler. 2019. Virtually
the same? Analysing pedestrian behaviour by means
of virtual reality. Transportation Research Part F:
Traffic Psychology and Behaviour (2019).
[29] Jan Stryjak and Mayuran Sivakumaran. 2019. GSMA Intelligence — The Mobile Economy 2019. (2019). https://www.gsmaintelligence.com/research/2019/02/the-mobile-economy-2019/731/
[30] Christopher D Wickens, Justin G Hollands, Simon
Banbury, and Raja Parasuraman. 2015. Engineering
psychology and human performance. Psychology
Press.