Citation: Tran, T.T.M.; Parker, C.; Hoggenmüller, M.; Hespanhol, L.; Tomitsch, M. Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces. Multimodal Technol. Interact. 2023, 7, 21. https://doi.org/10.3390/mti7020021
Academic Editor: Gerd Bruder
Received: 23 January 2023; Revised: 11 February 2023; Accepted: 14 February 2023; Published: 16 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Multimodal Technologies and Interaction
Article
Simulating Wearable Urban Augmented Reality Experiences in
VR: Lessons Learnt from Designing Two Future
Urban Interfaces
Tram Thi Minh Tran *, Callum Parker, Marius Hoggenmüller, Luke Hespanhol and Martin Tomitsch
Design Lab, Sydney School of Architecture, Design and Planning, The University of Sydney,
Sydney, NSW 2006, Australia
* Correspondence: tram.tran@sydney.edu.au
Abstract: Augmented reality (AR) has the potential to fundamentally change how people engage
with increasingly interactive urban environments. However, many challenges exist in designing
and evaluating these new urban AR experiences, such as technical constraints and safety concerns
associated with outdoor AR. We contribute to this domain by assessing the use of virtual reality
(VR) for simulating wearable urban AR experiences, allowing participants to interact with future
AR interfaces in a realistic, safe and controlled setting. This paper describes two wearable urban AR
applications (pedestrian navigation and autonomous mobility) simulated in VR. Based on a thematic
analysis of interview data collected across the two studies, we find that the VR simulation successfully
elicited feedback on the functional benefits of AR concepts and the potential impact of urban contex-
tual factors, such as safety concerns, attentional capacity, and social considerations. At the same time,
we highlight the limitations of this approach in terms of assessing the AR interface’s visual quality
and providing exhaustive contextual information. The paper concludes with recommendations for
simulating wearable urban AR experiences in VR.
Keywords: prototyping; user evaluation; augmented reality; virtual reality; simulations; urban applications
1. Introduction
Augmented reality (AR) is defined as a technology that blends computer-generated
graphics with the real world in three dimensions (3D), enabling real-time interaction with
virtual content [1]. Over the past two decades, AR research and development have gained
significant momentum. AR experiences are now accessible to a broad audience due to the
ubiquity of smartphones equipped with the essential hardware for AR and the avail-
ability of various development frameworks that simplify the creation of AR applications.
The technology has also proven to be compelling to the public. In 2016, the location-based
AR game Pokémon Go became a worldwide phenomenon [2], demonstrating the potential
of AR in general and urban AR more specifically.
Although smartphones have facilitated the widespread adoption of AR, wearable
devices, such as head-mounted displays (HMDs), are envisioned as involving a seamless
integration of virtual imagery within an individual’s field of view (FOV). The displays
naturally augment a user’s sensory perception of reality while allowing them to maintain
an awareness of their real-world surroundings [3]. Nonetheless, only a small number of
AR studies have utilised HMDs in urban applications (e.g., navigation, tourism, and explo-
ration) [4]. Technical challenges in making AR HMDs work outdoors, including ergonomic
issues, insufficient display contrast, and unstable tracking systems [5,6], may have contributed to their under-exploration, compromising designers' ability to prototype and evaluate potential future applications.
In this paper, we investigate virtual reality (VR) as a way to circumvent the challenges
associated with prototyping AR experiences, thus bridging the gap between AR’s current
status quo and its potential application for urban interactions. VR has been shown to be an
effective approach for simulating and evaluating different kinds of interfaces, from smart
home products [7] and public displays [8,9] to yet-to-be-created technologies, such as fully autonomous vehicles (AVs) [10,11] and windscreen heads-up displays (HUDs) [12–14].
When it comes to wearable urban AR concepts, augmentations are typically intended
to relate meaningfully to the actual physical environment (e.g., navigation instructions
overlaid onto the physical road). In these urban application scenarios, a VR simulation
can reduce the complexity of creating conformal AR visualisations that require the precise
alignment of virtual content and ‘real’ physical objects, bypassing the need for world-scale
AR technology [6]. Moreover, many facets of the urban AR user experience are subject to
contextual factors, such as social circumstances, temporal demands, and spatial settings.
By using VR, it is possible to recreate and manipulate these aspects and, to a considerable
extent, evoke realistic emotional [7,15] and behavioural responses [8,16] from users.
These advantages have led to studies utilising VR to simulate wearable urban AR
interactions, allowing participants to experience idealised AR interfaces in an immersive
VR environment (AR-within-VR); for example, those that assist police officers in traffic stop
operations [17] and those that facilitate the interaction between AVs and pedestrians [18].
To date, however, studies have yet to investigate the effectiveness of VR simulations in
prototyping and eliciting relevant feedback about future urban AR experiences. As a
prototyping method, the value of VR simulations is linked to the way they enable designers
to ‘traverse the design space’ and gain meaningful insights into the envisioned design [19].
This use of VR simulations as a medium to ‘understand, explore or communicate what
it might be like to engage with the product, space or system’ [20] is distinct from the
practice of replicating AR systems, where the simulator’s efficacy is demonstrated through
comparable findings to real-world systems [21–23].
We contribute to the emerging domain of wearable urban AR through an analysis
of two case studies: a map-based pedestrian navigation application and an interface
supporting the interactions between pedestrians and AVs in future cities. These case studies
are our own work, allowing for a thorough understanding of the design, prototyping,
and evaluation process. Our analysis was driven by the following research questions (RQs):
1. To what extent can the method of simulating wearable urban AR experiences in VR elicit insights into the functional benefits of AR concepts and the impact of urban contextual factors?
2. What are the limitations of simulating wearable urban AR experiences in VR?
The paper makes two contributions. First, it determines the extent to which VR
simulation helps participants to experience wearable AR in context and provide feedback
on wearable urban AR experiences. Second, it provides a set of recommendations for
simulating wearable urban AR experiences in VR, assisting researchers and practitioners in
optimising their VR simulation.
2. Related Work
2.1. Urban AR Applications
Cities are evolving into complex ecosystems capable of utilising technology and data
to tackle the daily challenges that residents are facing, improving their quality of life and
the community’s long-term resilience [24]. Urban interfaces, such as urban AR applications,
have the potential to bridge the gap between individuals and the technological backbone of
cities [25] and have been explored across various urban domains, such as navigation [26], tourism [27], civic participation [28] and autonomous mobility [29]. Most of them have been
developed for smartphones, taking advantage of their omnipresence, hardware capacities,
and established interaction paradigm. One recent example is Google Maps Live View
(https://blog.google/products/maps/take-your-next-destination-google-maps/, last ac-
cessed 13 February 2023), which utilises the smartphone’s camera to enhance localisation
accuracy and projects directions in the real world. However, considering the urban context,
handheld AR applications present one significant disadvantage: if not carefully designed,
they might disengage users and place them at an increased risk of road accidents [30]. For example, pedestrians tend to use AR navigation interfaces while walking [31]; thus, it is critical to promote safer use (https://support.google.com/maps/answer/9332056,
last accessed 13 February 2023). Therefore, with the ability to seamlessly integrate infor-
mation into the user’s view, wearable AR devices could become an even more attractive
choice for urban applications.
Wearable AR technology is still in its early stages of development and has yet to
attain operational maturity. However, it is essential to explore how we might design for
compelling wearable urban AR experiences and address the potential contextual challenges
associated with urban settings. According to Rauschnabel and Ro [32], functional benefits
are one of the most important drivers of wearable AR acceptance. They are defined as ‘the
degree to which a user believes that using smart glasses enhances his or her performance
in everyday life’ [33]. Meanwhile, Azuma [3] suggested that consumer AR applications are relevant only when the combination of real and virtual content substantially benefits
the overall experience. Therefore, one of the biggest challenges in designing wearable
urban AR experiences is to maximise the perceived usefulness of digital augmentations.
Other challenges concern the urban context in which wearable AR experiences are situ-
ated. The seemingly dense and messy urban environment could potentially interfere with
the experience; for example, the constant flow of people passing by might interrupt the
augmentation [34]. Complex traffic conditions in towns and cities pose various risks to
pedestrians, such as falls, stumbles, and traffic collisions [35]. Therefore, failure of an appli-
cation to maintain the user’s situational awareness might result in physical injuries [36].
Additionally, using AR glasses in public settings entails social acceptance issues for both
users and bystanders [3]. People are conscious about their image when wearing such
glasses [32] and are hesitant to publicly use voice control or mid-air hand gestures [37].
Through a rigorous analysis of user feedback for two representative wearable urban AR
applications, this research determines whether these aspects of concern (i.e., functional benefits, the impact of urban contextual factors) could be evaluated using a simulated environment.
2.2. Simulating Wearable AR Experiences in VR
Prototypes are means for designers to gain valuable knowledge about different aspects
of the final design [19]. Because AR is fundamentally spatial, it is essential to depict 3D
structures and relationships among design elements early in the design process. To this end, AR prototyping methods of varying fidelity incorporate some level of immersion in their artefacts, allowing participants to experience AR, for instance, through a VR HMD or a Cave Automatic Virtual Environment. For rapid design exploration,
sketches on equirectangular grids [38], 360-degree panoramas [39] and videos [40] are
becoming increasingly popular as a relatively lightweight approach to creating immersive
AR interfaces. A number of specialised immersive authoring tools enable AR concept
designs inside a virtual environment [
41
] (e.g., Sketchbox3D (https://design.sketchbox3
d.com/, last accessed 13 February 2023), Microsoft Maquette (https://docs.microsoft.
com/en-us/windows/mixed-reality/design/maquette/, last accessed 13 February 2023),
and Tvori (https://tvori.co/, last accessed 13 February 2023)). These tools aim to lower the
technical barriers, making spatial prototyping more accessible to creators. In the case of 3D
game engine platforms such as Unity (https://unity.com/, last accessed 13 February 2023),
the tools demand increased technical expertise, posing considerable challenges for non-
programmers. However, they allow for more sophisticated AR interfaces while offering
context manipulation, paving the way for context-aware or pervasive AR research [42].
This method—using high-fidelity VR simulation to prototype and evaluate wearable urban
AR experiences—is, therefore, the focus of our paper. We discuss its advantages in greater
detail below:
Simulation of AR Systems: A high-fidelity VR system features a powerful processor
enabling low-latency positional tracking and a discrete graphics processing unit supporting
high-resolution image rendering. Using high-end VR hardware and a software frame-
work that allows independent manipulation of immersion parameters (e.g., FOV, latency,
and stereoscopy), researchers can simulate different AR systems (including those yet to
be developed) and investigate the effects of the isolated parameters in controlled stud-
ies [21]. Beyond AR system replication, VR hardware allows prototyping of immersive
cross-reality systems [43] and future AR interfaces with a larger augmentable area and perfect registration of in-situ ‘AR’ graphics [17]. When used in conjunction with external
input devices, VR also enables the prototyping of 3D interaction concepts [44]. Furthermore,
a simulated AR prototype can be rapidly iterated for numerous rounds of evaluation prior
to deployment [43,45,46].
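As a concrete illustration of the FOV manipulation mentioned above, the following minimal Unity C# sketch (our own assumption of one plausible implementation, not a system from the cited works) hides ‘AR’ holograms that fall outside a configurable view cone, approximating the narrow FOV of current AR HMDs inside a VR scene:

```csharp
using UnityEngine;

// Minimal sketch (assumption): disable the renderers of an "AR" hologram
// whenever it falls outside a configurable angular field of view,
// approximating the limited FOV of an AR HMD inside a VR simulation.
public class SimulatedArFov : MonoBehaviour
{
    public Transform head;             // the VR camera, e.g., Camera.main.transform
    public float halfFovDegrees = 20f; // half-angle of the simulated AR display

    private Renderer[] renderers;

    void Awake()
    {
        renderers = GetComponentsInChildren<Renderer>();
        if (head == null && Camera.main != null) head = Camera.main.transform;
    }

    void LateUpdate()
    {
        // Show the hologram only while it lies within the simulated AR FOV.
        Vector3 toObject = (transform.position - head.position).normalized;
        bool visible = Vector3.Angle(head.forward, toObject) <= halfFovDegrees;
        foreach (var r in renderers) r.enabled = visible;
    }
}
```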
Simulation of Contextual Factors: VR simulation provides control over various environ-
mental factors, such as weather conditions and lighting. For example, Gabbard et al. [47] were able to reproduce the lighting conditions at night, dawn, and dusk in an attempt to
investigate the effect of outdoor illuminance values on an AR text identification task. More
significantly, the simulation approach allows for assessing AR systems in a broad range of
dynamic situations that would be markedly hazardous, costly, or impossible to produce in a
real setting. For example, VR simulation has been used to mitigate the risks associated with
industrial maintenance [45], firefighting [17,46], law enforcement [17], and AV–pedestrian interactions [18]. Regarding context-aware AR interfaces designed to respond to contextual
changes (e.g., information widgets change as a user goes from their home office to the
kitchen), the virtual surroundings and even the accuracy in predicting user intent could be
effectively simulated in VR [48].
These benefits of VR simulation are of substantial relevance to the development of
wearable urban AR applications, particularly in their ability to replicate an urban set-
ting and its numerous contextual factors. According to Hassenzahl and Tractinsky [49],
a good user experience promotes positive emotions and is concerned with the interactive
encounter’s dynamic, temporal and situatedness-related aspects. Therefore, an appropriate
context, although simulated, may help to assess user experiences more holistically. Un-
til recently, however, there has been little knowledge on utilising VR simulation to evaluate
wearable urban AR applications. Thus, an assessment of the method’s efficacy in obtaining
insightful user feedback is lacking, and factors that should be considered when simulating
wearable AR experiences in VR remain unclear. The research described in this paper aims
to address these gaps and enhance the prospects of more wearable urban AR applications
being prototyped and evaluated in context.
3. Case Studies
The following sections describe two case studies involving the design and evaluation
of two wearable urban AR applications, both of which were developed using immersive
VR as a means for prototyping. In the first case, we evaluated an AR pedestrian navi-
gation application featuring different exocentric map displays. In the second case, we
evaluated a wearable AR application assisting crossing pedestrians in autonomous traffic.
Despite sharing the urban context, these applications were conceptualised, prototyped,
and evaluated at different times and independently of each other. Both studies were led
by the first author of this paper. Table 1 shows prototype characteristics and participants’
demographics in each case study. It is worth emphasising that these studies were designed
to assess their respective AR concept prototypes, whose results were reported in prior
publications [50,51]. In this paper, we analysed the qualitative data obtained from both
studies together to demonstrate the efficacy of VR in simulating wearable AR experiences
in a generalised manner.
Table 1. Prototype characteristics and participants’ demographics in each case study.

                                   Pedestrian Navigation      AV Interaction
Number of Conditions               3                          4
VR Exposure per Condition          3–5 min                    1–1.5 min
Movement                           Joystick-based             Real walking
Interaction                        Controller                 Hand gesture
AR Content                         Maps, turn arrow           Text prompt, crossing cues
Number of Participants (m/f)       18 (9/9)                   24 (9/15)
Previous VR Experience
  Never                            2                          8
  Less than 5 times                15                         13
  More than 5 times                1                          3
Study Location                     Australia                  Vietnam
3.1. Pedestrian Navigation
3.1.1. Design Context
Urban wayfinding has always been a challenge due to the dense network of streets
and buildings. In recent years, a substantial amount of work has examined how AR could
improve guidance by superimposing directional cues on the real world [26]. However, few
studies have utilised AR HMDs despite hands-free devices potentially providing a more
seamless experience. The narrow FOV of existing AR HMDs also restricts the amount of
information that can be displayed and searched through [52], contributing to the relatively
sparse research on map-based navigation compared to turn-by-turn directions [53,54]. It is
expected that a wider FOV would enable improved freedom in displaying high-level spatial
information to users. Thus, determining the influence of topographic map placement on
user experience and navigation performance is of relevance.
In this case study, we examined three distinct map positions, namely (1) at a distance
in front of the users, (2) on the surface of the street, and (3) on the user’s hand (Figure 1).
Except for the on-hand map, which was anchored to the right hand and visible only
when the user brought it up, the other two maps followed the user’s visual field once
activated. These different map positions were inspired by previous works on AR HMD
maps in the industry (e.g., Google Glass [55]) and academic research [56,57]. Following the recommendation by Goldiez et al. [56], we offered users the flexibility to access the
maps on demand to avoid obstructing views and gain insights into their map usage. The VR
simulation allowed users to experience an idealised interface that was not bound by existing
AR constraints (e.g., limited FOV, insufficient brightness and contrast, unstable large-scale
tracking, and lack of GPS sensors) in a setting similar to the real world. Concerns such as
pedestrian safety and having control over the experimental setting were also major factors
in the decision to use VR.
Figure 1. The AR application to support pedestrian navigation featured a 3D directional arrow to display turn directions when the map was not in use (as shown in the far left image). For the AR map view, we investigated three different map positions: up-front map (A), on-street map (B) and on-hand map (C).
3.1.2. Prototype Development
The prototype was created using the Unity game engine and experienced with an
Oculus Quest 1 VR headset. The selected VR system offered inside-out tracking, a display
resolution of 1440 × 1600 pixels per eye and built-in audio [58]. We designed a virtual city
(Figure 2) using pre-made 3D models from the Unity Asset Store (https://assetstore.unity.
com/, last accessed 13 February 2023). In order to create a lively urban atmosphere for the
city, we simulated road traffic and various activities along navigation routes (e.g., people
waiting at bus stops or police pursuing criminals). Additionally, ambient street sounds
were added to the virtual scenes and configured to support spatial sound perception.
Figure 2. The simulated environment of the navigation study (left) and a participant using controllers for movement and interactions (right).
To create the map interface, we captured the orthographic view of the virtual city in
Unity. The image was then imported into Adobe Photoshop for further editing. Specifically,
we reduced the amount of information encoded in the map to avoid unnecessary details
and made it semi-transparent. A white dotted line indicated the recommended route,
and a black arrow denoted the user’s current position, with a pointer indicating their
facing direction. In addition to the map, the navigation application incorporated egocentric
turn-by-turn guidance represented by a 3D arrow. While navigating, participants could
switch between the map and arrow views by pressing a button.
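The view switching could be implemented along the following lines; this is a hedged sketch of one plausible implementation (names such as NavigationViewToggle are ours), assuming the Oculus Integration package (OVRInput) used with the Quest controllers:

```csharp
using UnityEngine;

// Illustrative sketch (our assumption, not the study's actual code):
// switch between the holographic map and the 3D turn arrow when the
// user presses a controller button. Assumes the Oculus Integration
// package, which provides OVRInput for Quest controllers.
public class NavigationViewToggle : MonoBehaviour
{
    public GameObject mapView;   // the holographic map
    public GameObject arrowView; // the 3D directional arrow

    void Update()
    {
        // Button.One is the A button on the right Touch controller.
        if (OVRInput.GetDown(OVRInput.Button.One))
        {
            bool mapActive = mapView.activeSelf;
            mapView.SetActive(!mapActive);
            arrowView.SetActive(mapActive);
        }
    }
}
```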
Participants traversed the virtual environment using the controllers’ joysticks. This
locomotion technique was implemented to facilitate long-distance travel without requir-
ing specialised hardware (e.g., an omnidirectional treadmill [59,60]). Compared to other
techniques such as teleportation, joysticks are easier to use [61] and allow participants
sufficient time to observe the surroundings. However, they generally induce more motion
sickness [62], necessitating the use of a constant and slow-moving speed to minimise nau-
sea. Participants were also instructed to remain seated throughout the VR experience as a
safety precaution. Given the limited ecological validity of artificial locomotion, the study examined primarily the user’s visual immersion in the augmented environment rather than their bodily immersion.
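A minimal sketch of joystick-based locomotion at a constant, slow speed (our reconstruction under the assumptions stated in the text; OVRInput is part of the Oculus Integration package):

```csharp
using UnityEngine;

// Minimal sketch (assumption): move the player rig where they look,
// on the horizontal plane, at a constant slow speed to limit nausea.
public class JoystickLocomotion : MonoBehaviour
{
    public Transform head;     // the VR camera transform
    public float speed = 1.4f; // m/s, roughly walking pace

    void Update()
    {
        Vector2 input = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);
        if (input.magnitude < 0.2f) return; // dead zone

        // Project the gaze direction onto the ground plane so the user
        // never moves up or down when looking up or down.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;
        Vector3 direction = (forward * input.y + right * input.x).normalized;

        // Constant speed regardless of stick deflection.
        transform.position += direction * speed * Time.deltaTime;
    }
}
```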
3.1.3. Evaluation Study
Experimental Design. We conducted our user study in a within-subjects design with
three conditions (three map placements) presented in a counterbalanced order. Participants
were asked to navigate to a pre-determined destination (e.g., a petrol station) and to locate
a specific location (e.g., the nearest bus stop).
Participants and Location. We recruited 18 participants from the university campus
and the neighbouring area using the university’s mailing lists, our social media net-
works, and word of mouth. Nine participants self-identified as male and nine as female
(aged 18–34). All participants spoke English fluently and had normal or corrected-to-
normal eyesight. Prior to the study, the majority of participants stated that they had little to
no experience with VR technology. Only one participant had extensive experience with
VR as a result of her enrolment in a VR development course. Participants were all familiar
with digital navigation applications (e.g., Google Maps). The study was carried out in our
lab in Australia.
Study Procedure. After briefing participants about the navigation study, we asked them
to read and sign a consent form. We then assisted participants in wearing the headset
and adjusting the head strap to their comfort. This step was followed by an interactive
tutorial designed to mitigate the novelty effect of VR [63], where participants learnt to
operate the controllers and traverse the virtual city. Following each condition, participants
were asked to complete a set of standardised questionnaires and a general feedback form.
Upon completion of the study, participants were invited to partake in a semi-structured
interview. The experiment lasted for a maximum of 75 min, including the time required for
participants to rest between conditions. Each interface was experienced for approximately
3 to 5 min (similar to other map navigation studies, e.g., [56]).
Data Collection. Along with questionnaire data and HMD-logged data such as task
completion times, we collected qualitative data using the post-trial general feedback form
(probing favourable aspects and areas for improvement) and post-study semi-structured
interviews. The interview aimed to learn about (1) participants’ overall experience,
(2) their
preferred conditions and (3) their opinions about different aspects of the prototype, such as
interaction and information display. Participants were also asked about their VR experience
and whether they suffered from motion sickness.
3.2. Interaction with Autonomous Vehicles
3.2.1. Design Context
Future AVs may be equipped with external human-machine interfaces (eHMIs) to
compensate for the absence of driver cues. It has been shown that the presence of eHMIs makes it easier for pedestrians to grasp an AV’s intent and prevents ambiguous
situations [64]. Most of these external display technologies are integrated into or attached
to the vehicle, utilising its outer surface (e.g., windshields) or nearby surroundings (e.g.,
laser projection). Other possible placements include road infrastructures and wearable
devices [65]. Faced with scalability challenges arising from multi-agent situations [66],
researchers have recently shown an increased interest in wearable AR for its potential in
assisting pedestrians with highly personalised and contextual information [67]. To date,
there have not been any studies examining AR concepts in heavy traffic, and evaluations of
AR have primarily focused on comparing holographic visualisations [18,68,69] rather than
on whether pedestrians would prefer a wearable AR solution.
To address this gap, this case study investigated the extent to which users prefer to
use AR glasses for sending crossing requests to approaching AVs compared to a traditional
pedestrian crossing button. This AR design concept represents an infrastructure-free
method that may become feasible with the advent of the Virtual Traffic Lights system (as detailed in the following patent [70]). Additionally, we examined the impact of three
(2) the street (aggregated) or (3) both—on pedestrians’ perceived cognitive load and trust.
Utilising VR, we were able to construct a complex traffic scenario with many AVs travelling
down a two-way street. In the real world, such a scenario would have been impossible due
to the possibility of causing physical harm to both participants and researchers.
3.2.2. Prototype Development
We employed Oculus Quest 2, a standalone VR system, to allow participants to
freely move around unconstrained by cables and cords. Its hand-tracking feature also
enabled us to prototype hand gestures without the need for additional sensors. The virtual
environment was developed in Unity. We used off-the-shelf 3D models from the Unity Asset
Store to create an urban road with two lanes and two-way traffic
(Figures 3 and 4). Traffic
was a random mix of three vehicle types: a black/orange sports car, a silver sedan and a
white hatchback, all of which travelled at approximately 30 km/h. The scene also included
several lifelike 3D characters obtained from Mixamo (https://www.mixamo.com/, last
accessed 13 February 2023), engaging in various activities such as exercises and talking.
These virtual persons were carefully situated at a distance from participants to avoid
causing distraction [71]. To further enhance participants’ sense of presence in VR, we
included an ambient urban soundscape with bird chirps and street noise. Additionally,
we used sounds designed by Audi (https://www.e-tron-gt.audi/en/e-sound-13626, last
accessed 13 February 2023) for electric vehicles and adjusted the Unity Spatial Blend
parameter to simulate 3D sounds.
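Setting the Spatial Blend parameter can also be done from script; the following minimal sketch (an illustration, not the study’s actual code) configures a Unity AudioSource for fully 3D playback:

```csharp
using UnityEngine;

// Minimal sketch (assumption): configure an AudioSource for 3D
// spatialised playback, as achieved by the Spatial Blend parameter
// described above for the electric-vehicle sounds.
[RequireComponent(typeof(AudioSource))]
public class SpatialisedVehicleSound : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                          // 1 = fully 3D, 0 = 2D
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural distance falloff
        source.loop = true;
        source.Play();
    }
}
```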
Figure 3. A text prompt indicating that the crossing request was successfully received by the AR application (far left). To indicate to pedestrians when it is safe to cross, we compared three different AR visual cues: animated zebra crossing (A), green overlay on cars (B) and a combination of both (C).
Figure 4. The simulated environment used in the AV study (left) and a participant tapping on the headset to send a crossing request (right).
In the simulated environment, participants could see and use their virtual hands to
press the pedestrian button or activate the AR application by tapping on the VR headset
(Figure 4). The latter interaction was made possible by enclosing the headset in an invisible
collision zone that detects any contact with the user’s fingertips. The AR application
featured three different holographic visualisations: a zebra crossing, a green overlay on each
vehicle and a text prompt that informed users of the application’s action and instructed them
to wait. We used a semi-transparent white colour for both the text and loading animation,
aiming to imitate a futuristic light-based interface. The zebra crossing’s appearance and
animation were modelled after Mercedes-Benz F 015’s projected crosswalk (https://youtu.
be/fvtlobbMENo, last accessed 13 February 2023). The car overlay was a duplicated
version of the car frame, with the mesh’s material changed to green neon paint and the
opacity set at 100. The overlay was also enlarged to appear slightly detached from the car.
Along with visual elements, we included audio to accentuate specific events. In particular,
successful activation of the application was signalled by a bell sound, slow chirps were
used to indicate waiting time, and a rapid tick-tock sound meant crossing time. These
audio cues were obtained from the PB/5 pedestrian button because the spoken texts of the
original button were deemed unsuitable as user interface (UI) sounds.
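The headset-tap interaction described above could be reconstructed roughly as follows (an assumed sketch; the "Fingertip" tag is hypothetical, though Oculus hand tracking can expose per-bone capsule colliders that could be tagged this way):

```csharp
using System;
using UnityEngine;

// Illustrative sketch (our assumed reconstruction): an invisible trigger
// collider enclosing the headset fires a crossing request when touched
// by a tracked fingertip. Note that Unity only raises trigger events if
// at least one of the two colliders carries a (kinematic) Rigidbody.
[RequireComponent(typeof(Collider))]
public class HeadsetTapZone : MonoBehaviour
{
    public event Action OnTap;          // subscribed to by the crossing logic
    public float cooldownSeconds = 1f;  // debounce accidental double taps

    private float lastTapTime = -1f;

    void Awake()
    {
        GetComponent<Collider>().isTrigger = true; // invisible, no physics response
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Fingertip")) return;          // hypothetical tag
        if (Time.time - lastTapTime < cooldownSeconds) return;

        lastTapTime = Time.time;
        OnTap?.Invoke(); // e.g., show the text prompt and broadcast to AVs
    }
}
```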
3.2.3. Evaluation Study
Experimental Design. The study was designed as a within-subjects experiment with four
conditions: a baseline (the pedestrian button) and three variations of the AR application
(Figure 3). Each experimental condition began with the participants standing on the
sidewalk. Their task was to use the provided system and physically cross the road.
Participants and Location. A total of 24 participants (aged 18–34) took part in the
study, of which nine self-identified as male and fifteen as female. Participants consisted of
professionals and university students interested in the topic. All participants were recruited
through social media networks and word of mouth. To participate, they were required
to have (corrected to) normal eyesight, normal mobility, good command of English and
to have been living in the city where the study took place for at least one year. Most
participants had little to no previous experience with VR or AR. The study took place at a
shared workspace for technology startups in Vietnam.
Study Procedure. Following a briefing and the signing of a consent form, we invited
participants to stand in the starting position and put on the headset. An instructional
session was designed for participants to familiarise themselves with the immersive virtual
environment, learn how to use hand-based interactions and practice crossing the street.
The physical movement area was approximately 3 by 8 metres. Before each experimental
condition, participants were informed about the technology used (either AR glasses or a
pedestrian button). Following each condition, participants were instructed to remove the
headset and to answer a series of standardised questionnaires. After having completed
all the conditions, participants were asked about their experiences in a semi-structured
interview. The study took about 60 min to complete. Considering previous VR studies
investigating pedestrian behaviour [72], we decided on a short task duration (approximately 1 to 1.5 min) to ensure that participants were sufficiently immersed in the scenario without becoming bored or fatigued from repeated crossings.
Data Collection. In addition to questionnaire data, we obtained qualitative data rele-
vant to the research questions addressed in this paper using post-study semi-structured
interviews. Participants were asked about (1) the overall experience, (2) their preferred
conditions and (3) their opinions about aspects such as interaction, information display
and simulated environment.
4. Data Analysis
The qualitative and reflective analysis presented in this paper seeks to understand how
participants experienced and valued different aspects of the AR prototypes. Furthermore, it
examines participants’ sense of presence in the virtual environment and how it influenced
their emotional and behavioural responses.
Post-study interviews from the two case studies were transcribed by speech-to-text
software and then reviewed and edited by the interviewer. We applied an inductive
thematic analysis method [73] to analyse the data, using digital sticky notes on Miro.
The analysis was performed by two coders with different levels of engagement throughout
the studies. Whereas the first coder designed and performed both studies, the second coder
was not involved in their implementation. The process involved the first coder reading
through all data and selecting a representative subset (4/18 interviews for the navigation
study, 5/24 interviews for the AV study). Both coders worked on the same subset of
interviews independently, followed by a discussion to review differences and to finalise
the code book. The first coder then applied the agreed coding frame to all interviews. Any
new data points discovered during this process were discussed with the second coder. All
identified themes and sub-themes were reviewed by the research team and formed the
basis of the Results section.
5. Results
5.1. Participants’ Feedback on AR Prototypes
In this section, participants from the navigation study are denoted as N (e.g., N18)
and from the AV study as A (e.g., A18). The number of participants from each study is
indicated by navi and av, respectively (e.g., navi = 6, av = 3).
Figure 5 summarises different aspects of the AR prototypes about which the partici-
pants provided feedback. It also illustrates the differences between the two case studies
regarding the type of user feedback and quantities.
Figure 5. Different aspects of the AR prototypes about which the participants provided feedback. In the original figure, the orange bar represents the navigation study and the grey bar the AV study, with bar length indicating the number of participants who provided feedback on a specific aspect. In the list below, counts are given as (navigation study, AV study).

Interactivity
- Affordances (4, 2): cues for participants to recognise possible interactions, e.g., ‘I was not even aware that I can actually use my hand’ (A7).
- Simplicity (2, 1): the simplicity of the interaction, e.g., ‘There needs to be a simple way of toggling on and off’ (N7).
- System feedback (0, 3): the application communicates interaction results, e.g., ‘With that message, the system has actually confirmed, give me the confirmation’ (A6).
- Intervention interfaces (2, 1): the application provides contextual intervention options, e.g., ‘Maybe the glasses can use computer vision to detect the scene and send me a prompt, for example, “Do you want to initiate the crossing?”’ (A1).
- Interaction with content (9, 0): manipulation of AR content, e.g., ‘The one on the hand was interesting because you can zoom it very intuitively’ (N14).
- Social aspects (5, 0): considerations towards other people, e.g., ‘If I do that, keep walking and moving my hand. People will be like “this guy is crazy”’ (N6).

Spatial Positioning
- Central region (13, 0): the placement of AR content in the central visual field, e.g., ‘I was really frightened when the map was covering my vision’ (N12).
- Lower region (15, 0): the placement of AR content in the lower visual field, e.g., ‘Imagine if I use this in real life, I can have the map under my feet and just look down for direction while continuing to look around’ (N3).
- Hands as anchors (6, 0): AR content appearing on participants’ hands, e.g., ‘Moving your hand to see the map wherever you want is practical and more convenient’ (N3).
- Essential information (0, 1): the positioning of critical information, e.g., ‘[The text] should follow you, staying in your peripheral vision’ (A1).

Sound
- Application sounds (4, 0): participants’ feedback on sounds of the AR application, e.g., ‘Maybe a voice to tell me to turn left or turn right’ (N10).

Visualisation
- Recognisability (0, 10): whether participants were aware of the visualisation, e.g., ‘I didn’t notice the green overlay on the car […], I just saw that the vehicles stopped’ (A1).
- Clarity (0, 10): whether participants understood the visualisation, e.g., ‘I do not understand the signal, yeah. I thought it was just [a] random software colour’ (A8).
- Aesthetics (6, 0): how pleasing was the visualisation, e.g., ‘I thought the map was pretty. I just felt like the map was very aesthetic’ (N17).
- Colour (3, 5): the interpretation of colours, e.g., ‘For me, red means stopping, green means moving, […] I didn’t expect to see green lights over the vehicles’ (A6).
- Transparency (7, 0): how transparent was the visualisation, e.g., ‘I would prefer to have it more transparent. To make it easier to notice things around you’ (N5).
- Size (8, 0): how big or small was the visualisation, e.g., ‘The right-hand side map, […] it’s smaller than the other one, it doesn’t annoy me very much’ (N5).
- Animation (1, 1): feedback on moving graphics, e.g., ‘I’m afraid it will make people feel off-balance, those who are sensitive to motions I don’t know, or drunk people’ (A1).

AR Glasses
- Larger ecosystem (0, 4): the different functions and features that the AR glasses might offer, e.g., ‘When you have the AR glasses, you can use it for multiple things and not just crossing the street. And for that, it’s just way better’ (A2).
- Alternative technologies (1, 3): participants suggested alternative technologies that they thought could more or less support the task, e.g., ‘A smartphone is probably the easiest way for me to send the signal maybe’ (A1).
- Necessity (0, 3): the necessity of using AR glasses to accomplish the given task, e.g., ‘Maybe there is a better way to cross the street’ (A8).
- Availability (0, 2): the availability of AR glasses to support daily tasks, e.g., ‘It would be very problematic for me to cross the street if I don’t bring these glasses with me’ (A6).
- Maintenance (0, 2): the responsibility to maintain the glasses, e.g., ‘The AR glasses are a personal device, […] I have to take care of it myself, to ensure it’s working well’ (A4).
- Economic concerns (0, 4): AR glasses were perceived as a costly technology, e.g., ‘People are not going to have enough money for the glasses’ (A2).
- Familiarity (0, 4): AR glasses came across as unfamiliar for participants, e.g., ‘just that the first time you don’t know how it works’ (A20).

Functionality
- Perceived values (9, 6): how the design concept addressed participants’ needs, e.g., ‘It completely fixed the issue of standing and wondering which direction you are facing’ (N12).
- Data privacy (0, 4): the collection, usage, and management of user data, e.g., ‘This kind of system will involve too many types of data, […] how they are going to be used’ (A16).
- Feasibility (0, 7): whether the design concept will work in complex real-world scenarios, e.g., ‘My concern will be the mixed traffic because the glasses can only send the signal to the autonomous vehicles but not the manual ones’ (A21).

User Experience
- Hedonic quality (3, 3): attributes of the application that are non-task oriented, e.g., ‘If it is about the excitement, the one that I want to use, then I will go with the glasses’ (A16).
- Pragmatic quality (18, 11): attributes of the application that address task-related needs, e.g., ‘I can understand the function, how it works, and I can control it’ (A9).

Information
- Information types (7, 23): usefulness of different information, e.g., ‘I like the way the arrow is showing the way’ (N1) or ‘I knew exactly where I was spatially on the map’ (N12).
- Information load (8, 12): the amount of information provided by the application, e.g., ‘If there is, saying, lights here and there, then it’s going to distract my attention’ (A6).
- Additional information (15, 11): other information participants expected to have when using the application, e.g., ‘I would like to be able to see the details of the places around me’ (N1).
5.1.1. AR Glasses
As the majority of our participants had no prior experience interacting with AR glasses,
their opinions regarding the technology and its potential adoption were based solely on
their direct experiences with the prototypes. In our analysis, participants commented
on different factors influencing the adoption of AR glasses, such as their eyewear nature
(navi = 3,
av = 4), the unfamiliarity of AR technology (av = 4) and potentially high cost
(av = 4).
Because AR glasses are individually owned—in contrast to a public solution, such
as the pedestrian button used in one of our studies—a few participants mentioned the
inconvenience of forgetting their glasses at home (av = 2), and another two pointed out that
they would need to bear the responsibility of taking care of the device (av = 2). Of note,
three participants found the device to not be absolutely necessary for the street crossing
task (av = 3) and the AR glasses were only ‘nice to have’ (A6). Alternative technologies, such
as smartphones, were mentioned by several participants (navi = 1, av = 3).
Four participants made reference to the potentially larger ecosystem of applications
available on such AR glasses (av = 4): as a multi-purpose device, the AR glasses appealed
to some (av = 3), but at the same time, there existed concerns about the potential advertise-
ments or disconnection from the physical world. For example, A9 commented, ‘You can
also read news and watch TV, but I am afraid that we lose connection in the real world’. Sharing a
different perspective, A1 believed that AR glasses might help users retain their attention
compared to using mobile devices, ‘I can’t browse social media feeds, since it’s not that great to
do that on the AR glasses’.
5.1.2. Functionality
Prototyping plays a critical role in validating and informing design concepts in the
early phase of product design [
19
]. Our analysis revealed that a number of participants ar-
ticulated precisely which aspects of the design concepts they found most valuable
(navi = 9,
av = 6). In the navigation study, participants commended the blending of spatial directions
into the natural environment, mentioning that it fixed the issue of not knowing which
direction they were facing (navi = 4). Further, they commented that the design concept
would be particularly useful for navigating through unfamiliar places or dense areas with
complex walkways (navi = 5). In the AV study, participants emphasised the convenience
of sending crossing requests to vehicles and being able to safely cross the street at any
location (av = 6). A6, for example, mentioned her negative experiences with impatient
drivers, ‘I’m very concerned about my safety when crossing the street, but now I trust that when I
choose the “activate” mode to send signals to all of the vehicles, they will all stop, and I will have
time to cross the street’. Nevertheless, the perceived usefulness of the application was less
about graphical augmentations. Instead, participants appreciated the flexibility in choosing
their crossing locations and having control over the interaction, both of which, however, smartphones could readily provide.
More than half of the participants in the AV study (av = 13) considered the complex
real-world situations when providing feedback about the design concept. They commented
on the feasibility of sending crossing requests in mixed traffic scenarios (av = 5) in which
‘the traffic is still filled with a lot of manual [vehicles], and motorbikes’ (A2). Two participants
also mentioned local compliance with traffic laws as a factor influencing their trust in the
implementation (av = 2). A1, for example, stated, ‘In the EU, I will trust it, but here in Vietnam,
I doubt it’. Six participants questioned whether misuse or an increase in the number of
crossing requests would negatively impact traffic efficiency (av = 6). Lastly, because the
concept involves communication between AR glasses and vehicles, several participants
were suspicious about the handling of their personal data (av = 4).
5.1.3. User Experience
We discovered various evaluative statements concerning the pragmatic qualities of the
prototypes in both case studies. In regard to positive aspects, participants described the AR
applications as ‘useful’,‘easy to use’,‘convenient’,‘well integrated’ and ‘intuitive’ (navi = 12,
av = 4). Meanwhile, participants’ sense of safety appeared to be a deciding factor in their
preference towards a specific version of the application. Many participants in the navigation
study were particularly cautious about how augmented navigational instructions might
interfere with concurrent activities, such as sight-seeing and paying attention to the streets
(navi = 15). N7, for example, complained about the opaque up-front map, ‘I couldn’t see
where I was going while using the map [...] I had to stop or keep walking in an open area’. As
a result, they preferred a safer design solution for mobile use, one that better maintained
their situational awareness, ‘While I was following the arrow, I was actually observing people
around, [...] I saw a car making a U-turn. There were some policemen running after a woman’
(N1). Whereas safety was perceived as one of the design requirements for the navigation
application, participants considered safety as the most critical aspect when interacting with
AVs. They evaluated different conditions based on their subjective feelings when crossing
the street regarding whether they would feel ‘safe’,‘confident’,‘uneasy’,‘insecure’,‘rushed’
or ‘worried’ (av = 11). Hedonic aspects occurred rarely in the qualitative data, even though
the AR applications were sometimes described as ‘cool’,‘exciting’,‘awesome’ and ‘impressive’
(navi = 3, av = 3).
5.1.4. Information
A large number of participants (navi = 7, av = 23) provided feedback on the usefulness
of different message types. Several statements, interestingly, revealed differences between
the designer’s intention and the user’s interpretation. For example, the zebra crossing was
intended to be a visual cue indicating when the user should cross. However, we found
that participants also relied on the crosswalk to recognise the safe crossing zone and the
boundary where the vehicles would stop (av = 5). In the conditions without the crosswalk,
they felt less confident. One participant, A16, even decided not to cross, ‘I just do not know
where [the cars] will stop, inside or outside the area. They may stop very close to me’. Regarding
the number of visual cues, most participants in the AV study felt assured receiving multiple
indications to cross given the dangerous nature of road traffic (av = 12). For A18, ‘it’s like
double the security’. Meanwhile, two participants were concerned about getting distracted
by multiple visual (A6) and audible (A21) cues.
In the navigation study, the amount of information presented on the map was expected
to be ‘minimal’ (N7) yet sufficient for the task at hand (navi = 8). Further, our analysis
recorded a large number of suggestions for additional information (navi = 15, av = 11),
uncovering what participants thought was missing from the applications. Participants in
the navigation study desired cardinal directions (navi = 5), estimated time or distance to
arrival (navi = 7), information about nearby locations (navi = 4), upcoming turns (navi = 5)
and warnings (navi = 3). In the AV study, the application was requested to display waiting
time (av = 3), crossing time (av = 7), the number of vehicles that had received the crossing
request (av = 5), and notification of task completion (av = 1). It is worth noting that among
the feedback, a variety of proposals for new features were offered, e.g., oriented search,
a navigation approach in which users rely on distal visual cues to orient themselves and
work out the direction to the destination [74]. N1 suggested, ‘This is AR, right? You could
add something to the location of the destination. If I am going to the Four Seasons Hotel, there could
be something in my sight that I can see from a far distance throughout the process’.
5.1.5. Visualisation
There was a wide variety of user feedback pertaining to the visualisation of AR content.
In terms of recognisability, a large number of participants in the AV study failed to notice
the car overlay (av = 10). Meanwhile, we did not observe a similar issue with the zebra
crossing, even though it was also designed to be part of the real world. According to
the user feedback, the problem could be attributed to two factors. First, the participants
did not notice when the overlay appeared due to the sheer number of moving vehicles.
For example, A21 stated, ‘There were so many cars on the road, [...] I had to turn left and right,
and my attention was divided’. Second, they could not distinguish the AR overlay from the
car itself, thinking ‘[it was] just a green car’ (A20). Of note, one participant stated that AR
overlays affected his impression of real-world objects, ‘the vehicles [...] were more futuristic,
probably because of the overlay’ (A1). Regarding clarity or comprehensibility, participants
expressed preferences for short and straightforward instructions (av = 7) combined with
relevant icons (av = 3). In indicating when it is safe to cross, five participants stated that a
text prompt might convey the message more clearly than graphical representations (av = 5).
With respect to the visual appearance of the AR content, the following characteristics
were mentioned: aesthetics (navi = 6), colour (navi = 3, av = 5), transparency (navi = 7),
size (navi = 8), and animation (navi = 1, av = 1). It is worth noting how those visual aspects
might divert users’ attention (navi = 2, av = 1). As N7 expressed, ‘the vertical map is somehow
transparent but still distracts me from the streets’. In addition, moving or rotating overlays had to be used with caution because two participants reported their potential to cause sickness
(navi = 1, av = 1).
5.1.6. Sound
In the navigation study, we found several suggestions to incorporate voice-guided
navigation (navi = 4). In the AV study, although different sounds accompanied user
interactions, there was no feedback related to the auditory aspect. Interestingly, in the
conditions in which crossing was supported through the AR glasses, participants seemed
to take little notice of the sounds. On the other hand, when the same sounds were applied
to interacting with the pedestrian button, five participants felt the pressure to cross the
street as quickly as possible (av = 5). For A1, the sounds felt like ‘an alarm clock’. Meanwhile,
for A16, the sound was similar to ‘a countdown’.
5.1.7. Spatial Positioning
AR content can be placed anywhere in the 3D environment, making it challenging
for designers to choose an optimal position. A1 stated that essential information, such as
crossing indication, should be rendered within the user’s field of vision rather than situated
in the world space. He reasoned that ‘both the zebra crossing and the vehicle’s colour could
only be seen when you look at the street. When you look in a different direction, you won’t see
them anymore’. Meanwhile, placing content that might be used en route for an extended
period (e.g., navigational interface) in the central view was reported to cover real-world
objects (navi = 10) and divert users’ attention (navi = 4). This feedback concerned not
only the map (navi = 8) but also the relatively small directional arrow (navi = 4). N9,
for example, expressed her frustration that ‘the arrow in the middle [of the view] made [her] feel
annoyed’. When content was placed in the lower region (i.e., on the street), despite being
less obstructive (navi = 10), it led to issues such as divided attention (navi = 5), neck strain
(navi = 3), and suboptimal text legibility (navi = 2). Additionally, three participants worried
about not being able to spot obstacles on the ground (navi = 3). Anchoring the map to the
user’s hand was perceived as natural (navi = 6). N10 was reminded of when ‘[he] used
Google Maps’ on his smartphone while walking, and felt that ‘the experience [was] quite real’.
Meanwhile, N14 found it safe when the on-hand map did not hinder his vision, and he only
referred to it when required, explaining that ‘you’re not working like this with your hand. It’s
very unnatural. So you need to put your hand down’. Four participants, however, mentioned
that the on-hand map position was strange and ‘awkward’ (N14) (navi = 4).
5.1.8. Interactivity
Several participants were unsatisfied about the absence of affordances for interaction,
stating that they had received no hints regarding possible engagement with the AR glasses
(navi = 4, av = 2). Three participants reflected on the simplicity of interactions (navi = 2,
av = 1).
A1, for example, mentioned the ease of ‘tapping on the glasses’ as opposed to ‘going
through many menus’. At the same time, the same participant was concerned about the
possibility of accidental interaction. System feedback to interaction requests was perceived
as necessary by several participants (av = 3). For example, A1 assumed, ‘If I don’t see the
message, I suppose I will press it a lot of times’.
A few participants expected the AR glasses to sense the environment and suggest
relevant interventions [75] (navi = 2, av = 1). A4, for example, wished to be notified if
crossing the street at the wrong spot and taking the wrong route. In the navigation study,
the analysis revealed a large amount of feedback relating to interacting with the holographic
map (navi = 9). Participants appreciated having physical control over the positioning of
content (navi = 4). N15 stated, ‘I like that you can control the map; it is like you are literally
holding it’. Five participants mentioned the opportunity to view the content in multiple
ways. In particular, they expected to extract relevant information by zooming (navi = 4) and
filtering (navi = 2). Social aspects of the interaction were mentioned by several participants
(navi = 5). For N6, moving his hands while walking would be considered unusual, ‘People
will be like, “this guy is crazy”’. Meanwhile, N7 was concerned more about how she might
accidentally hurt nearby people, ‘obviously you couldn’t use [the on-hand map] when it’s close
to people because you might hit them with your hand’.
5.2. Participants’ Behaviour in Immersive Virtual Environments
Joystick-based navigation resulted in more motion sickness feedback than real walking
(navi = 12, av = 3); however, the discomfort was reported to be mild and decreased over
time. The VR simulation used for the AV study presented an upgrade in image resolution
(Oculus Quest 2 versus Oculus Quest 1) and visual fidelity (high-poly versus low-poly
models). Interestingly, it received less positive feedback regarding perceived realism than
the navigation study (navi = 8, av = 5). Ambient sounds of people talking (navi = 6),
familiar scenes (av = 2), and various social activities in the background (navi = 2, av = 2)
had a noticeable impact on participants’ sense of ‘feeling real’. For example, N5 felt as though she was
‘walking in a real city’ due to the sounds of people talking, while N7 expressed a fondness
for the ‘similarities to a real-life city’. We further found that the perceived realism of the
experience triggered strong emotional and behavioural responses among the participants;
these reactions were notably evident in the encounters with virtual traffic. The participants
exercised caution around the cars (navi = 1, av = 19) despite being aware of VR as a safe
environment. For example, in the navigation study, N13 stated, ‘I know that those cars might
not harm me because they are not real, but I have the instinct to run away, [...] when the car was
near me I almost jumped’. Meanwhile, participants in the AV study demonstrated typical
pedestrian behaviour, such as hesitating to cross, waiting for vehicles to come to a complete
stop, or rushing across the street. Nevertheless, the simulations could not completely
recreate the various sensory inputs prevalent in the real world, as revealed by participants
in both studies (navi = 3, av = 6). In this regard, A21 believed that there would be ‘more
sensory inputs’ that could influence his crossing decisions in real life.
6. Discussion
In this section, we first review the user feedback obtained and discuss the extent to
which VR simulation elicits insights that are valuable for the design of wearable urban
AR experiences, thereby addressing RQ1. We organise this section around the challenges
for designing wearable urban AR applications: (1) functional benefits and (2) impact of
urban contextual factors, as detailed in the Related Work section. Next, we address RQ2 by
discussing the limitations of VR simulation. Finally, we present a set of recommendations
for simulating wearable urban AR experiences in VR. The discussion concludes with the
acknowledgement of the study’s limitations.
6.1. VR Simulation Efficacy in Evaluating Wearable Urban AR Applications (RQ1)
Undoubtedly, simulations were unable to reproduce the physical characteristics of the
AR glasses (e.g., size, weight, and aesthetics); any feedback related to these characteristics
was directed towards the VR headset. However, one can postulate that, to a certain extent, a VR HMD could give participants the impression of wearing smartglasses, thus helping them to better envision how AR technology could become part of their daily lives. For instance, AR
glasses were not appealing to some participants in the AV study, but the reason was less
about their physical form factors, as previously discovered [76], and more about the fact that
participants were obliged to wear glasses. This concern was especially prevalent among
people who underwent medical procedures to correct vision problems (e.g., astigmatism,
nearsightedness, and farsightedness). Notably, this feedback emerged only in the AV case study, implying that the acceptability of wearing AR glasses might depend on the
envisioned frequency of use. Street crossing, in particular, is a daily activity, whereas
travelling to a new location, requiring extensive use of navigational support, is a once-in-a-
while occurrence. Qualitative results further revealed that participants—when assessing
the wearable AR applications—differentiated between the perceived functional benefits
that the application would provide and the technology involved. In other words, some
participants found the functionalities beneficial, but at the same time, questioned whether
AR glasses were the most appropriate technology for the job at hand. The application of
AR glasses had to be sufficiently compelling to be used on a day-to-day basis and, to a
significant extent, regarded as superior when compared to alternative technologies.
The post-study interviews showed that participants looked beyond the experienced scenarios, relating them to many real-world situations. For instance, participants in the AV study
were interested not only in how the design concept aided their crossing but also in its
effect on traffic efficiency. Interestingly, they voiced valid concerns about the concept’s
viability in mixed traffic conditions and data privacy, sharing similar viewpoints with
scientific experts [67]. These observations suggest that VR simulation could inspire a sense
of an envisioned end-user experience, prompting participants to consider concepts for
AR glasses more holistically, including aspects related to hardware, perceived functional
benefits and the larger context. In other words, VR enabled the participants to immerse themselves in the speculative future of AR glasses and their applications [77]. This benefit is espe-
cially significant given that one of the challenges industry practitioners face in prototyping
extended reality is bringing users into a world they have never experienced before [78].
Evaluating the Impact of Urban Contextual Factors
Based on a review and clustering of the results relating to contextual factors, we
determined the extent of their impact on user perception of AR applications.
(1) Safety concerns: The simulated context heightened the participants’ awareness about
their safety, prompting them to take note of the safety aspects of the design concepts (as
in the navigation study) and their feeling of safety (as in the AV study). For instance,
most participants in the navigation study expected to use the map interface while
walking and, therefore, disfavoured a design that might obscure the real world or
distract them from their surroundings. As a result, researchers can utilise VR to
experiment with different spatial arrangements of AR content or even give users the
option to customise the placement in ways that best fit the situational context.
(2) Attentional capacity: User statements suggested that the sheer quantity of visual distractions in the urban setting (e.g., two-way traffic) makes users more likely to overlook conformal AR content, particularly elements positioned at the periphery (e.g., car overlays). This divided attention exemplifies how VR prototyping may aid in
the discovery of usability issues that might arise during in-situ deployments. Further-
more, simulated environmental distractions help determine whether the AR content
might exceed the users’ capabilities and impose excessive cognitive demand in a specific situation.
For example, the potential information overload in AV–pedestrian communication
was examined in a multi-vehicle scenario, and the obtained feedback revealed the
relevance of each information cue. Notably, much of the feedback was influenced by
the temporal and spatial awareness of the virtual environment. For example, many
participants expressed their fear of not knowing how close the vehicles would stop from
them, with one participant even hesitating to cross the road when the zebra crossing
appeared before the AVs had come to a complete stop.
(3) Social considerations: Participants were considerate of the potential adverse effects
of their AR usage on others; for example, one participant in the AV study wanted
to ensure that traffic movement would resume after he had crossed the street (A21).
However, it should be noted that the number of social considerations related to inter-
action techniques was relatively small. This result could be attributed to the simple
interaction in the AV study and the fact that only interactions with the on-hand map in
the navigation study involved hand movement.
6.2. VR Simulation Limitations (RQ2)
Using the AR-within-VR method, designers and researchers can explore a number of
design dimensions. However, as with other manifestations, it has some inherent limitations
that should be considered. We believe it is essential to view VR simulation as a method
with its own advantages and disadvantages and not as a replacement for high-fidelity AR
prototypes and field research. In this section, we discuss the feedback that could not be
anticipated when simulating wearable urban AR experiences in VR.
Examining visual fidelity: Regarding AR visualisations, the prototypes conveyed a
rather sophisticated appearance, inducing user feedback on numerous visual proper-
ties. However, the validity of the findings might be called into doubt when qualities
such as colour, transparency, and text legibility are concerned. For example, while
feedback about how the map was not transparent enough and the place-mark icons
were hard to read aided in identifying usability issues, these observed issues may
have been partly caused by the VR system (e.g., display resolution of the headset
used). In addition, there may be issues that will not manifest under simulated con-
ditions because the influencing factors, such as outdoor lighting [47] and the visual complexity of background textures [79], cannot be realistically recreated. Further, an interview study with industry practitioners reported that the targeted display technology could also tamper with the colours of an AR application [78], necessitating
the testing of colour variations via actual deployment on the target device. For these
reasons, a simulated AR prototype may not be the most suited manifestation to ex-
amine the fine-grained visual qualities of AR interfaces and related usability issues
(e.g., distinguishing colours [71]).
Producing exhaustive contextual feedback: Although the VR simulation provided in-
context feedback on wearable urban AR applications, it is important to note that the
method should only be used as a first step to narrow down potential concepts and does not replace field experiments for a number of reasons. First, prototyping and
evaluating wearable AR applications in VR is a highly controlled activity by its very
nature. This means that not only the application and the intended use environment
are simulated, but also the instructions and tasks, resulting in a rough approximation
of how people actually use a product. In this regard, in-the-wild AR studies can be a
more effective approach to understanding how an AR application is used in authentic
everyday situations; for example, the participants in the study by Lu and Bowman [80]
used a working prototype in their own contexts for at least 2.5 h unsupervised. Second,
it is nearly impossible to replicate the messiness and variability of the real world and its
potential effects. For example, in a field trial of a mobile AR map, Morrison et al. [81] found that sunlight and shade influenced the way participants held
the device to view the AR visualisations. Therefore, VR bridges the gap between
ecological validity and experimental control, but does not eliminate the ‘hassle’ of
conducting field studies [82,83].
6.3. Recommendations for Simulating Wearable Urban AR Experiences in VR
Building on the results of two case studies as the foundation and reflecting on our
prototyping process, we distil and consolidate a set of preliminary recommendations (R)
for simulating wearable urban AR experiences in VR. The recommendations focus on three
dimensions that influence the efficacy of VR simulation, namely: (1) experience of wearing
smartglasses, (2) AR content rendering, and (3) contextual setting. Examples from the two
case studies are included throughout to illustrate the recommendations.
6.3.1. Experience of Wearing Smartglasses
When prototyping wearable AR applications, not only the mix of virtual and ‘real’
content but also the experiential qualities of wearing spectacles are essential to consider.
Although a VR headset does provide a similar experience to a certain extent, there is a
need to reinforce the user’s impression of wearing smartglasses. The purpose is not to
assess the physical ergonomics of an AR system but to ensure that users (those unfamiliar
with smartglasses) remain aware of the wearable technology employed and can conceive
of its usage in everyday contexts. This emphasis applies especially when a study aims to
compare non-AR interfaces with AR ones (as was the case in our AV study and a study
by Pratticò et al. [18]).
R1—Emphasising the experience of wearing smartglasses: Tapping is one of the
most common hand gestures that allow users to engage with AR content. Smartglasses
users can either perform air-tap gestures (as in HoloLens applications) or tap a touch-
pad on the temple (as in Google Glass applications). In the AV study, we implemented
the latter mainly because it involves physical contact with the headset, which serves
to emphasise the experience of wearing smartglasses. The gesture was natural, easy
to implement, and led to more user feedback on the AR glasses in the AV
study. Yet, one disadvantage of this technique is that it is not applicable when AR
interaction techniques [44,84] are the focus of the investigation.
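The studies do not specify how the temple tap was detected; one plausible approach, assuming tracked hand and head poses are available, is a simple proximity test against the temple region of the headset. The sketch below is a hypothetical illustration; the function name, the 0.09 m half-head width, and the 0.12 m trigger radius are our assumptions.

```python
import numpy as np

def is_temple_tap(hand_pos, head_pos, head_right, radius=0.12):
    """Return True when the tracked hand enters a small sphere at the right
    temple of the headset, approximating a tap on a smartglasses touchpad."""
    temple = np.asarray(head_pos, dtype=float) \
        + np.asarray(head_right, dtype=float) * 0.09  # offset to the temple
    return float(np.linalg.norm(np.asarray(hand_pos, dtype=float) - temple)) < radius
```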
6.3.2. AR Content Rendering
Employing VR to simulate wearable AR resembles the Russian nesting doll effect,
in which a UI is situated within another UI [44]. Although we paid special attention to
visual perceptions of AR content during the development process, the interaction effects
remained strong and affected their recognisability, as evidenced by the qualitative feedback.
It was sometimes difficult for participants to distinguish AR content from the virtual
world, particularly when it was superimposed on ‘real’ objects. Moreover, there was
another issue with interaction effects between AR and VR that we did not anticipate:
misunderstanding over system messages. Two participants were confused about whether
the system feedback (an AR text prompt) was part of the actual AR experience or originated
from the VR system. Given that wearable AR is an emerging technology that most people
are still unfamiliar with, these issues are likely to occur and confound usability findings.
To minimise the interaction effect between AR and VR content, AR augmentations should
be rendered in a way that visibly separates them from the surrounding virtual environment
that represents the ‘real’ world.
R2—Making AR content stand out: To differentiate AR imagery from virtual envi-
ronments, we created AR materials in emissive and translucent colours, resembling
interfaces typically seen in science fiction films. To strengthen the effect, we propose
increasing the realism of the environment with high-poly 3D models while using
lower poly meshes for AR elements. However, because there is a trade-off between
using complex mesh models and maintaining high simulation performance, particular
attention must be paid to different VR optimisation techniques.
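As a rough sketch of this visual treatment, the parameters below capture the two properties the recommendation relies on, translucency and emission. The dataclass and values are illustrative stand-ins for the corresponding Unity material settings, not the project's actual assets.

```python
from dataclasses import dataclass

@dataclass
class HologramMaterial:
    base_rgba: tuple        # alpha < 1 keeps the 'real' world visible through AR content
    emissive_rgb: tuple     # self-illumination makes AR content ignore scene lighting
    emissive_strength: float

# A sci-fi-style cyan that separates low-poly AR overlays from the
# high-poly virtual environment (values are assumptions, not study assets)
AR_CYAN = HologramMaterial(base_rgba=(0.2, 0.9, 1.0, 0.55),
                           emissive_rgb=(0.2, 0.9, 1.0),
                           emissive_strength=2.0)
```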
R3—Simulating registration inaccuracy: A key issue of existing AR HMDs is the lack of image registration accuracy, which results in misalignments between the augmented imagery and physical objects [6]. While VR usage alleviates this problem, it was
found during pilot tests that a perfect registration made distinguishing between AR
and VR elements challenging. Therefore, we deliberately simulated a small degree
of registration inaccuracy, for example, creating a noticeable gap between the car
overlay and the car body. Participants in the pilot tests specifically commented on the
usefulness of this technique in recognising digital overlays.
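A minimal sketch of this technique, assuming the overlay is re-anchored to the tracked vehicle every frame: a constant vertical offset produces the recognisable gap between overlay and car body. The function and the 0.15 m value are illustrative, not taken from the study software.

```python
import numpy as np

def overlay_position(car_pos, gap=0.15):
    """Anchor the overlay to the car but keep a visible vertical gap,
    mimicking the registration error of real AR headsets."""
    return np.asarray(car_pos, dtype=float) + np.array([0.0, gap, 0.0])
```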
6.3.3. Contextual Setting
Consideration should be given to the design of virtual environments to identify
relevant circumstances that might influence user perceptions. While related literature
and methods such as contextual observation are beneficial for this process, we found that
pilot tests generated numerous insights for simulation iterations. For example, pilot test
participants described a city without much traffic or people as ‘creepy’ and engaged in
dangerous behaviours such as jaywalking.
R4—Determining the social and environmental factors to be incorporated: Our find-
ings demonstrate that the simulation of social and environmental factors frequently
found in an urban setting, such as road traffic and human activities, contributed to
participants’ sense of presence. These factors are particularly critical when individuals
are exposed to the virtual environment for an extended period (e.g., navigating or
exploring the city). In addition to improving the experience in VR, however, the overall
rationale for incorporating social and environmental factors should be to better assess
their influence on the usability of urban AR applications. For example, in the naviga-
tion study, participants referring to background scenes we deliberately created (e.g., a
policeman running after a thief) provided us with implicit but valuable feedback that
our AR application offered sufficient situational awareness.
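One lightweight way to operationalise this recommendation is to scatter background activities across the session timeline so that the city never appears empty. Our scenes were authored by hand, so the scheduler below is purely illustrative; the event list and timing parameters are assumptions.

```python
import random

BACKGROUND_EVENTS = [
    "pedestrians chatting at a bus stop",
    "policeman running after a thief",
    "street musician starts playing",
]

def schedule_events(duration_s, mean_interval_s=45.0, seed=0):
    """Draw event onset times from an exponential distribution over a session."""
    rng = random.Random(seed)
    t, schedule = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interval_s)
        if t >= duration_s:
            return schedule
        schedule.append((round(t, 1), rng.choice(BACKGROUND_EVENTS)))
```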
R5—Incorporating different levels of detail: The extent to which contextual factors
are modelled in detail, we argue, should vary according to their role and possible
impact on the urban AR experience under investigation. Vehicle behaviour, for ex-
ample, was not replicated as precisely in the navigation study as in the AV study
because participants were not meant to engage directly with road traffic. Rather than
managing every driving parameter (e.g., speed and deceleration rate), we used the
built-in navigation system of Unity to fill the city with intelligent car agents, lowering
the time and effort required to build the prototype. This also conforms with what Lim et al. [19] more broadly refer to as the economic principle of prototyping.
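The economic principle can be illustrated with a deliberately simple agent: instead of modelling speed profiles and deceleration rates, a background car merely follows a waypoint loop, roughly the behaviour Unity's built-in navigation provided in our case. The class below is an engine-agnostic sketch, not the actual implementation.

```python
import numpy as np

class BackgroundCar:
    """Minimal waypoint follower standing in for a full driving model."""

    def __init__(self, waypoints, speed=8.0):
        self.waypoints = [np.asarray(w, dtype=float) for w in waypoints]
        self.pos = self.waypoints[0].copy()
        self.target = 1
        self.speed = speed  # m/s; acceleration and braking deliberately omitted

    def update(self, dt):
        goal = self.waypoints[self.target]
        to_goal = goal - self.pos
        dist = float(np.linalg.norm(to_goal))
        if dist < 0.5:  # close enough: loop to the next waypoint
            self.target = (self.target + 1) % len(self.waypoints)
        else:
            self.pos += to_goal / dist * min(self.speed * dt, dist)
```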
6.4. Limitations and Future Work
According to Dole and Ju [85], researchers can use measures such as users’ sense of
presence to assess a simulation’s face validity, which is a proxy for ecological validity. In the
selected case studies, participants’ sense of presence was assessed through their statements
and observed behaviours, rather than a standardised questionnaire, such as the Presence
Questionnaire [86]. To a large extent, the employed measures allowed us to ascertain
specific elements that influence participants’ sensations (e.g., soundscape and vehicles).
However, questionnaires could potentially be useful in quantifying different dimensions
of the construct. The immersion quality of the VR simulations was also constrained by
the VR hardware used, the visual and interaction fidelity of the scenarios and the lack
of self-representation. While these issues may have impacted the sense of presence of the
participants and, consequently, the quality of their feedback, they also present ample
opportunities for VR simulations to become more effective in the future as VR technology
advances [44,77] (e.g., full-body tracking would enable avatar legs).
The potential impact of social interactions was not apparent in the results, owing
mainly to the case studies being designed to address their respective research questions.
Social cues were incorporated into the virtual environment in ways that did not unneces-
sarily divert participants’ attention away from the main experimental task. Future research
should investigate whether the addition of virtual avatars in the same interaction space elic-
its increased input regarding social experience. Furthermore, similar to how
Flohr et al. [87]
incorporated an actor as part of a shared ride video simulation, implementing social VR
with real users may present an interesting research direction.
With respect to the validity of utilising VR to simulate AR, the literature has been
concerned with two distinct aspects: the differences between the simulated and actual AR
systems and those between the simulated and real-world environments. Regarding the for-
mer, several studies have reported initial evidence that the simulator results are equivalent
to those obtained with real AR systems [21–23,88]. Regarding the latter, human behaviours
and experiences in a virtual environment have been actively researched, with pedestrian
simulators as one of the most prominent examples. According to a literature review con-
ducted by Schneider and Bengler [72], empirical data supporting generalisability to the real world exist but remain insufficient, and one should interpret such findings cau-
tiously in terms of general trends rather than absolute validity. For these reasons, our study
focused only on determining the relevance of obtained feedback to evaluating wearable
AR experiences.
AR hardware limitations and safety risks associated with our futuristic urban interfaces
have made it nearly impossible to compare simulated AR with real-life implementations.
For instance, the current AR headsets do not have the wide FOV required for our naviga-
tion concepts, and it is difficult to mitigate the risk associated with interacting with AVs.
Nevertheless, we believe it will be valuable to investigate the differences in user feedback
by running follow-up field studies once AR hardware capabilities and safety requirements are met. Furthermore, more simulation studies can and should be conducted to extend and refine
the recommendations offered in this paper.
7. Conclusions
Wearable AR applications hold significant promise for transforming the relationship
between individuals and urban environments. Furthermore, they have the potential to
become a necessary component for interaction with emerging urban technologies, such as
connected and cyber-physical systems in cities (e.g., AVs). An engaging wearable urban
AR experience is closely linked to its perceived functional benefits and the context in which
it is situated, both of which are essential to explore and assess throughout the design
process. Through a comprehensive analysis of qualitative data from two wearable urban
AR applications, this paper provides evidence demonstrating the potential of immersive VR
simulation for evaluating a holistic and contextual user experience. The paper contributes
to the body of knowledge in designing wearable urban AR applications in two specific
ways. First, it offers empirically-based insights into the efficacy of VR simulation in terms of
evaluating functional benefits and the impact of urban contextual factors. Second, the paper
presents a set of recommendations for simulating wearable urban AR experiences in VR.
We hope that our contributions will help overcome the barriers and complexities associated
with prototyping and evaluating wearable urban AR applications.
Author Contributions: Conceptualization, T.T.M.T., C.P., M.H., L.H. and M.T.; methodology, T.T.M.T., C.P., M.H., L.H. and M.T.; data analysis, T.T.M.T. and M.H.; writing—original draft preparation, T.T.M.T.; writing—review and editing, T.T.M.T., C.P., M.H., L.H. and M.T.; visualization, T.T.M.T.; supervision, C.P. and M.T. All authors have read and agreed to the published version of the manuscript.
Funding: This research is supported by an Australian Government Research Training Program (RTP) Scholarship and through the ARC Discovery Project DP200102604 Trust and Safety in Autonomous Mobility Systems: A Human-centred Approach.
Institutional Review Board Statement: The navigation study was carried out following the ethical approval granted by the University of Sydney (ID 2018/125). The AV study was carried out following the ethical approval granted by the University of Sydney (ID 2020/779).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the research.
Data Availability Statement: The interview transcripts presented in this research are not readily available because The University of Sydney Human Research Ethics Committee (HREC) has not granted the authors permission to publish the study data.
Conflicts of Interest: The authors declare no conflict of interest.
Acknowledgments: We thank all the anonymous reviewers for their insightful suggestions and comments which led to an improved manuscript.
References
1. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385.
2. Paavilainen, J.; Korhonen, H.; Alha, K.; Stenros, J.; Koskinen, E.; Mayra, F. The Pokémon GO experience: A location-based augmented reality mobile game goes mainstream. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2493–2498.
3. Azuma, R.T. The road to ubiquitous consumer augmented reality systems. Hum. Behav. Emerg. Technol. 2019, 1, 26–32.
4. Dey, A.; Billinghurst, M.; Lindeman, R.W.; Swan, J. A systematic review of 10 years of augmented reality usability studies: 2005 to 2014. Front. Robot. AI 2018, 5, 37.
5. Azuma, R.T. The challenge of making augmented reality work outdoors. Mix. Real. Merging Real Virtual Worlds 1999, 1, 379–390.
6. Billinghurst, M. Grand Challenges for Augmented Reality. Front. Virtual Real. 2021, 2, 12.
7. Voit, A.; Mayer, S.; Schwind, V.; Henze, N. Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–12.
8. Mäkelä, V.; Radiah, R.; Alsherif, S.; Khamis, M.; Xiao, C.; Borchert, L.; Schmidt, A.; Alt, F. Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–15.
9. Yao, T.; Yoo, S.; Parker, C. Evaluating Virtual Reality as a Tool for Empathic Modelling of Vision Impairment. In Proceedings of the OzCHI ’21, Melbourne, VIC, Australia, 30 November–2 December 2021.
10. Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. AirSim: High-fidelity visual and physical simulation for autonomous vehicles. In Proceedings of the 11th Conference on Field and Service Robotics, Zurich, Switzerland, 12–15 September 2017; Springer: Cham, Switzerland, 2018; pp. 621–635.
11. Tran, T.T.M.; Parker, C.; Tomitsch, M. A review of virtual reality studies on autonomous vehicle–pedestrian interaction. IEEE Trans. Hum. Mach. Syst. 2021, 51, 641–652.
12. Colley, M.; Eder, B.; Rixen, J.O.; Rukzio, E. Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–11.
13. Kim, S.; Dey, A.K. Simulated augmented reality windshield display as a cognitive mapping aid for elder driver navigation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 133–142.
14. Jose, R.; Lee, G.A.; Billinghurst, M. A comparative study of simulated augmented reality displays for vehicle navigation. In Proceedings of the 28th Australian Conference on Computer-Human Interaction, Launceston, TAS, Australia, 29 November–2 December 2016; pp. 40–48.
15. Riva, G.; Mantovani, F.; Capideville, C.S.; Preziosa, A.; Morganti, F.; Villani, D.; Gaggioli, A.; Botella, C.; Alcañiz, M. Affective interactions using virtual reality: The link between presence and emotions. Cyberpsychology Behav. 2007, 10, 45–56.
16. Deb, S.; Carruth, D.W.; Sween, R.; Strawderman, L.; Garrison, T.M. Efficacy of virtual reality in pedestrian safety research. Appl. Ergon. 2017, 65, 449–460.
17. Grandi, J.G.; Cao, Z.; Ogren, M.; Kopper, R. Design and Simulation of Next-Generation Augmented Reality User Interfaces in Virtual Reality. In Proceedings of the 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Lisbon, Portugal, 27 March–1 April 2021; pp. 23–29.
18. Pratticò, F.G.; Lamberti, F.; Cannavò, A.; Morra, L.; Montuschi, P. Comparing State-of-the-Art and Emerging Augmented Reality Interfaces for Autonomous Vehicle-to-Pedestrian Communication. IEEE Trans. Veh. Technol. 2021, 70, 1157–1168.
19. Lim, Y.K.; Stolterman, E.; Tenenberg, J. The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas. ACM Trans. Comput. Hum. Interact. (TOCHI) 2008, 15, 7.
20. Buchenau, M.; Suri, J.F. Experience prototyping. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, New York City, NY, USA, 17–19 August 2000; pp. 424–433.
21. Bowman, D.A.; Stinson, C.; Ragan, E.D.; Scerbo, S.; Höllerer, T.; Lee, C.; McMahan, R.P.; Kopper, R. Evaluating effectiveness in virtual environments with MR simulation. In Proceedings of the Interservice/Industry Training, Simulation, and Education Conference, Orlando, FL, USA, 3–6 December 2012; Volume 4, p. 44.
22. Lee, C.; Bonebrake, S.; Hollerer, T.; Bowman, D.A. A replication study testing the validity of AR simulation in VR for controlled experiments. In Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality, Orlando, FL, USA, 19–22 October 2009; pp. 203–204.
23. Lee, C.; Bonebrake, S.; Bowman, D.A.; Höllerer, T. The role of latency in the validity of AR simulation. In Proceedings of the 2010 IEEE Virtual Reality Conference (VR), Boston, MA, USA, 20–24 March 2010; pp. 11–18.
24. UN ECOSOC. The UNECE–ITU Smart Sustainable Cities Indicators; UN ECOSOC: New York City, NY, USA; Geneva, Switzerland, 2015.
25. Tomitsch, M. Making Cities Smarter; JOVIS Verlag GmbH: Berlin, Germany, 2017.
26. Narzt, W.; Pomberger, G.; Ferscha, A.; Kolb, D.; Müller, R.; Wieghardt, J.; Hörtner, H.; Lindinger, C. Augmented reality navigation systems. Univers. Access Inf. Soc. 2006, 4, 177–187.
27. Jingen Liang, L.; Elliot, S. A systematic review of augmented reality tourism research: What is now and what is next? Tour. Hosp. Res. 2021, 21, 15–30.
28. Parker, C.; Tomitsch, M.; Kay, J.; Baldauf, M. Keeping it private: An augmented reality approach to citizen participation with public displays. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan, 7–11 September 2015; pp. 807–812.
29. Riegler, A.; Riener, A.; Holzmann, C. A Research Agenda for Mixed Reality in Automated Vehicles. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020; pp. 119–131.
30. Simmons, S.M.; Caird, J.K.; Ta, A.; Sterzer, F.; Hagel, B.E. Plight of the distracted pedestrian: A research synthesis and meta-analysis of mobile phone use on crossing behaviour. Inj. Prev. 2020, 26, 170–176.
31. Dünser, A.; Billinghurst, M.; Wen, J.; Lehtinen, V.; Nurminen, A. Exploring the use of handheld AR for outdoor navigation. Comput. Graph. 2012, 36, 1084–1095.
32. Rauschnabel, P.A.; Ro, Y.K. Augmented reality smart glasses: An investigation of technology acceptance drivers. Int. J. Technol. Mark. 2016, 11, 123–148.
33. Rauschnabel, P.A.; Brem, A.; Ivens, B.S. Who will buy smart glasses? Empirical results of two pre-market-entry studies on the role of personality in individual awareness and intended adoption of Google Glass wearables. Comput. Hum. Behav. 2015, 49, 635–647.
34. Javornik, A. Directions for studying user experience with augmented reality in public. In Augmented Reality and Virtual Reality; Springer: Berlin/Heidelberg, Germany, 2018; pp. 199–210.
35. OECD/International Transport Forum. Pedestrian Safety, Urban Space and Health; OECD Publishing: Paris, France, 2012.
36. Aromaa, S.; Väätänen, A.; Aaltonen, I.; Goriachev, V.; Helin, K.; Karjalainen, J. Awareness of the real-world environment when using augmented reality head-mounted display. Appl. Ergon. 2020, 88, 103145.
37. Hsieh, Y.T.; Jylhä, A.; Orso, V.; Gamberini, L.; Jacucci, G. Designing a willing-to-use-in-public hand gestural interaction technique for smart glasses. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 4203–4215.
38. Nebeling, M.; Madier, K. 360proto: Making interactive virtual reality & augmented reality prototypes from paper. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–13.
39. Pfeiffer-Leßmann, N.; Pfeiffer, T. ExProtoVAR: A lightweight tool for experience-focused prototyping of augmented reality applications using virtual reality. In Proceedings of the International Conference on Human-Computer Interaction, Las Vegas, NV, USA, 15–20 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 311–318.
40. Berning, M.; Yonezawa, T.; Riedel, T.; Nakazawa, J.; Beigl, M.; Tokuda, H. pARnorama: 360 degree interactive video for augmented reality prototyping. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, Zurich, Switzerland, 8–12 September 2013; pp. 1471–1474.
41. Freitas, G.; Pinho, M.S.; Silveira, M.S.; Maurer, F. A Systematic Review of Rapid Prototyping Tools for Augmented Reality. In Proceedings of the 2020 22nd Symposium on Virtual and Augmented Reality (SVR), Porto de Galinhas, Brazil, 7–10 November 2020; pp. 199–209.
42. Grubert, J.; Langlotz, T.; Zollmann, S.; Regenbrecht, H. Towards pervasive augmented reality: Context-awareness in augmented reality. IEEE Trans. Vis. Comput. Graph. 2016, 23, 1706–1724.
43. Gruenefeld, U.; Auda, J.; Mathis, F.; Schneegass, S.; Khamis, M.; Gugenheimer, J.; Mayer, S. VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–15.
44. Alce, G.; Hermodsson, K.; Wallergård, M.; Thern, L.; Hadzovic, T. A prototyping method to simulate wearable augmented reality interaction in a virtual environment—A pilot study. Int. J. Virtual Worlds Hum. Comput. Interact. 2015, 3, 18–28.
45. Burova, A.; Mäkelä, J.; Hakulinen, J.; Keskinen, T.; Heinonen, H.; Siltanen, S.; Turunen, M. Utilizing VR and gaze tracking to develop AR solutions for industrial maintenance. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13.
46. Bailie, T.; Martin, J.; Aman, Z.; Brill, R.; Herman, A. Implementing user-centered methods and virtual reality to rapidly prototype augmented reality tools for firefighters. In Proceedings of the 10th International Conference on Augmented Cognition, Toronto, ON, Canada, 17–22 July 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 135–144.
47. Gabbard, J.L.; Swan, J.E.; Hix, D. The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality. Presence 2006, 15, 16–32.
48. Lu, F.; Xu, Y. Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–16.
49. Hassenzahl, M.; Tractinsky, N. User experience—A research agenda. Behav. Inf. Technol. 2006, 25, 91–97.
50. Thi Minh Tran, T.; Parker, C. Designing exocentric pedestrian navigation for AR head mounted displays. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–8.
51. Tran, T.T.M.; Parker, C.; Wang, Y.; Tomitsch, M. Designing Wearable Augmented Reality Concepts to Support Scalability in Autonomous Vehicle–Pedestrian Interaction. Front. Comput. Sci. 2022, 4, 866516.
52. Trepkowski, C.; Eibich, D.; Maiero, J.; Marquardt, A.; Kruijff, E.; Feiner, S. The effect of narrow field of view and information density on visual search performance in augmented reality. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 575–584.
53. Lee, J.; Jin, F.; Kim, Y.; Lindlbauer, D. User Preference for Navigation Instructions in Mixed Reality. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 802–811.
54. Zhao, Y.; Kupferstein, E.; Rojnirun, H.; Findlater, L.; Azenkot, S. The effectiveness of visual and audio wayfinding guidance on smartglasses for people with low vision. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14.
55. Joy, P.C. This Is What the World Looks Like through Google Glass; 2013.
56. Goldiez, B.F.; Ahmad, A.M.; Hancock, P.A. Effects of augmented reality display settings on human wayfinding performance. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2007, 37, 839–845.
57. Lehikoinen, J.; Suomela, R. WalkMap: Developing an augmented reality map application for wearable computers. Virtual Real. 2002, 6, 33–44.
58. Oculus. Introducing Oculus Quest, Our First 6DOF All-In-One VR System; Oculus VR: Irvine, CA, USA, 2019.
59. Souman, J.L.; Giordano, P.R.; Schwaiger, M.; Frissen, I.; Thümmel, T.; Ulbrich, H.; Luca, A.D.; Bülthoff, H.H.; Ernst, M.O. CyberWalk: Enabling unconstrained omnidirectional walking through virtual environments. ACM Trans. Appl. Percept. (TAP) 2011, 8, 1–22.
60. Jayaraman, S.K.; Creech, C.; Robert, L.P., Jr.; Tilbury, D.M.; Yang, X.J.; Pradhan, A.K.; Tsui, K.M. Trust in AV: An uncertainty reduction model of AV-pedestrian interactions. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 133–134.
61. Boletsis, C.; Cedergren, J.E. VR locomotion in the new era of virtual reality: An empirical comparison of prevalent techniques. Adv. Hum. Comput. Interact. 2019, 2019, 7420781.
62. Di Luca, M.; Seifi, H.; Egan, S.; Gonzalez-Franco, M. Locomotion vault: The extra mile in analyzing VR locomotion techniques. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–10.
63. Miguel-Alonso, I.; Rodriguez-Garcia, B.; Checa, D.; De Paolis, L.T. Developing a Tutorial for Improving Usability and User Skills in an Immersive Virtual Reality Experience. In Proceedings of the International Conference on Extended Reality, Lecce, Italy, 6–8 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 63–78.
64. Rouchitsas, A.; Alm, H. External human–machine interfaces for autonomous vehicle-to-pedestrian communication: A review of empirical work. Front. Psychol. 2019, 10, 2757.
65. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles’ external human-machine interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174.
66. Colley, M.; Walch, M.; Rukzio, E. Unveiling the Lack of Scalability in Research on External Communication of Autonomous Vehicles. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, Honolulu, HI, USA, 25–30 April 2020; pp. 1–9.
67. Tabone, W.; de Winter, J.; Ackermann, C.; Bärgman, J.; Baumann, M.; Deb, S.; Emmenegger, C.; Habibovic, A.; Hagenzieker, M.; Hancock, P.; et al. Vulnerable road users and the coming wave of automated vehicles: Expert perspectives. Transp. Res. Interdiscip. Perspect. 2021, 9, 100293.
68. Hesenius, M.; Börsting, I.; Meyer, O.; Gruhn, V. Don’t panic! Guiding pedestrians in autonomous traffic with augmented reality. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, Barcelona, Spain, 3–6 September 2018; pp. 261–268.
69. Tabone, W.; Happee, R.; García, J.; Lee, Y.M.; Lupetti, M.L.; Merat, N.; de Winter, J. Augmented Reality Interfaces for Pedestrian-Vehicle Interactions: An Online Study; 2022.
70. Tonguz, O.; Zhang, R.; Song, L.; Jaiprakash, A. System and Method Implementing Virtual Pedestrian Traffic Lights. U.S. Patent Application No. 17/190,983, 25 May 2021.
71. Hoggenmüller, M.; Tomitsch, M.; Hespanhol, L.; Tran, T.T.M.; Worrall, S.; Nebot, E. Context-Based Interface Prototyping: Understanding the Effect of Prototype Representation on User Feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–14.
72. Schneider, S.; Bengler, K. Virtually the same? Analysing pedestrian behaviour by means of virtual reality. Transp. Res. Part F Traffic Psychol. Behav. 2020, 68, 231–256.
73. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101.
74. Golledge, R.G. Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes; JHU Press: Baltimore, MD, USA, 1999.
75. Schmidt, A.; Herrmann, T. Intervention user interfaces: A new interaction paradigm for automated systems. Interactions 2017, 24, 40–45.
76. Rauschnabel, P.A.; Hein, D.W.; He, J.; Ro, Y.K.; Rawashdeh, S.; Krulikowski, B. Fashion or technology? A fashnology perspective on the perception and adoption of augmented reality smart glasses. i-com 2016, 15, 179–194.
77. Simeone, A.L.; Cools, R.; Depuydt, S.; Gomes, J.M.; Goris, P.; Grocott, J.; Esteves, A.; Gerling, K. Immersive Speculative Enactments: Bringing Future Scenarios and Technology to Life Using Virtual Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–20.
78. Krauß, V.; Nebeling, M.; Jasche, F.; Boden, A. Elements of XR Prototyping: Characterizing the Role and Use of Prototypes in Augmented and Virtual Reality Design. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–18.
79. Merenda, C.; Suga, C.; Gabbard, J.L.; Misu, T. Effects of “Real-World” Visual Fidelity on AR Interface Assessment: A Case Study Using AR Head-Up Display Graphics in Driving. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 14–18 October 2019; pp. 145–156.
80. Lu, F.; Bowman, D.A. Evaluating the potential of glanceable AR interfaces for authentic everyday uses. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; pp. 768–777.
81. Morrison, A.; Oulasvirta, A.; Peltonen, P.; Lemmela, S.; Jacucci, G.; Reitmayr, G.; Näsänen, J.; Juustila, A. Like bees around the hive: A comparative study of a mobile augmented reality map. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 1889–1898.
82. Kjeldskov, J.; Skov, M.B. Was it worth the hassle? Ten years of mobile HCI research discussions on lab and field evaluations. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, Toronto, ON, Canada, 23–26 September 2014; pp. 43–52.
83. Rogers, Y.; Connelly, K.; Tedesco, L.; Hazlewood, W.; Kurtz, A.; Hall, R.E.; Hursey, J.; Toscos, T. Why it’s worth the hassle: The value of in-situ studies when designing ubicomp. In Proceedings of the International Conference on Ubiquitous Computing, Innsbruck, Austria, 16–19 September 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 336–353.
84. Lee, L.H.; Hui, P. Interaction methods for smart glasses: A survey. IEEE Access 2018, 6, 28712–28732.
85. Dole, L.; Ju, W. Face and ecological validity in simulations: Lessons from search-and-rescue HRI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–8.
86. Witmer, B.G.; Singer, M.J. Measuring presence in virtual environments: A presence questionnaire. Presence 1998, 7, 225–240.
87. Flohr, L.A.; Janetzko, D.; Wallach, D.P.; Scholz, S.C.; Krüger, A. Context-based interface prototyping and evaluation for (shared) autonomous vehicles using a lightweight immersive video-based simulator. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, Eindhoven, The Netherlands, 6–10 July 2020; pp. 1379–1390.
88. Ragan, E.; Wilkes, C.; Bowman, D.A.; Hollerer, T. Simulation of augmented reality systems in purely virtual environments. In Proceedings of the 2009 IEEE Virtual Reality Conference, Lafayette, LA, USA, 14–18 March 2009; pp. 287–288.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.