Light My Way: Developing and Exploring a Multimodal Interface
to Assist People With Visual Impairments to Exit Highly
Automated Vehicles
Luca-Maxim Meinhardt
luca.meinhardt@uni-ulm.de
Institute of Media Informatics, Ulm University
Ulm, Germany
Lina Wilke
lina.wilke@uni-ulm.de
Institute of Media Informatics, Ulm University
Ulm, Germany
Maryam Elhaidary
maryam.elhaidary@uni-ulm.de
Institute of Media Informatics, Ulm University
Ulm, Germany
Julia von Abel
julia.von-abel@uni-ulm.de
Institute of Media Informatics
Ulm, Germany
Paul Fink
paul.fink@maine.edu
The University of Maine
Maine, US
Michael Rietzler
michael.rietzler@uni-ulm.de
Institute of Media Informatics
Ulm, Germany
Mark Colley
m.colley@ucl.ac.uk
Institute of Media Informatics
Ulm, Germany
UCL Interaction Centre
London, United Kingdom
Enrico Rukzio
enrico.rukzio@uni-ulm.de
Institute of Media Informatics, Ulm University
Ulm, Germany
Figure 1: (Left) Interactive workshop (N=5) exploring the information needs of blind and visually impaired people when exiting future highly automated vehicles. Participants engaged with three initial low-fidelity prototypes: a smartphone, a window touch prototype, and tactile bars. (Right) Study setup featuring three monitors and a real car door with PathFinder, a multimodal interface, attached to simulate a ride with an HAV. The top section explains PathFinder’s functionalities, including the compass needle, five extendable obstacle buttons, and the vehicle button. We used this setup to conduct a three-factorial within-between-subject study, using system and scenario as our two within factors and participants’ visual acuity as the between factor.
This work is licensed under a Creative Commons Attribution 4.0 International License.
CHI ’25, Yokohama, Japan
©2025 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-1394-1/25/04
https://doi.org/10.1145/3706598.3713454
Abstract
The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when
exiting an HAV and evaluated three prior-developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs’ unique needs. In a three-factorial within-between-subject study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario compared to the rural one. Qualitative feedback further supported PathFinder’s effectiveness in providing spatial orientation during exiting.
CCS Concepts
• Hardware → Sensors and actuators; • Human-centered computing → User studies; Laboratory experiments; Haptic devices; Sound-based input / output; Accessibility design and evaluation methods; Empirical studies in accessibility; Accessibility technologies; Accessibility systems and tools.
Keywords
people with visual impairments, multimodal interfaces, situation
awareness, highly automated vehicles
ACM Reference Format:
Luca-Maxim Meinhardt, Lina Wilke, Maryam Elhaidary, Julia von Abel, Paul
Fink, Michael Rietzler, Mark Colley, and Enrico Rukzio. 2025. Light My Way:
Developing and Exploring a Multimodal Interface to Assist People With
Visual Impairments to Exit Highly Automated Vehicles. In CHI Conference
on Human Factors in Computing Systems (CHI ’25), April 26-May 1, 2025,
Yokohama, Japan. ACM, New York, NY, USA, 20 pages. https://doi.org/10.1145/3706598.3713454
1 Introduction
Over 270 million people worldwide live with vision impairments [2, 69], and this number is expected to rise as the population ages [11]. These impairments can limit daily activities such as driving [56], making independent mobility a significant challenge. Hence, the introduction of Highly Automated Vehicles (HAVs) in the near future has the potential to improve transportation for people who are blind or have visual impairments (BVIPs) [19]. Studies indicate that sighted individuals barely expect increased independence using HAVs, but this expectation is significantly higher among BVIPs [52]. Hence, by enabling independent and safe mobility among this demographic, HAVs represent a crucial step toward achieving greater equality in transportation [10]. However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains a critical challenge, as in today’s manually driven vehicles, BVIPs often rely on drivers to drop them off at convenient locations that make it easier to navigate their surroundings [15].
With the introduction of HAVs, BVIPs may gain more independence [52] but are likely to face situations alone without human assistance. This is where situation awareness becomes particularly important for BVIPs. Situation awareness involves “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” [26, p. 5]. In exiting situations, specific assistance—like detailed information about the vehicle’s surroundings while parked—can benefit BVIPs [15, 30]. However, situation awareness is not exclusive to BVIPs. Research shows that it also significantly enhances trust and perceived safety for sighted passengers in HAVs [23, 29, 45, 73, 87, 90].
Related research has already been conducted on parts of the trip via HAVs to enhance BVIPs’ situation awareness, including locating ride-sharing vehicles with a smartphone application [31] and enhancing situation awareness during rides by conveying traffic-relevant information [35, 37, 60, 61]. However, enhancing situation awareness and safety while exiting the HAV remains underexplored. Yet this part of the trip is crucial, as it requires immediate awareness of potential hazards like moving cyclists [15] or obstacles that could cause trips or falls, posing significant safety risks. Unlike typical pedestrian navigation, where BVIPs rely on tools like canes or guide dogs, exiting an HAV involves rapidly adapting to a potentially unfamiliar and more hazardous environment. This situation requires new solutions to complement traditional navigation aids.
A notable attempt to address this research gap is the prototype ATLAS developed by Brinkley et al. [18], which utilizes computer vision to articulate the surroundings upon arrival at the destination [18]. Despite its advancements, such as increased trust towards the HAV, this solution is limited to auditory feedback only. However, incorporating additional modalities, such as tactile feedback [88], might be even more helpful by providing a multimodal approach [35]. In fact, research suggests that integrating multiple modalities enriches the quality of information conveyed and significantly enhances situation awareness for BVIPs, offering advantages over single-modality feedback [89]. Specifically, the combination of voice-based and tactile feedback is particularly effective for navigation tasks [57].
This paper explores a new interface designed to support BVIPs in such situations. Recognizing the advantages of multimodal interfaces in conveying information to BVIPs [35, 61], we developed three initial prototypes (a smartphone, a window touch prototype, and a tactile bars prototype) based on related work (e.g., [43, 47, 55, 61]). Each prototype featured various modalities, including tactile, auditory, and visual cues, as well as different interaction strategies like pointing and sensing. This enabled us to conduct a focused evaluation of each modality and interaction strategy during an interactive workshop with N=5 BVIPs. In addition to evaluating the initial prototypes, the workshop explored the information needs of BVIPs when exiting a future HAV and possible methods to convey this information.
The workshop results highlight the need for a multimodal approach to provide information about the vehicle’s surroundings. In response, we developed PathFinder, a system designed to help BVIPs safely exit HAVs. By integrating visual, tactile, and auditory modalities into its design, PathFinder adapts to BVIPs with different degrees of visual impairment. This approach ensures that PathFinder effectively supports each passenger’s individual needs in HAVs.
We evaluated PathFinder in a subsequent three-factorial, within-between-subject user study with N=16 BVIPs. This study compared PathFinder to an auditory-only baseline, the current standard in accessible navigation technology, across two scenarios: a complex
urban environment and a simpler rural setting. Quantitative results demonstrated that PathFinder significantly reduced mental demand compared to the baseline. Additionally, the multimodal system consistently maintained high perceived safety in both scenarios, whereas the auditory baseline resulted in lower perceived safety in the urban scenario compared to the rural one. Further, qualitative feedback revealed a preference for PathFinder’s multimodal information conveyance, which improved participants’ spatial orientation.
Contribution Statements [86]
• Empirical study that tells us about people. We developed three low-fidelity prototypes with different interaction strategies and modalities to assist BVIPs in exiting HAVs, which were used as inspirational input for the interactive workshop with N=5 BVIPs. We found that participants preferred tactile cues as the basic modality to gain an overview of the HAV’s surroundings, with auditory cues used for critical information, highlighting the need for multimodal accessible interfaces.
• Artifact or System. Based on insights from the interactive workshop, we designed and developed PathFinder, a multimodal interface including tactile, auditory, and visual modalities to assist BVIPs in exiting HAVs. This artifact demonstrates how the findings of the interactive workshop are applied to a concrete interface design that can be reproduced in future studies, as we provide all construction files as open source.
• Empirical study that tells us about how people use a system. In the following user study with N=16 BVIPs, we found that PathFinder significantly reduced mental demand compared to an auditory-only baseline and maintained high perceived safety in both urban and rural scenarios. These results provide empirical evidence that multimodal interfaces can outperform unimodal systems in the HAV context, especially in complex environments, and highlight the need to tailor the interface to the user’s visual acuity and the situation at hand.
2 Related Work
This research is grounded in current research on BVIPs and HAVs.
We present navigation aids for BVIPs primarily designed to support
pedestrians. Following this, we dive into the context of HAVs by
describing current research on the needs of BVIPs within these
vehicles.
2.1 Navigation Aids for Visually Impaired
People
Giudice and Legge [40] explored how technological aids assist with navigation for people with visual impairments, identifying four key considerations: (1) The conveyance of visual information into auditory or tactile modalities should be defined clearly, accommodating the cognitive demands and learning curve of users. (2) The presented information should be minimized to the essentials. (3) Given that each system has unique advantages and disadvantages depending on the context, combining various aids might be necessary for effective navigation across different scenarios. (4) Devices should be designed to be non-intrusive and aesthetically pleasing.
Building on these guidelines, several navigation aids have been developed and assessed to support BVIPs. For this, Ducasse et al. [25] reviewed various dynamic tactile maps for BVIPs, classifying them into Digital Interactive Maps displayed on flat surfaces like screens and Hybrid Interactive Maps that incorporate both digital and physical elements. These dynamic tactile maps have demonstrated higher performance compared to touchscreen and swell paper maps (a type of tactile paper that raises printed images or text) regarding map reading speed and the ability to create a mental map of the route [91]. Further research by Holloway et al. [48] demonstrated that tactile maps [...] support orientation and mobility through identification of landmarks, route planning and creation of a mental map [...] [48, p. 184].
In general, multimodal approaches to conveying information seem to outperform those that rely on a single modality, as supported by the multiple-resource theory of Wickens et al. [84], which states that distributing information across modalities such as auditory and visual reduces competition for attention and processing resources, leading to better task performance and reduced mental demands as task difficulty increases [70]. Neuroimaging studies further support this, showing that the occipital cortex in blind individuals represents spatial information similarly across different sensory inputs [6], facilitating sensory-independent spatial representations [59]. Therefore, multimodal interfaces leverage cognitive advantages and neural adaptability, potentially leading to more effective navigation aids for BVIPs. This argument is in line with findings from Kuriakose et al. [54], who reviewed multiple tools and technologies that support BVIPs in their navigation tasks, recommending that “if there is an option for multiple feedback modalities, the user will get the flexibility to choose one based on a situation or environment” [54, p. 12]. This aligns with Yatani et al. [89], who found that handheld tactile maps combining tactile feedback with audio instructions offer superior spatial orientation compared to audio-only feedback. Additionally, the study revealed differences in the effectiveness of verbal audio vs. auditory icons, aligning with the findings of Glatz et al. [41], who found auditory icons to be more effective for conveying contextual information, while verbal audio was better for urgent requests. Further, by comparing the effectiveness of auditory, visual, and combined audio-visual feedback, the combination of audio and visual feedback improved participants’ situation awareness more than visual feedback alone [66]. Additionally, multimodal maps with tactile elements, augmented by audio feedback when touched, enhanced navigation skill improvement for BVIPs. Participants especially valued the combination of audio and tactile cues, highlighting the importance of designing such tools in line with users’ preferences and needs [4].
Given the advantages of multimodal systems for navigation tasks and following the guidelines of Giudice and Legge [40], this work investigates the potential of multimodal interfaces for BVIPs in the automotive domain. The following sections will explore the specific needs of BVIPs inside HAVs, providing a foundation for developing new systems tailored to their requirements.
2.2 Needs and Opinions of Visually Impaired
People in the Context of Highly Automated
Vehicles
While the aforementioned studies primarily focused on pedestrian navigation, the introduction of HAVs presents new opportunities and challenges for BVIPs. Most BVIPs are enthusiastic about the autonomy HAVs promise, potentially granting access to previously challenging locations [52]. Despite this, initial qualitative research showed that BVIPs raised concerns regarding whether HAVs will be truly designed to meet BVIPs’ needs [10, 17, 19]. To envision these needs, current rideshare services have been used as a proxy for future HAV scenarios [15, 30], and workshops have been conducted to identify passengers’ needs and imagine accessible interfaces for HAVs [16]. Results have demonstrated the need for non-visual support for BVIPs throughout the complete transportation trip via HAVs, from locating an HAV [32] to conveying traffic information, such as the reason for the HAV stopping during the ride [35, 61]. The following section reviews the small but emerging field of literature on non-visual interfaces across the complete trip via HAVs.
2.3 Non-visual Interface Development Across
the Entire Journey
Initial research has sought to design and test accessible interfaces in HAVs. For instance, researchers have evaluated mid-air haptics and tactile interfaces to enhance situation awareness during the ride [33, 61]. Additionally, gestural interactions have been explored for in-vehicle control by BVIPs [30]. While these in-vehicle studies have shown promising results, they have primarily focused on the on-road part of the trip. Only a few studies have investigated other parts of the trip, such as pre-journey mapping [36] and vehicle localization [31, 74]. While the ATLAS system by Brinkley et al. [18] explored supporting BVIPs in gaining situation awareness when exiting the vehicle, it solely utilized the auditory modality, missing the benefits of multimodal interfaces that can support BVIPs inside HAVs [35, 61].
To address this research gap, this work will examine BVIPs’ information needs for exiting HAVs and investigate the potential of a multimodal interface to assist them during this phase. To achieve this, we hosted an interactive workshop with BVIPs and created three initial low-fidelity prototypes, which will be described in the following section.
3 Initial Low-Fidelity Prototypes
Prior to hosting the interactive workshop with N=5 BVIPs, we developed three initial low-fidelity prototypes (smartphone, window touch, and tactile bars) based on existing research [43, 47, 55, 61] and state-of-the-art smartphone applications for BVIPs [9, 35, 43, 63]. This section will detail each prototype and explain the related work from which they were derived. The three prototypes were designed to serve as concrete examples to inspire and facilitate discussion during the workshop, providing participants with tangible prototypes to interact with rather than relying solely on conceptual discussions about potential interaction strategies and modalities. Hence, each prototype employs different modalities and interaction strategies. This approach enabled us to evaluate each modality and strategy independently in a focused manner during the subsequent workshop. Below, we describe each prototype in detail, along with the rationale for their design choices. Additionally, we provided the construction files for each initial prototype for reconstruction in a git repository (see section Open Science).
Scenario Design for the Prototypes. For the prototypes, we designed a simulated suburban scene using Unity [82] version 2023.2.1f1 and the Suburb Neighborhood House Pack asset [38]. This scene, which was used in both the smartphone prototype and the window touch prototype, depicted an HAV parked on the side of the road, with a pedestrian/cyclist path next to it and a house (the final destination) behind the path. The scene included static obstacles such as a tree and a street sign near the HAV’s door, as well as dynamic obstacles like pedestrians and cyclists moving in front of the door.
3.1 Smartphone Prototype
Smartphones are prevalent among BVIPs, especially among young BVIPs (19-34); 76% of them own a smartphone [1]. Building on this familiarity, we developed a smartphone prototype inspired by previous work, such as the object detection application by Zhong et al. [92] and more recent smartphone-based navigation aids in the automotive context for BVIPs [31]. Research has indicated that BVIPs prefer to move the smartphone to scan their surroundings when exploring their environment, compared to other interaction strategies via smartphone [43]. Therefore, our smartphone prototype (Figure 2a) allows users to scan their surroundings using the smartphone’s camera, triggering auditory descriptions of static objects and dynamic obstacles within the simulated suburban scene. A button on the screen allows participants to cast a ray within the Unity environment, identifying objects like "Tree" via verbal auditory feedback. Vibration feedback confirms successful button activation, enhancing interaction [71]. Dynamic obstacles, such as approaching cyclists, are automatically announced in real-time to ensure immediate awareness.
The interaction strategy combines verbal and auditory modalities,
supported by visual feedback through the smartphone’s scanning
and pointing mechanism.
3.2 Window Touch Prototype
Our window touch prototype ( Figure 2b) draws inspiration from
Ford’s "Feel the View" system [
42
], which allows users to receive
tactile feedback about the outlines of the environment on the vehi-
cle’s side window. Our prototype extends this concept, but instead
of using vibration, we employed verbal auditory feedback of the
obstacles when the participants touch the window, as based on
touch-exploration of images [55, 63].
To demonstrate this prototype, we created a setup with a car
door and a 75" monitor displaying the same suburban scene used
in the smartphone prototype. This setup was designed as a Wizard-
of-Oz prototype, where one of the workshop moderators manually
triggered the corresponding verbal output upon participants’ point-
ing. For example, if a participant pointed toward a tree, the verbal
sound "Tree" was played. Like the smartphone prototype, infor-
mation about dynamic obstacles, including their direction, was
communicated automatically upon the HAV reaching its destina-
tion.
(a) Smartphone Prototype (b) Window Touch Prototype (c) Tactile Bars Prototype
Figure 2: Initial low-fidelity prototypes that were used during the interactive workshop
This prototype focuses on a verbal, auditory modality, enhanced
by touch-based interaction, allowing participants to receive de-
tailed information about their surroundings directly through the
car window.
3.3 Tactile Bars Prototype
Related work showed that tactile cues can help gain situation awareness for BVIPs inside HAVs [21, 33, 35, 60, 61, 78]. Hence, we designed a tactile bars prototype (see Figure 2c) to convey potential obstacles when exiting the vehicle. Unlike the smartphone and window touch prototypes, this one does not rely on a visual Unity scene; instead, it solely uses tactile feedback to convey information about obstacles.
The tactile bars prototype features two rows of nine movable bars. The first row (from the perspective of the participants) represents static obstacles, such as trees or street signs, while the second row represents dynamic obstacles, like cyclists or pedestrians. We rounded the edges of the bars to ensure a smooth surface and avoid discomfort when touching them, as recommended by Holloway et al. [47]. The rationale for having two distinct rows is to separate the types of obstacles, assuming this makes it easier for participants to understand the environment. Static obstacles are presented as constant and unchanging, while dynamic obstacles are represented with motion, created by manipulating the bars in the second row to simulate movement.
In the first row, each bar is controlled by one of nine levers, operated by one of the workshop moderators, allowing individual movement up and down. A slider at the prototype’s bottom manipulates the dynamic obstacle bars. Moving this slider creates a wave-like effect on the bars, creating a tactile illusion of motion. The decision to convey motion via bars that move up and down was inspired by Holloway et al. [46], who noted that using height differences is a perceivable method to convey tactile motion for BVIPs.
The prototype’s design prioritizes simplicity and tactile feedback, allowing BVIPs to gather information about their surroundings without relying on visual or auditory cues.
4 Interactive Workshop
In this section, we describe the interactive workshop we conducted with N=5 BVIPs (three female, two male, and no non-binary) aged between 44 and 67 (M = 57.80, SD = 8.61). The female participants reported being completely blind, with one having light perception. The male participants had impaired vision, with visual acuity between 3-5% (see Appendix B for more details).
The workshop was conducted to identify BVIPs’ specific information needs to improve their situation awareness and assist in safe exit of HAVs. Further, we explored preferred interaction strategies to convey the necessary information for these tasks. To provide prior inspiration and a starting point for discussion, the three initial low-fidelity prototypes (see section 3) were presented to the participants in individual sessions during the workshop.
The following section details the interactive workshop procedure and the implications of the results, which eventually led to the final design of PathFinder. By including the participants from the beginning of the design phase, we adopted the Participatory Design approach from Muller and Kuhn [65].
4.1 Procedure
The workshop, moderated by five of the authors, was scheduled to last three hours and divided into four phases. The detailed agenda for the session is outlined in Table 1. Before the start, we ensured that all participants had consented to share their data, which allowed us to proceed with audio and video recordings during the study.
During the first phase, we started with brief introductions, where each participant shared their visual abilities. This was followed by a concise overview of the capabilities of HAVs, which can reach their destination without any intervention, as defined by SAE levels 4 and 5 [75].
Transitioning into the workshop’s core discussion in the second phase, we opened the floor to conversations about the participants’ personal experiences with exiting traditional vehicles, such as taxis. Following this, we asked participants about their specific information needs to gain situation awareness of the surrounding environment. Next, we invited participants to generate ideas on how to convey these information needs to them.
Table 1: Scheduled agenda for the three-hour interactive workshop
Phase | Scheduled Duration | Agenda
1 | 15 min | Introduction of the Participants and Moderators (4 authors); Overview of HAVs’ Capabilities
2 | 45 min | Open Group Discussion about Information Needs and Information Conveyance to Exit the Vehicle
3 | 75 min | Individual Interactive Prototype Sessions (15 min per Participant and Prototype)
4 | 45 min | Open Group Discussion about the Prototype Interaction
After collecting their unbiased ideas, we moved to the third phase: the interactive prototype sessions. Here, we presented the three initial prototypes to each participant individually in a counterbalanced order. This approach ensured that the participants’ initial ideas remained independent of our prototypes, thus avoiding bias in their creativity. During each individual interactive prototype session, we prompted the participants with an initial story to envision themselves in an HAV, traveling alone to a friend’s house for the first time, simulating their unfamiliarity with the area. While interacting with the prototypes, we asked the participants to perform the Thinking-Aloud method [50]. Hence, we asked participants to explain their thoughts about each part of the prototype, describing what they thought it represented and what aspects of the information conveyance they liked or found difficult, including their reasons. Given that the prototypes were set up in three separate rooms, we arranged for three participants to interact with the prototypes simultaneously while the remaining two participants waited. Each prototype was operated by one of the moderators. During the sessions, we briefly explained the prototype’s interaction strategies and let the participants interact with the prototype while thinking aloud.
After the prototype sessions, we gathered the participants again in a group discussion to ask them about their positive and negative experiences with the prototypes and their interaction strategies (phase four). By first collecting individual impressions during the prototype sessions, we ensured that the feedback remained honest and unbiased from the other participants, facilitating diverse viewpoints from the group. The participants were compensated with 30 Euros for their time during the three-hour workshop.
4.2 Analysis
Four authors conducted a reflexive, inductive thematic analysis, following the approach of Braun and Clarke [13, 14]. We analyzed audio and video recordings from the workshop, focusing on both group discussions and think-aloud sessions of the prototypes. The codes generated from this analysis were organized on a digital whiteboard, sorted by feedback for each prototype and the group discussions. We then grouped these codes into thematic clusters before moving to the third phase of thematic analysis: searching for themes. This was done in a group meeting among the authors. In cases of disagreement, we engaged in discussions to resolve any discrepancies. In total, we generated 396 codes from the interactive workshop, which were clustered into 8 subclusters and three main themes.
4.3 Results and Implications
Our findings are divided into three main sections based on the identified themes: (1) current situations to exit a vehicle, (2) the information needs of participants when exiting HAVs, and (3) methods for effectively conveying this information. The first section mainly derives from the open group discussion about information needs. The second section derives from the individual interactive prototype sessions and the subsequent group discussion (see Table 1). However, before we dive into these two key areas, we first provide an overview of the participants’ current experiences when exiting a vehicle.
To correlate participants’ visual acuity with our findings, we used blue highlighting with different levels of transparency. The transparency level reflects each participant’s visual acuity: participants with lower visual acuity, like P3 and P4 (0%), had more transparent highlighting, while those with higher visual acuity, like P2 (1%), P5 (3%), and P1 (5%), had darker, less transparent highlighting. For more detailed demographic information, please refer to Appendix B.
4.3.1 Current Situation to Exit a Vehicle. All participants consistently mentioned their reliance on the assistance of others, such as taxi drivers, when exiting the vehicle. For example, P2 shared, “I rely on the taxi driver to guide me until I am familiar with my surroundings again”. P4 agreed, adding that she also asks if it is safe to open the car door before exiting. During the exit, she holds the cane with her right hand while using her left hand for support. The participants generally relied on more assistance to exit the vehicle in an unfamiliar environment, as mentioned by P1. Further, P5 emphasized the value of communicating his visual impairment in an unfamiliar vehicle. He explained that sharing information about his condition enhances his perceived safety and ensures that others are mindful of his needs. Likewise, both P5 and P3 mentioned that being in the company of acquaintances increases their perceived safety, as these people are already familiar with their needs. Many of the insights align with Brewer and Ellison [15], whose participants stated that “they asked drivers to drop them off at convenient locations that made it easier to find doors” [15, p. 3].
Recognizing the current need for assistance in exiting the vehicle to gain situation awareness of the environment is essential. The potential increase in BVIPs’ independence with the introduction of HAVs [52] highlights the need for interfaces that support BVIPs in exiting future HAVs. By exploring and understanding the specific information needs when exiting HAVs, we can contribute to the design of future HAVs that promote not only accessibility but also independence and perceived safety.
4.3.2 Information Needs When Exiting HAVs. P5 summarized that when exiting a potential HAV “it is important to find out immediately what [obstacle] it is, and then I can decide whether it is relevant for me”. Echoing this statement, P1 and P3 acknowledged that while technology can aid them in gaining situation awareness, they still feel responsible for their actions and strive to maintain their sense
of control, as already suggested by Brewer and Kameswaran [16]. However, they highlighted the critical need for direct communication in potentially dangerous situations, such as cyclists passing in front of the HAV before exiting. Once their situation awareness needs are met, participants noted no further information requirements after leaving the vehicle. P1 clarified, “As soon as I leave the car, that’s my concern, but I know which way I’m going.” Reflecting a similar sentiment, P4 and P2 mentioned their preferred reliance on traditional mobility aids, such as canes or guide dogs, immediately after exiting the HAV. Diving into the concrete information needs, we categorized the participants’ needs into five categories: (1) static obstacles, (2) dynamic obstacles, (3) the condition of the ground, (4) information needs about the final destination, and (5) the spatial orientation.
Static Obstacles. Our workshop participants mentioned multiple static objects they would need to be informed about when exiting HAVs, such as trees in front of the pedestrian path, road signs, garbage cans on the road, road bollards, or parked vehicles. Further, for P3 and P1, the information about a safe pedestrian path is crucial. Opinions on the need for information about the distances to these obstacles were mixed. P1, who has relatively high visual acuity among the group, expressed that he could independently estimate these distances and did not require explicit information. Conversely, P5, despite having similar visual acuity, preferred to have distances explicitly communicated, aligning with the other participants’ preferences.
Dynamic Obstacles. All participants agreed that information about dynamic obstacles, such as cyclists passing in front of the vehicle, is crucial. P1 specifically noted, “Very fast cyclists are frightening; they don’t take any care of me”. P3 added that knowing the direction of these dynamic obstacles is essential. She would also appreciate information about when an obstacle has passed. Further, P4 emphasized the importance of receiving updates about dynamic obstacles just before exiting the vehicle, as this timing is most critical for her situation awareness.
Terrain Perturbations. In addition to static and dynamic obsta-
cles, participants highlighted the importance of understanding the
ground conditions around the vehicle. In particular, they noted that
awareness of potentially dangerous surfaces, such as slippery ice
or wet grass, is critical given the increased risk of injury from such
conditions. However, P5 mentioned that while this information
might be important for people with total blindness, he would not
require this kind of information.
Information Needs About the Final Destination. All participants expressed the need for detailed information about their final walking destination. P3 specified the importance of knowing the approximate distance and direction to the final destination. P1 expanded on this, highlighting its particular significance in unfamiliar environments. He stressed that understanding which side of the vehicle to exit from and the route to the final destination are his highest priorities among all information needs.
Spatial Orientation. All participants emphasized acquiring spatial orientation before exiting the vehicle to improve their situation awareness. In this context, P3 pointed out that “if you become blind later in life, your spatial perception differs from someone who has been blind since birth”. This aligns with the deficiency model by Von Senden [83], arguing that visual experience is critical for accurate spatial orientation. Accordingly, the lack of visual experience slows down and reduces the accuracy of situation awareness for BVIPs, leading to less spatial orientation compared to sighted individuals [83]. More recent studies, however, indicate that BVIPs are able to gain the same spatial orientation as sighted people when enough information is provided [58, 59]. Thus, to increase the amount of information to gain spatial orientation, P4, P3, and P5 all agreed on the importance of using the ego vehicle as a reference point to contextualize other objects in the environment. Additionally, P3 suggested that information should be organized in a structured manner (e.g., arranged in a circle) to support her spatial orientation.
4.3.3 Information Conveyance. The insights on how to convey the information needs discussed previously were mainly derived from individual and group feedback on the initial low-fidelity prototypes (see Figure 2).
Active and Passive Interaction. In general, participants preferred receiving crucial information passively rather than seeking it out actively, as with the smartphone prototype. For instance, P5 noted his discomfort with actively scanning the surroundings. He also pointed out that relying solely on auditory feedback would be insufficient in scenarios where other passengers are talking within the HAV, thus expressing a preference for the tactile bars prototype in this situation.
Feedback on the auditory and tactile modalities varied among participants. P5 found the tactile bars prototype helpful for gaining a broad initial overview of the environment, though he noted it was insufficient for detailed information. He explained, “I often drive with noisy children. Tactile output would let me [actively] sort out important details like necessary precautions by myself,” supporting the findings of Di Campli San Vito et al. [24] that tactile feedback is less distracting and bothersome than other interfaces. P2 and P3 suggested integrating voice output similar to the window touch and smartphone prototypes to enhance the tactile bars prototype. This suggestion echoes the ISANA system from Li et al. [57] to enhance BVIPs’ navigation tasks. Nevertheless, there was a consensus among all participants that critical information, such as passing cyclists, should primarily be conveyed passively through voice. Additionally, P4 preferred that voice output be as concise as possible. This requirement underscores the importance of delivering clear and succinct information to avoid overwhelming the passengers with excessive details. Further, many found the smartphone prototype cumbersome and inconvenient. For example, P4 mentioned, “I found that a bit stupid with the smartphone; I don’t have enough hands for it when I get out of the car.” Most participants (4 of 5) shared this sentiment, indicating discomfort with not having their hands free.
Completeness of Information. All participants emphasized the importance of being informed when all relevant information had been conveyed to them. This requirement was well met by the tactile prototype, as P2 and P3 could physically sense when they had explored all available information with their hands. However, the smartphone prototype presented some challenges; for example, P2 criticized the absence of a physical boundary or frame to guide the smartphone’s movement. Similarly, P3 expressed difficulties in effectively scanning the environment, remarking, “I have to scan the
environment, but I’m imperfect.” These issues were also reflected in using the window touch prototype, where P4 was uncertain about whether she had touched all relevant objects on the window. To overcome these challenges, P5 suggested implementing a standardized output of information to ensure passengers are consistently aware when all relevant data has been communicated.
Variations in Different Visual Acuities. Participants’ responses to the initial prototypes varied significantly based on their visual acuity. P1, who retains 5% visual acuity, expressed discomfort with being overwhelmed by excessive information. In contrast, P2 (1% acuity) advocated for providing more information rather than less, allowing passengers the autonomy to determine which details are relevant to their needs. Furthermore, she proposed that the amount of information should be adjustable by the participants themselves, allowing for a customized experience based on individual needs and preferences.
5 PathFinder
Based on the insights of the interactive workshop, we developed
PathFinder, a multimodal interface that considers the individual
needs of BVIPs and assists them in exiting HAVs (see Figure 3).
This section will describe the design rationale and features of
PathFinder.
5.1 General Design
In general, our participants preferred combining tactile feedback with auditory cues. Consequently, tactile feedback should provide a broad overview of the surrounding environment, serving as a foundational layer of information, while verbal feedback adds detailed information. Additionally, PathFinder employs a clean and simple interface design to prevent users from becoming overwhelmed and uses high-contrast colors (black and white) to enhance visibility for those with residual vision, as suggested in prior work exploring accessible technology for BVIPs [40, 48, 61]. As described below, PathFinder consists of four main components positioned on an oval-shaped plate (30x24 cm): the initial audio announcement, the compass needle, the five obstacle buttons, and the vehicle button. All electronic components of PathFinder were controlled by an Arduino Mega microcontroller [8] and powered by an external power source. The following subsections give a brief overview of the construction of each component. In addition, to reproduce PathFinder, we have provided all the construction files, including blueprints, 3D files, and laser-cut files, in a git repository (see section: Open Science).
5.2 Initial Audio Announcement
To clearly distinguish between the HAV stopping at a traffic light [61] and stopping at the destination, we created an initial audio announcement that indicates that the HAV has parked at the destination. This announcement also provides directions and distances for the passengers to reach their final destination (e.g., a coffee shop) and informs them if pedestrians or cyclists are expected to pass by. A detailed description of the audio announcements can be found in Appendix A. This verbal audio is played automatically once the HAV stops. According to the participants’ feedback, this audio announcement was kept as concise as possible.
5.3 Five Obstacle Buttons
To represent the vehicle’s surroundings, we divided the area into five sectors corresponding to the direction of exit. These sectors are represented by five buttons that can extend following the initial audio announcement. If a static obstacle is detected in a particular sector, the corresponding button extends; otherwise, it remains retracted. For blind participants, this difference in height can be sensed by touch. For those with residual vision, the extended buttons also blink to attract attention, as Holloway et al. [46] recommended to “use blinking pins to direct attention to important areas [...]” [46, p. 12]. This approach was also supported by Ivanchev et al. [49], who discovered that blinking interactive elements were beneficial for navigation tasks among BVIPs.
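As a concrete illustration of this sector logic, the sketch below shows one way the extend/retract pattern for the five buttons could be derived from detected static obstacles. This is our own, hypothetical reconstruction: the paper does not specify the sector geometry, and the even 36° split of the exit-side arc, the bearing convention, and the function name are assumptions made purely for illustration (the actual sector layout used in the study is shown in Figure 4).

// Illustrative helper (assumed, not taken from the paper): derive which of the
// five obstacle buttons should be raised from a list of static-obstacle bearings.
// bearingDeg is measured along the exit side of the HAV: 0 = toward the front,
// 180 = toward the rear; an even 36-degree split into five sectors is assumed.
void buttonsToExtend(const float* obstacleBearingsDeg, int numObstacles, bool extend[5]) {
  for (int s = 0; s < 5; s++) extend[s] = false;   // default: all buttons retracted
  for (int i = 0; i < numObstacles; i++) {
    float b = obstacleBearingsDeg[i];
    if (b < 0.0f || b >= 180.0f) continue;         // obstacle not on the exit side
    extend[(int)(b / 36.0f)] = true;               // sector index 0..4
  }
}

A host application could call such a helper with the obstacle bearings available in the simulation and forward the resulting pattern to the microcontroller driving the buttons.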
Based on participants’ feedback, we enhanced the tactile feedback system by adding audio announcements. We made all five buttons pressable, regardless of whether they were extended or retracted. Due to the resistance in our design, pressing a button does not cause it to move; instead, it maintains its current state. A short press provides concise information, while a long press delivers detailed information, including distances to the respective obstacles. This dual approach was implemented to accommodate the participants’ differing preferences for comprehensive versus concise information. It also ensures that participants, including those with no remaining vision, have access to detailed information.
Our participants preferred a structured and organized approach that ensured they received all relevant information, giving them confidence that nothing was missed. This aligns with the findings of Brewer and Kameswaran [16], whose participants emphasized the importance of an interface that provides feedback in a clear and organized manner. Thus, by pressing each button, they can gain a complete understanding of the vehicle’s surroundings. Additionally, embedding audio announcements for sector-specific obstacles into the buttons allows participants to actively seek out information rather than passively receive it, as already highlighted by Arditi and Tian [7]. Nevertheless, crucial information, such as passing dynamic obstacles (e.g., cyclists or pedestrians), is automatically announced as they approach the HAV, as mentioned in our workshop.
The mechanism enabling the buttons to be extendable was achieved using a camshaft system powered by a servo motor for each button, which raises the button. At the tip of each button, a white LED was embedded beneath a frosted acrylic glass plate, providing the flashing.
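To make these mechanics more tangible, the following Arduino sketch is a minimal, hypothetical reconstruction of the firmware for a single obstacle button, based on the description above. The pin assignments, servo angles, the 800 ms long-press threshold, and the serial protocol to a host PC that plays the audio clips are our assumptions, not details reported in the paper.

// Minimal sketch for one obstacle button (illustrative reconstruction only).
#include <Arduino.h>
#include <Servo.h>

const int SERVO_PIN = 9;                 // servo driving this button's camshaft (assumed pin)
const int LED_PIN = 5;                   // white LED under the frosted acrylic cap (assumed pin)
const int BUTTON_PIN = 2;                // momentary switch under the button (assumed pin)
const int RETRACTED_ANGLE = 0;           // camshaft position: button lowered (assumed)
const int EXTENDED_ANGLE = 90;           // camshaft position: button raised (assumed)
const unsigned long LONG_PRESS_MS = 800; // assumed threshold for a "long" press

Servo camServo;
bool extended = false;
unsigned long pressStart = 0;

void setup() {
  Serial.begin(9600);                    // host PC listens here and plays the audio
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  camServo.attach(SERVO_PIN);
  camServo.write(RETRACTED_ANGLE);
}

void loop() {
  // Assumed host commands: 'E' = obstacle in this sector (extend), 'R' = retract.
  if (Serial.available()) {
    char cmd = Serial.read();
    if (cmd == 'E') { extended = true;  camServo.write(EXTENDED_ANGLE); }
    if (cmd == 'R') { extended = false; camServo.write(RETRACTED_ANGLE); digitalWrite(LED_PIN, LOW); }
  }

  // Blink the LED only while the button is extended (cue for residual vision).
  if (extended) {
    digitalWrite(LED_PIN, (millis() / 500) % 2 ? HIGH : LOW);
  }

  // Pressing never moves the button; it only selects which announcement to play.
  bool pressed = digitalRead(BUTTON_PIN) == LOW;
  if (pressed && pressStart == 0) {
    pressStart = millis();
  } else if (!pressed && pressStart != 0) {
    unsigned long held = millis() - pressStart;
    pressStart = 0;
    // Short press: concise sector info; long press: details including distances.
    Serial.println(held >= LONG_PRESS_MS ? "AUDIO_DETAILED" : "AUDIO_CONCISE");
  }
}

In the actual device, one such control loop would run for each of the five buttons on the Arduino Mega, with the host (or wizard) deciding per sector whether to send the extend or retract command.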
5.4 The Compass Needle
The compass needle provides directional guidance toward the final destination both during the ride and once the HAV has stopped. This concept was inspired by feedback from participants in a workshop conducted by Brewer and Kameswaran [16] and was further validated by our participants, who emphasized the importance of knowing the direction of the final destination and the side of the vehicle to exit.
We constructed the compass needle using a stepper motor that moves a timing belt and a metal sled. The 3D-printed compass needle is mounted on this sled, allowing it to traverse an oval-shaped rail covering approximately 180°. If the final destination is
to the vehicle’s left during the ride, the needle will point as far left as possible within its limited range of motion.
Figure 3: Interface design of PathFinder, featuring a compass needle, five obstacle buttons, and a vehicle button. Each obstacle button can extend to indicate an obstacle in the corresponding section, and pressing it triggers additional details via audio announcements. The extended buttons also flash to enhance visibility. The compass needle moves along a rail to continuously point toward the final destination.
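A similarly minimal Arduino sketch for the compass needle drive described in this subsection is given below. Again, this is a hypothetical reconstruction: the steps-per-degree factor of the belt drive, the pin wiring, and the serial message carrying the destination bearing are assumptions rather than specifications from the paper; only the clamping to the roughly 180° rail follows the behavior described above.

// Minimal sketch for the compass needle sled (illustrative reconstruction only).
#include <Arduino.h>
#include <Stepper.h>

const int STEPS_PER_REV = 200;        // typical 1.8-degree stepper motor (assumed)
const float STEPS_PER_DEG = 2.0f;     // belt/pulley ratio along the rail (assumed)

Stepper railStepper(STEPS_PER_REV, 8, 9, 10, 11);  // assumed driver pins
long currentSteps = 0;                // sled position; 0 = needle centered (straight ahead)

void setup() {
  Serial.begin(9600);
  railStepper.setSpeed(60);           // motor speed in RPM
}

void loop() {
  // The host sends the destination bearing relative to the vehicle heading in
  // degrees: negative values are to the left, positive values to the right.
  if (Serial.available()) {
    float bearing = Serial.parseFloat();
    // The rail only covers about 180 degrees, so the needle points as far
    // toward the destination as its range of motion allows.
    float clamped = constrain(bearing, -90.0f, 90.0f);
    long targetSteps = (long)(clamped * STEPS_PER_DEG);
    railStepper.step(targetSteps - currentSteps);   // blocking move to the target
    currentSteps = targetSteps;
  }
}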
5.5 Vehicle Button
Meinhardt et al. [61] noted that participants required a reference point, such as the ego vehicle, to contextualize all other information and gain spatial orientation. Hence, we constructed a button shaped like a vehicle as a reference point for the five obstacle buttons and the compass needle. Further, by pressing this button, the initial audio announcement (see subsection 5.2) could be repeated to ensure participants could actively seek the information [7]. Like the obstacle buttons, a detailed announcement would be played if pressed longer, including information about the terrain perturbations.
6 User Study
To investigate the capabilities of PathFinder in assisting BVIPs to exit HAVs, we conducted a user study with N=16 BVIPs. We compared this multimodal interface with an auditory baseline inspired by the ATLAS system from Brinkley et al. [18], chosen because auditory output is currently the predominant modality used in interfaces for BVIPs [20]. The auditory cues provided comprehensive environmental information equivalent to that offered by PathFinder, ensuring a fair comparison by delivering all detailed information upon the vehicle’s arrival at its intended destination. The detailed description of the audio announcements of PathFinder and the auditory baseline, including their durations, can be found in Appendix A.
To enhance the generalizability of our study, we assessed both the auditory baseline and PathFinder in two distinct scenarios: a complex urban environment and a simpler rural one. Based on findings by Meinhardt et al. [61], which indicate that visual acuity affects how BVIPs engage with a system, we included visual acuity as a factor in our analysis. This led to a three-factor design in our study: System (auditory/PathFinder) and Scenario (urban/rural) were the within-subject factors, while visual acuity was the between-subject factor.
The participants of the user study partly overlapped with the participants of the prior interactive workshop, as all workshop participants also took part in the user study (indicated in Appendix B). Their average age was M = 59.06, SD = 15.00 (nine female, seven male, and no non-binary). Their visual acuity varied from total blindness (0%) to 14% with M = 4.88, SD = 4.67. Detailed information about the visual impairment of each participant can be found in Appendix B.
6.1 Study Setup
Aiming for high realism in our study, we utilized three 55" monitors positioned side by side to simulate the surroundings of the HAV, as illustrated in Figure 1. For participants with residual vision, this setup provided a 180° view of the right side of the HAV’s surroundings. The scenarios were created using Unity [82] version 2020.3.15f2, incorporating various Unity assets (e.g., [3, 5, 67]).
Both scenarios featured simulated pedestrians and cyclists passing by the vehicle, with different complexities reflecting typical environmental variations. Hence, dynamic obstacles were more frequent in the urban scenario, averaging four pedestrians and two cyclists per minute. In contrast, the rural scenario averaged two pedestrians and one cyclist per minute. Additionally, obstacles in the urban scenario were distributed across four sections, while the rural scenario had them in two sections. The atmospheric audio also differed: the urban scenario featured bustling city sounds, including
passing vehicles and people talking, while the rural scenario had a quieter ambiance with forest sounds, such as birds singing. The vehicle’s surroundings for the urban and rural scenarios can be seen in Figure 4. Further, the corresponding audio announcements for each obstacle button can be found in Appendix A.
We reused the car door and seat from the workshop (see subsection 3.2) and positioned this setup in front of the center monitor. We ensured alignment with the virtual car window in the simulation from the passenger’s point of view. Further, we mounted PathFinder between the car seat and the window, as a plausible position for this kind of future interface. The study setup also included a camera facing the participant and a microphone to record qualitative feedback.
6.2 Procedure
For each participant, we described the study setup in the room and sought the participant’s consent to record the session. We ensured they comprehended all aspects and encouraged them to ask questions. We then read the consent form aloud, adhering to the research institute’s ethical guidelines, highlighting their right to withdraw from the study at any time. The procedure also guaranteed privacy protection, anonymity, fair compensation, and risk aversion. Acknowledging the unique needs of our participants, we went beyond standard ethical practices by offering personalized support, such as assistance with transportation, to maintain high ethical standards in accessibility research.
After obtaining their consent, the BVIPs were seated next to the car door and asked to imagine themselves as passengers in an HAV traveling to their desired destination without any need for intervention (SAE level 4 to 5) [75]. Before starting the four main conditions (i.e., urban and rural scenario, auditory baseline, and PathFinder), participants were introduced to the study procedure through an introductory suburban scenario, where the vehicle drove for 10 seconds before reaching its destination. During this scene, we explained the functionalities of PathFinder and the auditory cues while asking them to familiarize themselves with the vehicle’s surroundings. Participants were encouraged to repeat this introductory scenario as often as necessary to explore the interfaces until they felt comfortable with their features. While most participants completed the introductory scenario once, four participants requested to repeat it a second time. This introductory scenario was entirely different from the main scenarios to prevent any overlap of information and bias towards the main scenarios.
The four main conditions were then presented in a counterbalanced order. Each scenario included a 30-second ride before the HAV reached its destination, after which participants were told they had as much time as they needed to explore the vehicle’s surroundings as best as they could using either PathFinder or the auditory-only baseline. For the auditory condition, PathFinder was covered with a wooden lid to prevent interaction with the system. Participants were also allowed to repeat the audio announcements as often as they wished. The simulation concluded once participants indicated they had obtained sufficient information to exit the vehicle. Notably, participants did not physically open the car door during the simulation.
For the urban scenario, participants were informed that the HAV would take them to a coffee shop in an unfamiliar area. In the rural scenario, they were told that the HAV would transport them to a friend’s house, also in an unfamiliar area. After completing the four conditions, we collected demographic information, including age, gender, and visual acuity. We then engaged in a qualitative conversation asking the participants to compare both interfaces in relation to the insights from the workshop (see subsubsection 4.3.2 and subsubsection 4.3.3). Specifically, we asked about the clarity in conveying dynamic and static obstacles, the conveyance of terrain perturbations, the information provided about the final destination, the spatial orientation, and the overall completeness of information necessary for a comprehensive understanding of the HAV’s surroundings.
The participants were compensated for the 1.5-hour session with 18 Euros.
6.3 Measurements
After each condition, participants were asked to rate both PathFinder and the auditory baseline as experienced within the respective scenario. We utilized the System Usability Scale (SUS) [51] to assess usability. Additionally, we measured the participants’ mental demand using the NASA-TLX scale [44]. To assess perceived situation awareness [26, 27], we employed the Situation Awareness Rating Technique (SART) [79]. We also evaluated the participants’ perceived safety through a set of four 7-point semantic differential scales, ranging from -3 (anxious/agitated/unsafe/timid) to +3 (relaxed/calm/safe/confident) [29].
Finally, we used the Immersion subscale of the Technology Usage Inventory (TUI) [53] to ensure that participants were sufficiently immersed during the study. This measurement helps us determine if the study’s findings are comparable to those in a potential real-world scenario. All questionnaires were read aloud to ensure they were accessible to all participants.
6.4 Results
During our user study, we collected qualitative and quantitative results, which will be reported in the following two sections. After all conditions of the user study, participants rated their perceived immersion via the TUI [53] during the simulation as medium-high, M = 17.06, SD = 6.16 (minimum: 4, maximum: 28), indicating a reasonable approximation to potential real-world scenarios. On average, the time between the HAV stopping at the destination and participants indicating that they had sufficient information to exit the vehicle was 1 min 46 sec (SD = 59 sec) for PathFinder and 1 min 57 sec (SD = 55 sec) for the auditory baseline. Refer to Appendix C for detailed descriptive data.
6.4.1 Quantitative Data. To ensure our quantitative data met the assumptions necessary for statistical analysis, we first used the Shapiro-Wilk test [77] to check for normality. For data that followed a normal distribution, we performed a repeated measures ANOVA. When the data did not meet the normality assumption, we applied the aligned rank transformation (ART) method, which is suited for non-parametric factorial analysis of repeated measures [85].

Figure 4: Sectors for the five Obstacle Buttons of PathFinder for the Urban and Rural Scenario and the Participants’ Perspective of the Study Setup: (a) sectors of the obstacle buttons in the rural scenario, (b) sectors of the obstacle buttons in the urban scenario, (c) participant’s perspective of the rural scenario, (d) participant’s perspective of the urban scenario. See Appendix A for the concrete audio announcements.

The WHO categorizes visually impaired individuals into two groups: legally blind and visually impaired, with a visual acuity of 5% or less classified as legally blind [39]. Following this approach, we categorized participants into these two groups due to the limited data range available for participants’ visual acuity. In our analysis,
the system and scenario were treated as within-subject factors,
while BVIPs’ visual acuity was treated as the between-subject factor.
This categorization resulted in ten participants being classified as
legally blind, while the other six participants were categorized as
visually impaired. We conducted our analyses using R software
version 4.4.1.
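The analysis pipeline can be summarized as a simple decision: test normality, then run either a repeated measures ANOVA or an ART-based factorial analysis. The Python sketch below mirrors this logic for illustration only; the actual analyses were run in R 4.4.1, where the ART step would typically use the ARTool package (an assumption, as the paper does not name the package), and the between-subject factor is omitted here for brevity.

import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM

def analyze(df: pd.DataFrame, dv: str):
    """df: long-format data with columns 'participant', 'system', 'scenario', and dv."""
    # Shapiro-Wilk normality check on the dependent variable
    if shapiro(df[dv]).pvalue > 0.05:
        # Normally distributed: repeated measures ANOVA over the two within factors
        # (the between-subject factor, visual acuity, is omitted in this simplified sketch)
        return AnovaRM(df, depvar=dv, subject="participant",
                       within=["system", "scenario"]).fit()
    # Otherwise: align-and-rank the data (ART) before a factorial ANOVA;
    # in R this corresponds to ARTool::art() followed by anova().
    return "apply the aligned rank transform, then a factorial ANOVA"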
Mental Demand. The ART found a large significant main effect of System on mental demand [44] (F(1) = 11.03, p = 0.005; η² = 0.44, 95% CI [0.11, 1.00]). Hence, PathFinder (M = 7.75, SD = 5.16) yielded a significantly lower mental demand than the auditory baseline (M = 11.09, SD = 5.63); see Figure 5a. Further, the ART found a trend towards significance, suggesting an interaction between Scenario and Visual Acuity (p = 0.063). While not significant, the interaction’s effect size of η² = 0.23 is defined as large [22].
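As a check on the reported effect size, partial eta squared can be recovered from the F value and its degrees of freedom. Assuming an error term with $df_2 = 14$ (16 participants in two between-subject groups; the denominator degrees of freedom are not stated explicitly in the text), the reported value is reproduced by

$\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2} = \frac{11.03 \cdot 1}{11.03 \cdot 1 + 14} \approx 0.44.$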
Usability. The ART did not identify any significant main or interaction effects on usability [51]. The usability ratings for PathFinder (M = 64.84, SD = 16.30) were similar to those for the auditory baseline (M = 62.42, SD = 14.07).
Situation Awareness. An ANOVA did not find significant differences in situation awareness [79]. Yet, situation awareness for both PathFinder (M = 19.09, SD = 5.51) and the auditory baseline (M = 17.91, SD = 6.25) was rated medium to high on a scale from -20 to 40.

Figure 5: Significant effects of the quantitative data for mental demand and perceived safety: (a) significant main effect on mental demand [44] for System; (b) significant interaction effect on perceived safety [29] between the rural and urban scenarios.
Perceived Safety. The ART found a large significant interaction effect between System and Scenario (F(1) = 6.47, p = 0.023; η² = 0.32, 95% CI [0.03, 1.00]). Perceived safety ratings for PathFinder were consistent across the urban (M = 1.44, SD = 1.86) and rural scenarios (M = 1.50, SD = 2.00). In contrast, the ratings for the auditory baseline diverged between the urban (M = 1.25, SD = 1.25) and rural (M = 1.61, SD = 2.13) scenarios. No main effects were found for System, Scenario, or Visual Acuity. Refer to Figure 5b for the plot.
6.4.2 Qualitative Feedback. After completing all four conditions, we conducted brief interviews with participants, focusing on their experiences with both interfaces in relation to the findings from the previous workshop (see subsubsection 4.3.2 and subsubsection 4.3.3). Unlike the workshop, we did not perform a thematic analysis of this feedback. Instead, we present anecdotal feedback and participants’ opinions, organized directly by the questions asked during the interviews (see subsection 6.2) rather than by themes resulting from a formal thematic analysis.
Spatial Orientation. The feedback on spatial orientation varied across participants, but the majority were satisfied with the information PathFinder conveyed regarding spatial orientation. P1 found PathFinder particularly compelling, stating, “I was impressed because I could get an overview by pressing different symbols”. This opinion was echoed by P5, who also preferred the multimodal interface, noting that it allowed her to “create a mental map”, whereas listening to the auditory feedback led to higher mental demand. However, P3 expressed some confusion, stating that “using ‘left’ and ‘right’ instead of ‘in the direction of travel’ was disorienting”. P15 indicated that he was able to achieve sufficient spatial orientation through the auditory baseline, whereas the tactile components of PathFinder proved particularly distracting, necessitating concentration on the individual elements, which in turn impeded his ability to listen to the audio announcements carefully.
Navigation to the Final Destination. Four participants stated that the information on how to navigate to the final destination was very similar between both interfaces. In particular, P8 mentioned that “the information was the same but the method was different”. P2 appreciated the tactile compass needle of PathFinder for navigating to the final destination, stating, “The triangle [compass needle] in the multimodal interface was better for me. I knew where I was and where I needed to go”. Conversely, P15 stated that the compass needle was not needed, as the initial audio announcement was already sufficient to navigate to the final destination. P5, however, found navigation through touch more intuitive than listening to the auditory baseline, but appreciated the combination of both interfaces, saying, “Both should be combined, but I prefer touch for navigation”.
Terrain Perturbations. Most participants felt that the infor-
mation about the terrain perturbations was clear and sufficient
across both interfaces. For instance, P4 mentioned that “it was clear
whether the ground was paved or not”. However, P15 criticized that
with PathFinder, he had to search for terrain information, whereas
the auditory baseline provided it automatically. Further, P11 noted
that some information was excessive, such as the grass on the
sidewalk.
Dynamic and Static Obstacles. Participants generally agreed
that the identification of dynamic and static objects was essential
but differed in their preferred modality of receiving this information.
P1 preferred PathFinder for recalling details of the static obstacles.
P2 and P3 emphasized the importance of combining both interfaces,
with P2 suggesting, “A pin that rises when a pedestrian is present
would be helpful.” This suggestion is particularly notable, as the
workshop’s findings indicated that dynamic obstacle information
was preferred to be conveyed verbally for quicker understanding.
P8 and P10 highlighted the challenge of predicting the presence
of dynamic objects like pedestrians or cyclists. They noted the
absence of information indicating when these obstacles had passed,
which would signal that it is safe to exit the HAV. However, P16
appreciated that both interfaces announced the presence of these
obstacles, stating that knowing they are nearby is more important
to her than precisely when they pass by the vehicle. P7 also voiced
concern about the lack of continuous updates on dynamic objects,
expressing a desire for a system that “always informs me when the
situation changes.”
Completeness of Information. Overall, participants felt that
both systems provided comprehensive information, though seven
participants, including P6, P8, and P9, mentioned that the auditory
information was overwhelming, with P9 specically stating, “It
says too much, and I have to concentrate hard”. P7 criticized that
both interfaces only covered the area immediately around the HAV,
leaving users without further guidance once they moved beyond
a few meters. However, this concern contrasts with insights from
the prior workshop, where participants preferred using traditional
mobility aids, such as canes or guide dogs, after their immediate
situation awareness needs were satisfied (see subsubsection 4.3.2).
7 Discussion
This research was driven by the need for BVIPs to gain assistance
when exiting HAVs in unfamiliar environments [15]. In an interac-
tive workshop (N=5), we found that BVIPs currently rely on acquain-
tances to gain situation awareness of the vehicle’s surroundings.
However, with the introduction of HAVs, BVIPs may gain more
independence [52], but they will likely face such situations alone without
human assistance. To investigate the information needs of BVIPs
when exiting HAVs, we presented three low-fidelity prototypes
to the participants. Feedback from the workshop indicated a pref-
erence for a multimodal approach to convey information about
the environment in an organized and structured manner. Based on
this feedback, we developed PathFinder, which integrates visual,
tactile, and auditory cues to assist BVIPs. Using the Participatory
Design approach [65], we involved BVIPs from the outset, ensuring that PathFinder’s final design met the diverse needs of users with varying degrees of visual impairment. This approach aligns with the recommendations of Bradley and Dunlop [12] and Albouys-Perrois et al. [4], who emphasized the importance of designing audio and tactile cues based on specific user needs and preferences for naviga-
tional tools for BVIPs. We subsequently conducted a three-factorial
within-between-subject user study (N=16), simulating an HAV ride.
Our study assessed PathFinder against an auditory-only baseline
in both complex urban and simpler rural scenarios. PathFinder
yielded a significantly lower mental demand than the auditory
baseline and maintained high perceived safety in both scenarios,
while the auditory baseline led to lower perceived safety in urban
scenarios compared to rural ones.
7.1 Multimodal Approaches to Convey
Environmental Information
Our findings indicate that the multimodal PathFinder interface is effective in conveying information about the HAV’s surroundings, enabling participants to create “mental maps” and gain situation awareness. This finding is important because developing accurate cognitive maps of the transportation environment is essential for BVIP independence and mobility [34, 35, 71] and aligns with broader evidence supporting the effectiveness of multimodal interfaces; for instance, Papadopoulos et al. [72] highlighted that audio-tactile maps enhance BVIPs’ spatial orientation, especially in unfamiliar environments. Further, the reduced mental demand found for PathFinder supports the multiple-resource theory [84], which posits that cross-modal distribution of information reduces competition for cognitive resources, thereby reducing mental demand. This aligns with participants’ statements appreciating the combination of modalities, especially the auditory and tactile
ones. The broad implications of our multimodal approach to improv-
ing mental mapping and reducing mental demands can be realized
in efforts to increase independent mobility for BVIP passengers.
Just as our workshop participants reported that they often rely on
drivers or acquaintances to help them understand the environment
and exit safely, it stands to reason that interfaces like PathFinder
can help improve safety and independent travel for BVIPs in future
HAVs.
Interestingly, while our qualitative results indicate a clear preference for PathFinder over the auditory baseline concerning spatial orientation, the quantitative analysis revealed no significant differences in situation awareness between the two systems. This discrepancy may arise from participants’ challenges in accurately self-reporting their situation awareness, as suggested by Endsley et al. [28], or because both systems convey the same information, resulting in similar situation awareness ratings. The latter aligns with the fact that both systems received medium to high ratings for situation awareness, suggesting that they are generally effective in this regard. These findings, however, diverge from those of Meinhardt et al. [61] and Md. Yusof et al. [60], who reported low ratings for situation awareness with their tactile interfaces. Yet, it is important to note that their studies focused on conveying traffic information during the HAV ride, whereas our study centered on the HAV’s surroundings when exiting the vehicle. This difference in journey phase is noteworthy, as understanding the HAV’s surroundings when exiting into an unfamiliar area is likely more critical for BVIPs than being immediately aware of the traffic situation during the ride. While situation awareness during the journey is also important [35, 61], it becomes essential when navigating a new environment after exiting the vehicle. This difference in context might explain the variation in situation awareness ratings across different studies.
7.2 Inconsistency Across Scenarios of Diverse Complexity
Our study revealed a significant interaction effect between the sce-
narios and the two systems on perceived safety (see Figure 5b).
While we expected that the more complex urban scenario would
lead to differing ratings, PathFinder consistently maintained high
perceived safety across both urban and rural settings. In contrast,
the auditory baseline showed divergence, with lower perceived
safety in the urban scenario compared to the rural one. This in-
dicates that while the auditory baseline may meet BVIPs’ safety
needs in simpler environments, it becomes less reliable in more com-
plex settings. These ndings underscore the limitations of single-
modality approaches [
35
,
89
] and suggest that multimodal systems
like PathFinder oer greater robustness across varying levels of en-
vironmental complexity. Therefore, these results support the recom-
mendations of Kuriakose et al
. [54]
, who highlight that multimodal
cues enable BVIPs to adjust their information intake according to
situation demands. This robustness is essential for ensuring safety
in demanding scenarios where situation awareness is critical. How-
ever, it is important to recognize that our study was limited to only
two scenarios. While these scenarios were designed to reect typi-
cal environments BVIPs might encounter, they do not capture the
full range of possible conditions that could affect information con-
veyance. For instance, extreme weather conditions such as heavy
rain, snow, or fog could introduce new challenges that neither the
multimodal nor the auditory-only system might handle effectively.
Further, we used simple scenarios, implying that after the initial
obstacles, the path to the final destination is straight. While this
might not reflect real-world scenarios, we based this decision on the
participants’ statement that after exiting the vehicle, they would
rely on traditional mobility aids such as canes or guide dogs (see
subsubsection 4.3.2). This decision reflects the interface’s primary
purpose: providing essential initial information conveyance before
users switch to their customary navigation methods. However, in
scenarios with no obstacles, the auditory-only system might suffice
and even be preferred due to its simplicity.
7.3 One System to Rule the Entire Journey
This research contributes to the growing body of work on exploring
accessible interfaces for each part of a journey using HAVs, such
as finding the vehicle [37] or conveying information during the ride [35, 60, 61]. We extend this work by focusing specifically on
the crucial exiting phase. Previous studies [31, 40] have highlighted BVIP frustration with using multiple apps and systems for different navigation tasks. To address this problem, Giudice and Legge [40] suggested that integrating systems could enhance effectiveness across different scenarios.
Therefore, it seems desirable to combine the PathFinder system with other tactile or multimodal systems (e.g., [35, 61]) to ensure comprehensive accessibility throughout the entire journey. However, integrating multiple functionalities into a single system requires careful consideration of the form factor to maintain ease of use. For example, simplifying PathFinder by removing the compass needle, which participants considered unnecessary, can help reduce its size. Additionally, leveraging existing devices like smartphones [31, 36] and tablets [18] can extend the system’s capabilities
without increasing its size. For instance, smartphones could provide
additional vibrotactile feedback on the HAV’s location on a map [36].
Our workshop findings indicate that BVIPs prefer systems that de-
liver essential information upfront, allowing them to keep their
hands free for tasks such as using canes or guide dogs. Therefore,
expanding PathFinder with the tactile elements of OnBoard [61],
like the rotating vehicle representation and the reason-for-stopping
button, could enhance the user’s understanding of the ongoing
traffic during the ride.
7.4 Practical Implications and Future Work
While the quantitative data from our user study shows significant effects on mental demand and perceived safety, there are no significant differences between PathFinder and the auditory-only baseline regarding usability and situation awareness. This suggests that an auditory-only solution may be sufficient for enhancing the exiting phase for BVIPs, potentially reducing the cost and complexity of adding a tactile modality to the system. However, qualitative
feedback from participants highlights that for optimal effectiveness,
information should be conveyed through all available modalities.
For example, PathFinder communicated dynamic obstacles only
via audio. Yet, to enhance redundancy across modalities, these ob-
stacles could also be conveyed using a tactile approach, such as bars
that rise to indicate the presence of cyclists or pedestrians. This
would further improve the system’s robustness, ensuring that critical information is reliably understood by all potential passengers,
regardless of their sensory preferences or extent, etiology, or onset
of visual impairment. Furthermore, the significantly reduced men-
tal demand observed with PathFinder, along with its consistently
high perceived safety in complex and simple scenarios, highlights
its potential as a valuable add-on feature for vehicle manufactur-
ers committed to accessibility. Additionally, future research should
look into a more seamless integration of the tactile elements of
PathFinder into vehicles, such as using textile buttons and sliders
integrated directly into the vehicle’s fabrics [68, 76] or the armrest
close to the door handle. The other modalities of PathFinder could
also provide more detailed information, such as whether a dynamic
obstacle is moving fast or slow, via audio or blinking the obstacle
buttons in different colors to distinguish between different types of
obstacles for those with residual vision.
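As a rough illustration of the redundancy argument above, the sketch below dispatches a detected dynamic obstacle to audio, tactile, and visual channels at once. The channel interfaces and the DynamicObstacle type are hypothetical and not part of PathFinder’s implementation; PathFinder itself announced dynamic obstacles via audio only.

from dataclasses import dataclass

@dataclass
class DynamicObstacle:
    kind: str      # e.g., "cyclist" or "pedestrian"
    sector: int    # which of the five surrounding sectors (1-5)
    fast: bool     # coarse speed estimate

def announce(obstacle: DynamicObstacle, audio, tactile, visual):
    """Send the same event redundantly to all three output channels."""
    speed = "fast" if obstacle.fast else "slow"
    audio.say(f"{obstacle.kind} approaching, {speed}, sector {obstacle.sector}")
    tactile.raise_bar(sector=obstacle.sector)   # e.g., a rising tactile bar or pin
    visual.blink(sector=obstacle.sector)        # for passengers with residual vision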
Finally, conducting real-world testing in actual vehicle environ-
ments would be essential to validate the system’s eectiveness
outside of controlled settings, ensuring that PathFinder meets the
practical needs of BVIPs in everyday use.
7.5 Limitations
Our interactive workshop included only five BVIPs. While small sample sizes can still provide valuable insights [80], there is a potential for response bias [64]. Thus, it is important to recognize that
the views expressed by these participants may not fully represent
the broader target group. Additionally, the design of PathFinder
was partly influenced by the subjective opinions of these five participants. While related work informed the development of both
the auditory interface and PathFinder, these interfaces should be
considered with caution. Another limitation is the lack of external
validity in our study, as participants did not physically exit an ac-
tual vehicle, which may affect the applicability of our findings to
real-world scenarios. The actual process of exiting can introduce
additional challenges, such as maintaining orientation, managing
personal belongings or guide dogs, and navigating immediate haz-
ards outside the vehicle. Additionally, testing the interfaces in a
controlled environment rather than a real vehicle may have reduced
the perceived risk and mental demand associated with exiting in
real traffic conditions. These factors might have influenced participants’ feedback and limited the generalizability of our results. Despite
this, our study setup achieved a high level of perceived immer-
sion, suggesting that the simulated experimental conditions were
well-designed and effective.
Additionally, due to the specialized nature of our target group,
the user study was conducted with a relatively small sample size
of N=16 for quantitative analysis. This sample size may limit the
findings’ applicability to a wider population. Moreover, fewer participants increase the risk of Type II errors, where true effects may not reach statistical significance. Therefore, it is important not only to consider statistical significance but also to examine the effect sizes. For example, although not statistically significant, the interaction between Scenario and Visual Acuity on mental demand showed a large effect size. This suggests that there could be meaningful differences that warrant further investigation. Further, it is
worth noting that while we attempted to provide a more nuanced
perspective based on visual impairment (by highlighting qualita-
tive responses with acuity information from Appendix C) than the
typical approach of collapsing BVIPs into a single group [30], a
larger sample size would have also enabled comparisons for the
quantitative data. Additionally, whether participants were congeni-
tally blind or acquired their impairment later in life could influence their specific information needs and should be explored in future
studies.
It is also crucial to account for potential novelty effects [81] in our user study, as participants experienced both interfaces for the first time. Hence, we anticipate that, as users become familiar with the interfaces over time [62], these novelty effects may diminish. Specifically, the auditory baseline featured longer audio
announcements compared to PathFinder, which could have bi-
ased participants towards preferring PathFinder. However, the
similar time required for participants to gather sufficient informa-
tion to exit the vehicle (see Appendix C) suggests that the length
of the auditory announcements did not impact the overall task
performance.
Additionally, we were unable to counterbalance the between-
factor of visual acuity, meaning that participants with similar visual
acuity levels might have experienced the same order of conditions.
This lack of counterbalancing could introduce slight learning ef-
fects, where participants become more accustomed to the tasks,
potentially influencing the study’s results.
8 Conclusion
This paper introduces PathFinder, a multimodal interface designed
to assist BVIPs in safely exiting HAVs by providing information
about the vehicle’s surroundings. PathFinder integrates visual,
tactile, and auditory cues, making it accessible to users regardless
of their visual impairment.
We conducted an interactive workshop with N=5 visually im-
paired participants to identify their information needs for safely
exiting a vehicle. The workshop revealed that BVIPs currently rely
heavily on acquaintances for assistance. However, as HAVs offer
greater mobility independence, BVIPs may increasingly face these
situations without human assistance. During the workshop, we pre-
sented three low-fidelity prototypes (a smartphone, a window touch prototype, and tactile bars), each employing different modalities
and interaction strategies to assist with vehicle exit. Participants
expressed a strong preference for a multimodal interface, favor-
ing tactile cues as a foundation, supplemented by auditory cues
for critical information, such as the presence of dynamic obstacles
like cyclists. Based on these insights, we developed PathFinder,
a multimodal interface tailored to the unique needs of BVIPs. The
system includes a compass needle that points to the final destination, five extendable, flashing obstacle buttons that represent different sections of the vehicle’s surroundings and provide audio
announcements for additional information, and a vehicle button
that serves as a reference point.
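To give a sense of what driving the compass needle involves, the sketch below computes the angle the needle should point to: the great-circle bearing from the vehicle to the destination, expressed relative to the vehicle’s heading. This is the standard bearing formula used here purely for illustration; the paper does not specify how the needle’s angle is actually derived or actuated.

import math

def needle_angle(vehicle_lat, vehicle_lon, dest_lat, dest_lon, heading_deg):
    """Return the needle angle in degrees, clockwise from the vehicle's heading."""
    lat1, lat2 = math.radians(vehicle_lat), math.radians(dest_lat)
    dlon = math.radians(dest_lon - vehicle_lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(x, y)) % 360   # bearing relative to true north
    return (bearing - heading_deg) % 360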
In a subsequent three-factorial, within-between-subject user
study (N=16), we evaluated PathFinder against an auditory-only
baseline in both complex urban and simpler rural scenarios. The
results showed that PathFinder significantly reduced mental de-
mand compared to the baseline and consistently maintained high
perceived safety in both scenarios. In contrast, the auditory baseline
resulted in lower perceived safety in the urban scenario compared
to the rural one. Further, the qualitative feedback indicated a clear
preference for multimodal information conveyance to enhance
spatial orientation and situation awareness. However, to increase
robustness and ensure that critical information is reliably under-
stood by all passengers, regardless of their sensory preferences
or visual impairments, it is recommended that all information be
conveyed across all modalities.
Open Science
The source code and construction files, including blueprints, 3D-printing files, and laser-cutting files for both the three initial low-fidelity prototypes and PathFinder, have been made publicly avail-
able. These resources can be accessed at the following link:
https://github.com/luca-maxim/light_my_way.
Acknowledgments
This research was funded by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) through the project “Non-
Visual Interfaces to Enable the Accessibility of Highly Automated Vehi-
cles for People with Vision Impairments” (Project number: 536409562).
The first author would like to thank Max Rädler for the support
during the interactive workshop and Gertrud Vaas for being an
esteemed contact person throughout this research journey.
We also wish to acknowledge the Scientific Workshop of Ulm
University for their assistance in constructing PathFinder and the
low-delity prototypes. Special thanks go to Alex Vogel, Wolfgang
Rapp, and Manfred Kley for their practical support and dedication.
We also would like to thank Johannes Schöning for his valuable
mental support during CHITogether 2024 in St. Gallen.
References
[1]
Carl Halladay Abraham, Bert Boadi-Kusi, Enyam Komla Amewuho Morny, and
Prince Agyekum. 2022. Smartphone usage among people living with severe
visual impairment and blindness. Assistive Technology 34, 5 (Sept. 2022), 611–618.
doi:10.1080/10400435.2021.1907485
[2]
Peter Ackland, Serge Resnikoff, and Rupert Bourne. 2017. World blindness and vi-
sual impairment: despite many successes, the problem is growing. Community eye
health 30, 100 (2017), 71. http://www.ncbi.nlm.nih.gov/pmc/articles/pmc5820628/
[3]
AGLOBEX. 2023. Urban Traffic System. AGLOBEX. https://assetstore.unity.com/
packages/templates/systems/urban-traffic-system-89133
[4]
Jérémy Albouys-Perrois, Jérémy Laviole, Carine Briant, and Anke M. Brock. 2018.
Towards a Multisensory Augmented Reality Map for Blind and Low Vision People:
a Participatory Design Approach. In Proceedings of the 2018 CHI Conference
on Human Factors in Computing Systems. ACM, Montreal QC Canada, 1–14.
doi:10.1145/3173574.3174203
[5]
ALP. 2024. Nature Package - Forest Environment. ALP. https://assetstore.unity.
com/packages/3d/vegetation/nature-package- forest-environment- 165645
[6]
Amir Amedi, Lotfi B. Merabet, Felix Bermpohl, and Alvaro Pascual-Leone. 2005.
The Occipital Cortex in the Blind: Lessons About Plasticity and Vision. Current
Directions in Psychological Science 14, 6 (Dec. 2005), 306–311. doi:10.1111/j.0963-
7214.2005.00387.x
[7]
Aries Arditi and YingLi Tian. 2013. User interface preferences in the design of a
camera-based navigation and wayfinding aid. Journal of Visual Impairment &
Blindness 107, 2 (2013), 118–129. doi:10.1177/0145482X1310700205
[8] Arduino. 2019. Arduino. Arduino. https://www.arduino.cc/
[9]
Be My Eyes. 2024. Be My Eyes - See the world together. Retrieved Mar 15, 2024
from https://www.bemyeyes.com/
[10]
Roger Bennett, Rohini Vijaygopal, and Rita Kottasz. 2020. Willingness of peo-
ple who are blind to accept autonomous vehicles: An empirical investigation.
Transportation Research Part F: Traffic Psychology and Behaviour 69 (2020), 13–27.
doi:10.1016/j.trf.2019.12.012
[11]
Rupert R. A. Bourne, Jaimie Adelson, Seth Flaxman, Paul Briant, Michele Bottone,
Theo Vos, Kovin Naidoo, Tasanee Braithwaite, Maria Cicinelli, Jost Jonas, Hans
Limburg, Serge Resnikoff, Alex Silvester, Vinay Nangia, and Hugh R. Taylor.
2020. Global Prevalence of Blindness and Distance and Near Vision Impairment
in 2020: progress towards the Vision 2020 targets and what the future holds.
Investigative Ophthalmology & Visual Science 61, 7 (2020), 2317. doi:10.1016/S2214-
109X(20)30425-3
[12]
Nicholas A. Bradley and Mark D. Dunlop. 2005. An Experimental Investigation
into Wayfinding Directions for Visually Impaired People. Personal and Ubiquitous
Computing 9, 6 (Nov. 2005), 395–403. doi:10.1007/s00779-005- 0350-y
[13]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psy-
chology. Qualitative Research in Psychology 3, 2 (2006), 77–101. doi:10.1191/
1478088706qp063oa
[14]
Virginia Braun and Victoria Clarke. 2021. One size fits all? What counts as quality
practice in (reflexive) thematic analysis? Qualitative Research in Psychology 18, 3
(2021), 328–352. doi:10.1080/14780887.2020.1769238
[15]
Robin Brewer and Nicole Ellison. 2020. Supporting people with vision impairments
in automated vehicles: Challenge and opportunities. Technical Report. University
of Michigan, Ann Arbor, Transportation Research Institute. https://rosap.ntl.bts.
gov/view/dot/56391
[16]
Robin N. Brewer and Vaishnav Kameswaran. 2018. Understanding the Power
of Control in Autonomous Vehicles for People with Vision Impairment. In Pro-
ceedings of the 20th International ACM SIGACCESS Conference on Computers
and Accessibility (Galway, Ireland) (ASSETS ’18). Association for Computing
Machinery, New York, NY, USA, 185–197. doi:10.1145/3234695.3236347
[17]
Julian Brinkley, Earl W. Huff, Briana Posadas, Julia Woodward, Shaundra B.
Daily, and Juan E. Gilbert. 2020. Exploring the Needs, Preferences, and Concerns
of Persons with Visual Impairments Regarding Autonomous Vehicles. ACM
Transactions on Accessible Computing 13, 1 (2020), 1–34. doi:10.1145/3372280
[18]
Julian Brinkley, Brianna Posadas, Imani Sherman, Shaundra B. Daily, and Juan E.
Gilbert. 2019. An Open Road Evaluation of a Self-Driving Vehicle Human–
Machine Interface Designed for Visually Impaired Users. International Journal
of Human–Computer Interaction 35, 11 (2019), 1018–1032. doi:10.1080/10447318.
2018.1561787
[19]
Julian Brinkley, Brianna Posadas, Julia Woodward, and Juan E. Gilbert. 2017. Opin-
ions and Preferences of Blind and Low Vision Consumers Regarding Self-Driving
Vehicles. In Proceedings of the 19th International ACM SIGACCESS Conference on
Computers and Accessibility, Amy Hurst, Leah Findlater, and Meredith Ringel
Morris (Eds.). ACM, New York, NY, USA, 290–299. doi:10.1145/3132525.3132532
[20]
Piyush Chanana, Rohan Paul, M. Balakrishnan, and Pvm Rao. 2017. Assistive tech-
nology solutions for aiding travel of pedestrians with visual impairment. Journal
of rehabilitation and assistive technologies engineering 4 (2017), 2055668317725993.
doi:10.1177/2055668317725993
[21]
Francesco Chiossi, Steeven Villa, Melanie Hauser, Robin Welsch, and Lewis
Chuang. 2022. Design of On-body Tactile Displays to Enhance Situation Aware-
ness in Automated Vehicles. In 2022 IEEE 9th International Conference on Compu-
tational Intelligence and Virtual Environments for Measurement Systems and Appli-
cations (CIVEMSA). IEEE, New York, NY, USA, 1–6. doi:10.1109/CIVEMSA53371.
2022.9853690
[22]
Jacob Cohen. 1988. Statistical Power Analysis for the Behavioral Sciences (0 ed.).
Routledge. doi:10.4324/9780203771587
[23]
Mark Colley, Benjamin Eder, Jan Ole Rixen, and Enrico Rukzio. 2021. Effects
of Semantic Segmentation Visualization on Trust, Situation Awareness, and
Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI
Conference on Human Factors in Computing Systems, Yoshifumi Kitamura, Aaron
Quigley, Katherine Isbister, Takeo Igarashi, Pernille Bjørn, and Steven Drucker
(Eds.). ACM, New York, NY, USA, 1–11. doi:10.1145/3411764.3445351
[24]
Patrizia Di Campli San Vito, Gözel Shakeri, Stephen Brewster, Frank Pollick,
Edward Brown, Lee Skrypchuk, and Alexandros Mouzakitis. 2019. Haptic Nav-
igation Cues on the Steering Wheel. In Proceedings of the 2019 CHI Conference
on Human Factors in Computing Systems, Stephen Brewster, Geraldine Fitz-
patrick, Anna Cox, and Vassilis Kostakos (Eds.). ACM, New York, NY, USA,
1–11. doi:10.1145/3290605.3300440
[25]
Julie Ducasse, Anke M. Brock, and Christophe Jouffrais. 2018. Accessible Interac-
tive Maps for Visually Impaired Users. In Mobility of Visually Impaired People,
Edwige Pissaloux and Ramiro Velazquez (Eds.). Springer International Publishing,
Cham, 537–584. doi:10.1007/978- 3-319-54446-5_17
[26]
Mica R. Endsley. 1995. Toward a Theory of Situation Awareness in Dynamic
Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society
37, 1 (1995). https://doi.org/10.1518/001872095779049543.
[27]
Mica R Endsley, Stephen J Selcon, Thomas D Hardiman, and Darryl G Croft.
1998. A comparative analysis of SAGAT and SART for evaluations of situation
awareness. In Proceedings of the human factors and ergonomics society annual
meeting, Vol.42. SAGE Publications Sage CA: Los Angeles, CA, SAGE Publications,
Los Angeles, CA, USA, 82–86.
[28]
Mica R. Endsley, Stephen J. Selcon, Thomas D. Hardiman, and Darryl G. Croft.
1998. A Comparative Analysis of Sagat and Sart for Evaluations of Situation
Awareness. Proceedings of the Human Factors and Ergonomics Society Annual
Meeting 42, 1 (1998), 82–86. doi:10.1177/154193129804200119
[29]
Stefanie M. Faas, Andrea C. Kao, and Martin Baumann. 2020. A Longitudinal
Video Study on Communicating Status and Intent for Self-Driving Vehicle –
Pedestrian Interaction. In Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for
Computing Machinery, New York, NY, USA, 1–14. doi:10.1145/3313831.3376484
[30]
Paul D.S. Fink, Maher Alsamsam, Justin R. Brown, Henry D. Kindler, and
Nicholas A. Giudice. 2023. Give us something to chauffeur it: Exploring user
needs in traditional and fully autonomous ridesharing for people who are blind or
visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour
98 (2023), 91–103. doi:10.1016/j.trf.2023.09.004
[31]
Paul D.S. Fink, Stacy A. Doore, Xue (Shelley) Lin, Matthew Maring, Pu Zhao,
Aubree Nygaard, Grant Beals, Richard R. Corey, Raymond J. Perry, Katherine
Freund, Velin Dimitrov, and Nicholas A. Giudice. 2023. The Autonomous Vehicle
Assistant (AVA): Emerging Technology Design Supporting Blind and Visually
Impaired Travelers in Autonomous Transportation. International Journal of
Human-Computer Studies (2023), 103125. doi:10.1016/j.ijhcs.2023.103125
[32]
Paul D. S. Fink. 2023. Accessible Autonomy: Exploring Inclusive Autonomous Vehicle
Design and Interaction for People Who Are Blind and Visually Impaired. Ph. D.
Dissertation. University of Maine. https://digitalcommons.library.umaine.edu/
etd/3817
[33]
Paul D. S. Fink, Anas Abou Allaban, Omoruyi E. Atekha, Raymond J. Perry,
Emily S. Sumner, Richard R. Corey, Velin Dimitrov, and Nicholas A. Giudice.
2023. Expanded Situational Awareness Without Vision. In Proceedings of the
2023 ACM/IEEE International Conference on Human-Robot Interaction, Ginevra
Castellano, Laurel Riek, Maya Cakmak, and Iolanda Leite (Eds.). ACM, New York,
NY, USA, 54–62. doi:10.1145/3568162.3576975
[34]
Paul D. S. Fink, Anas Abou Allaban, Omoruyi E. Atekha, Raymond J. Perry,
Emily S. Sumner, Richard R. Corey, Velin Dimitrov, and Nicholas A. Giudice.
2023. Expanded Situational Awareness Without Vision: A Novel Haptic Interface
for Use in Fully Autonomous Vehicles. In Proceedings of the 2023 ACM/IEEE
International Conference on Human-Robot Interaction. ACM, Stockholm Sweden,
54–62. doi:10.1145/3568162.3576975
[35]
Paul D. S. Fink, Velin Dimitrov, Hiroshi Yasuda, Tiany L. Chen, Richard R. Corey,
Nicholas A. Giudice, and Emily S. Sumner. 2023. Autonomous is Not Enough:
Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People
with Visual Impairments. In Proceedings of the 2023 CHI Conference on Human
Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for
Computing Machinery, New York, NY, USA, Article 74, 13 pages. doi:10.1145/
3544548.3580762
[36]
Paul D. S. Fink, H. Milne, A. Caccese, M. Alsamsam, J. Loranger, Mark Colley,
and Nicholas A Giudice. 2024. Accessible Maps for the Future of Inclusive
Ridesharing. In 16th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications (AutomotiveUI ’24). ACM, New York, NY, USA.
doi:10.1145/3640792.3675736
[37]
Paul D. S. Fink, Emily Sarah Sumner, and Velin Dimitrov. 2024. Multisensory
gestural-audio interface to promote situational awareness for improved au-
tonomous vehicle control. https://patents.google.com/patent/US20240217539A1/
en
[38]
Finward Studio. 2024. Suburb Neighborhood House Pack (Modular). Finward
Studio. https://assetstore.unity.com/packages/3d/environments/urban/suburb-
neighborhood-house- pack- modular-72712
[39]
WHO Programme for the Prevention of Blindness and Deafness. 2003. Con-
sultation on development of standards for characterization of vision loss and
visual functioning : Geneva, 4-5 September 2003. WHO/PBL/03.91 pages.
https://apps.who.int/iris/handle/10665/68601
[40]
Nicholas A Giudice and Gordon E Legge. 2008. Blind navigation and the role of
technology. The engineering handbook of smart technology for aging, disability,
and independence (2008), 479–500. doi:10.1002/9780470379424.ch25
[41]
Christiane Glatz, Stas S. Krupenia, Heinrich H. Bülthoff, and Lewis L. Chuang.
2018. Use the Right Sound for the Right Job: Verbal Commands and Auditory
Icons for a Task-Management System Favor Different Information Processes
in the Brain. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing
Machinery, New York, NY, USA, 1–13. doi:10.1145/3173574.3174046
[42]
GTB - Italy. 2018. Feel the View. Retrieved Jul 19, 2024 from https://sites.wpp.
com/wppedcream/2018/healthcare/consumer-digital/feel- the-view
[43]
Anhong Guo, Saige McVea, Xu Wang, Patrick Clary, Ken Goldman, Yang Li, Yu
Zhong, and Jeffrey P. Bigham. 2018. Investigating Cursor-based Interactions to
Support Non-Visual Exploration in the Real World. In Proceedings of the 20th
International ACM SIGACCESS Conference on Computers and Accessibility. ACM,
Galway Ireland, 3–14. doi:10.1145/3234695.3236339
[44]
Sandra G Hart and Lowell E Staveland. 1988. Development of NASA-TLX (Task
Load Index): Results of empirical and theoretical research. In Advances in psychol-
ogy. Vol. 52. Elsevier, Amsterdam, The Netherlands, 139–183. doi:10.1016/S0166-
4115(08)62386-9
[45]
Kevin Anthony Hoff and Masooda Bashir. 2015. Trust in automation: integrating
empirical evidence on factors that influence trust. Human factors 57, 3 (2015),
407–434. doi:10.1177/0018720814547570
[46]
Leona Holloway, Swamy Ananthanarayan, Matthew Butler, Madhuka Thisuri
De Silva, Kirsten Ellis, Cagatay Goncu, Kate Stephens, and Kim Marriott. 2022.
Animations at Your Fingertips: Using a Refreshable Tactile Display to Convey
Motion Graphics for People who are Blind or have Low Vision. In Proceedings of
the 24th International ACM SIGACCESS Conference on Computers and Accessibility.
ACM, Athens Greece, 1–16. doi:10.1145/3517428.3544797
[47]
Leona Holloway, Kim Marriott, and Matthew Butler. 2018. Accessible Maps for
the Blind: Comparing 3D Printed Models with Tactile Graphics. In Proceedings of
the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal
QC Canada, 1–13. doi:10.1145/3173574.3173772
[48]
Leona Holloway, Kim Marriott, Matthew Butler, and Samuel Reinders. 2019. 3D
Printed Maps and Icons for Inclusion. In The 21st International ACM SIGACCESS
Conference on Computers and Accessibility, Jerey P. Bigham, Shiri Azenkot, and
Shaun K. Kane (Eds.). ACM, New York, NY, USA, 183–195. doi:10.1145/3308561.
3353790
[49]
Mihail Ivanchev, Francis Zinke, and Ulrike Lucke. 2014. Pre-journey Visualization
of Travel Routes for the Blind on Refreshable Interactive Tactile Displays. In
Computers Helping People with Special Needs, Klaus Miesenberger, Deborah Fels,
Dominique Archambault, Petr Peňáz, and Wolfgang Zagler (Eds.). Vol. 8548.
Springer International Publishing, Cham, 81–88. doi:10.1007/978- 3-319-08599-
9_13 Series Title: Lecture Notes in Computer Science.
[50]
M Jaspers, T Steen, C Bos, and M Geenen. 2004. The think aloud method: a guide
to user interface design. International Journal of Medical Informatics 73, 11-12
(Nov. 2004), 781–795. doi:10.1016/j.ijmedinf.2004.08.003
[51]
Patrick W. Jordan, Ian Lyall McClelland, B. Thomas, and Bernard A. Weerdmeester
(Eds.). 1996. Usability evaluation in industry (first edition ed.). CRC Press, an
imprint of Taylor and Francis, Boca Raton, FL. https://permalink.obvsg.at/
[52]
Celina Kacperski, Florian Kutzner, and Tobias Vogel. 2024. Comparing au-
tonomous vehicle acceptance of German residents with and without visual
impairments. Disability and rehabilitation. Assistive technology (2024), 1–11.
doi:10.1080/17483107.2024.2317930
[53]
Oswald Kothgassner, A Felnhofer, N Hauk, E Kastenhofer, J Gomm,
and I Krysprin-Exner. 2013. Technology Usage Inventory. https:
//www.ffg.at/sites/default/files/allgemeine_downloads/thematische%
20programme/programmdokumente/tui_manual.pdf. Manual. Wien: ICARUS 17,
04 (2013), 90. [Online; accessed: 05-JULY-2024].
[54]
Bineeth Kuriakose, Raju Shrestha, and Frode Eika Sandnes. 2022. Tools and
Technologies for Blind and Visually Impaired Navigation Support: A Review.
IETE Technical Review 39, 1 (Jan. 2022), 3–18. doi:10.1080/02564602.2020.1819893
[55]
Jaewook Lee, Jaylin Herskovitz, Yi-Hao Peng, and Anhong Guo. 2022. Image-
Explorer: Multi-Layered Touch Exploration to Encourage Skepticism Towards
Imperfect AI-Generated Image Captions. In Proceedings of the 2022 CHI Conference
on Human Factors in Computing Systems (CHI ’22). Association for Computing Ma-
chinery, New York, NY, USA, Article 462, 15 pages. doi:10.1145/3491102.3501966
[56]
So Yeon Lee, B Gurnani, and Fassil B Mesfin. 2024. Blindness. https://www.ncbi.
nlm.nih.gov/books/NBK448182/ Updated 2024 Feb 27. In: StatPearls. Treasure
Island (FL): StatPearls Publishing; 2024 Jan..
[57]
Bing Li, Juan Pablo Munoz, Xuejian Rong, Qingtian Chen, Jizhong Xiao, Yingli
Tian, Aries Arditi, and Mohammed Yousuf. 2019. Vision-Based Mobile Indoor
Assistive Navigation Aid for Blind People. IEEE Transactions on Mobile Computing
18, 3 (March 2019), 702–714. doi:10.1109/TMC.2018.2842751
[58]
Jack M. Loomis, Roberta L. Klatzky, and Nicholas A. Giudice. 2013. Representing
3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and
Language. In Multisensory Imagery, Simon Lacey and Rebecca Lawson (Eds.).
Springer New York, New York, NY, 131–155. doi:10.1007/978- 1-4614- 5879-1_8
[59]
Jack M. Loomis, Yvonne Lippa, Roberta L. Klatzky, and Reginald G. Golledge.
2002. Spatial updating of locations specified by 3-D sound and spatial language.
Journal of Experimental Psychology: Learning, Memory, and Cognition 28, 2 (March
2002), 335–345. doi:10.1037/0278- 7393.28.2.335
[60]
Nidzamuddin Md. Yusof, J. Karjanto, J. M. B. Terken, F. L. M. Delbressine, and
G. W. M. Rauterberg. 2020. Gaining Situation Awareness through a Vibrotactile
Display to Mitigate Motion Sickness in Fully-Automated Driving Cars. Interna-
tional Journal of Automotive and Mechanical Engineering 17, 1 (2020), 7771–7783.
doi:10.15282/ijame.17.1.2020.23.0578
[61]
Luca-Maxim Meinhardt, Maximilian Rück, Julian Zähnle, Maryam Elhaidary,
Mark Colley, Michael Rietzler, and Enrico Rukzio. 2024. Hey, What’s Going On?
Conveying Traffic Information to People with Visual Impairments in Highly
Automated Vehicles: Introducing OnBoard. Proc. ACM Interact. Mob. Wearable
Ubiquitous Technol. 8, 2, Article 67, 24 pages. doi:10.1145/3659618
[62]
Valerie Mendoza and David G. Novick. 2005. Usability over time. In Proceed-
ings of the 23rd Annual International Conference on Design of Communication:
Documenting & Designing for Pervasive Information (Coventry, United King-
dom) (SIGDOC ’05). Association for Computing Machinery, New York, NY, USA,
151–158. doi:10.1145/1085313.1085348
[63]
Microsoft Seeing AI. 2017. Seeing AI - Talking Camera for the Blind. Microsoft
Seeing AI. Retrieved Aug 19, 2024 from https://www.seeingai.com/
[64]
Joy Ming, Sharon Heung, Shiri Azenkot, and Aditya Vashistha. 2021. Accept or
Address? Researchers’ Perspectives on Response Bias in Accessibility Research.
In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers
and Accessibility. ACM, Virtual Event USA, 1–13. doi:10.1145/3441852.3471216
[65]
Michael J. Muller and Sarah Kuhn. 1993. Participatory Design. Commun. ACM
36, 6 (jun 1993), 24–28. doi:10.1145/153571.255960
[66]
Chihab Nadri, Sangjin Ko, Colin Diggs, Michael Winters, V. K. Sreehari, and
Myounghoon Jeon. 2021. Novel Auditory Displays in Highly Automated Vehicles:
Sonification Improves Driver Situation Awareness, Perceived Workload, and
Overall Experience. Proceedings of the Human Factors and Ergonomics Society
Annual Meeting 65, 1 (2021), 586–590. doi:10.1177/1071181321651071
[67]
noirfx. 2028. Modern City Pack. noirfx. https://assetstore.unity.com/packages/
3d/environments/urban/modern-city- pack-18005
[68]
Oliver Nowak, René Schäfer, Anke Brocker, Philipp Wacker, and Jan Borchers.
2022. Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks
for Textile Sliders. In Proceedings of the 2022 CHI Conference on Human Factors in
Computing Systems (CHI ’22). Association for Computing Machinery, New York,
NY, USA, Article 214, 14 pages. doi:10.1145/3491102.3517473
[69]
World Health Organization. 2022. Blindness and vision impairment. WHO.
Retrieved Jun 15, 2024 from https://www.who.int/news-room/fact-sheets/detail/
blindness-and- visual-impairment
[70]
Sharon Oviatt, Rachel Coulston, and Rebecca Lunsford. 2004. When do we
interact multimodally?: cognitive load and multimodal communication patterns.
In Proceedings of the 6th international conference on Multimodal interfaces. ACM,
State College PA USA, 129–136. doi:10.1145/1027933.1027957
[71]
Hari P. Palani, Paul D. S. Fink, and Nicholas A. Giudice. 2020. Design Guidelines
for Schematizing and Rendering Haptically Perceivable Graphical Elements on
Touchscreen Devices. International Journal of Human–Computer Interaction 36,
15 (Sept. 2020), 1393–1414. doi:10.1080/10447318.2020.1752464
[72]
Konstantinos Papadopoulos, Eleni Koustriava, and Marialena Barouti. 2017. Cog-
nitive maps of individuals with blindness for familiar and unfamiliar spaces:
Construction through audio-tactile maps and walked experience. Computers in
Human Behavior 75 (Oct. 2017), 376–384. doi:10.1016/j.chb.2017.04.057
[73]
Bastian Pfleging, Shadan Sadeghian, and Debargha Dey. 2021. User interfaces for
automated vehicles. it - Information Technology 63, 2 (2021), 73–75. doi:10.1515/
itit-2021- 0020
[74]
Parivash Ranjbar, Pournami Krishnan Krishnakumari, Jonas Andersson, and
Maria Klingegård. 2022. Vibrotactile guidance for trips with autonomous vehicles
for persons with blindness, deafblindness, and deafness. Transportation Research
Interdisciplinary Perspectives 15 (2022), 100630. doi:10.1016/j.trip.2022.100630
[75]
SAE International. 2021. SAE Levels of Driving Automation™ Refined for Clarity
and International Audience. SAE. Retrieved Jul 29, 2023 from https://www.sae.
org/blog/sae-j3016- update
[76]
René Schäfer, Oliver Nowak, Lovis Bero Suchmann, Sören Schröder, and Jan
Borchers. 2023. What’s That Shape? Investigating Eyes-Free Recognition of
Textile Icons. In Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems (CHI ’23). Association for Computing Machinery, New York,
NY, USA, Article 580, 12 pages. doi:10.1145/3544548.3580920
[77]
S. S. Shapiro and M. B. Wilk. 1965. An Analysis of Variance Test for Normality
(Complete Samples). Biometrika 52, 3/4 (1965), 591–611. http://www.jstor.org/
stable/2333709
[78]
Kohei Sonoda and Takahiro Wada. 2017. Displaying System Situation Awareness
Increases Driver Trust in Automated Driving. IEEE Transactions on Intelligent
Vehicles 2, 3 (2017), 185–193. doi:10.1109/TIV.2017.2749178
[79]
R.M. Taylor. 2017. Situational Awareness Rating Technique (Sart): The Devel-
opment of a Tool for Aircrew Systems Design. In Situational Awareness (1 ed.),
Eduardo Salas (Ed.). Routledge, 111–128. doi:10.4324/9781315087924- 8
[80]
Jean Toner. 2009. Small is not too small: Reflections concerning the validity of
very small focus groups (VSFGs). Qualitative Social Work 8, 2 (2009), 179–192.
doi:10.1177/1473325009103374
[81]
Endel Tulving and Neal Kroll. 1995. Novelty assessment in the brain and long-
term memory encoding. Psychonomic Bulletin & Review 2, 3 (1995), 387–390.
doi:10.3758/BF03210977
[82] Unity Technologies. 2023. Unity. Unity Technologies. https://unity.com/
[83]
Marius Von Senden. 1960. Space and sight: the perception of space and shape in
the congenitally blind before and after operation. (1960).
[84]
Christopher D. Wickens, Diane L. Sandry, and Michael Vidulich. 1983. Compati-
bility and Resource Competition between Modalities of Input, Central Processing,
and Output. Human Factors: The Journal of the Human Factors and Ergonomics
Society 25, 2 (April 1983), 227–248. doi:10.1177/001872088302500209
[85]
Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011.
The aligned rank transform for nonparametric factorial analyses using only
anova procedures. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems. ACM, Vancouver BC Canada, 143–146. doi:10.1145/1978942.
1978963
[86]
Jacob O. Wobbrock and Julie A. Kientz. 2016. Research Contributions in Human-
Computer Interaction. Interactions 23, 3 (apr 2016), 38–44. doi:10.1145/2907069
[87]
Marcel Woide, Mark Colley, Nicole Damm, and Martin Baumann. 2022. Effect of
System Capability Verification on Conflict, Trust, and Behavior in Automated
Vehicles. In Proceedings of the 14th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications, Yong Gu Ji and Myounghoon
Jeon (Eds.). ACM, New York, NY, USA, 119–130. doi:10.1145/3543174.3545253
[88]
Jiawei Yang, Xinyue Yu, Mengge Wang, Zhenhao Chen, and Hao Tan. 2022. Novel
Tactile Feedback Research for Situation Awareness in Autonomous Vehicles. In
With Design: Reinventing Design Modes, Gerhard Bruyns and Huaxin Wei (Eds.).
Springer Nature Singapore, Singapore, 2874–2887. doi:10.1007/978- 981-19- 4472-
7_186
[89]
Koji Yatani, Nikola Banovic, and Khai Truong. 2012. SpaceSense: Representing
Geographical Information to Visually Impaired People Using Spatial Tactile
Feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, Joseph A. Konstan, Ed H. Chi, and Kristina Höök (Eds.). ACM, New York,
NY, USA, 415–424. doi:10.1145/2207676.2207734
[90]
Fang You, Xu Yan, Jun Zhang, and Wei Cui. 2022. Design Factors of Shared
Situation Awareness Interface in Human–Machine Co-Driving. Information 13, 9
(2022), 437. doi:10.3390/info13090437
[91]
Limin Zeng, Mei Miao, and Gerhard Weber. 2015. Interactive Audio-haptic Map
Explorer on a Tactile Display. Interacting with Computers 27, 4 (2015), 413–429.
doi:10.1093/iwc/iwu006
[92]
Yu Zhong, Pierre J. Garrigues, and Jeffrey P. Bigham. 2013. Real time object
scanning using a mobile phone and cloud-based visual search engine. In Pro-
ceedings of the 15th International ACM SIGACCESS Conference on Computers and
Accessibility. ACM, Bellevue Washington, 1–8. doi:10.1145/2513383.2513443
A Audio Announcements of PathFinder and
the Auditory Baseline
Due to the participants’ mother tongue, the audio announcements were in German. For this appendix, we translated the audio announcements via Deepl.com.
A.1 Urban Scene
A.1.1 PathFinder. Upon Reaching the Destination, the follow-
ing audio announcement was played automatically:
"We have reached the end of the journey. The destination, Café
Good Times, can be reached via the pavement next to the road.
The entrance to the café is 70 meters away on the right-hand side."
Duration: 0:14 min
Vehicle Button Short. "The destination, Café Good Times, can
be reached via the pavement next to the road. The entrance to the
café is 70 meters away on the right."
Vehicle Button Detailed. "The destination, Café Good Times,
can be reached via the pavement next to the road. The entrance to
the café is 70 meters away on the right. The curb is one meter away
from the vehicle. The pavement is 3 meters wide, paved, and level.
There are many pedestrians and cyclists in front of the vehicle."
Obstacle Buttons
•Button 1: Raised
–Short. "Bus stop."
–Detailed. "Bus stop on the pavement five meters away."
•Button 2: Raised
–Short. "Rubbish bin."
–Detailed. "Rubbish bin on the pavement three meters away."
•Button 3
–Short. "The pavement is clear."
–
Detailed. "The pavement is clear. The curb is one meter away."
•Button 4: Raised
–Short. "Tree."
–
Detailed. "Tree on the pavement three meters away. No danger
from low-hanging branches."
•Button 5: Raised
–Short. "Three bollards."
–
Detailed. "Three bollards in a row on the pavement three meters
away."
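The announcements above follow a regular structure: an automatic arrival message, a vehicle button with short and detailed variants, and five obstacle buttons that may be raised. One possible encoding of the urban scene as a data structure is sketched below in Python; it is illustrative only, is not taken from the released source code, and abbreviates longer strings with "...".

urban_scene = {
    "arrival": "We have reached the end of the journey. ...",   # played automatically, 0:14 min
    "vehicle_button": {
        "short": "The destination, Café Good Times, can be reached via the pavement ...",
        "detailed": "... The curb is one meter away from the vehicle. ...",
    },
    "obstacle_buttons": {
        1: {"raised": True,  "short": "Bus stop.",
            "detailed": "Bus stop on the pavement five meters away."},
        2: {"raised": True,  "short": "Rubbish bin.",
            "detailed": "Rubbish bin on the pavement three meters away."},
        3: {"raised": False, "short": "The pavement is clear.",
            "detailed": "The pavement is clear. The curb is one meter away."},
        4: {"raised": True,  "short": "Tree.",
            "detailed": "Tree on the pavement three meters away. No danger from low-hanging branches."},
        5: {"raised": True,  "short": "Three bollards.",
            "detailed": "Three bollards in a row on the pavement three meters away."},
    },
}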
A.1.2 Auditory Baseline. Upon Reaching the Destination, the
following audio announcement was played automatically:
"We have reached the end of the journey. The destination, Café
Good Times, can be reached via the pavement next to the road. The
entrance to the café is 70 meters away on the right-hand side. The
curb is one meter away from the vehicle. The pavement is three
meters wide, paved, and level. There are many pedestrians and
cyclists in front of the vehicle. There is a bus stop five meters to the
left of the vehicle. A rubbish bin is three meters away from the bus
stop. The pavement in front of the car door does not have obstacles.
A tree is on the pavement three meters away to the right behind
the vehicle. No danger from low-hanging branches. To the right of
the tree, there are 3 bollards in a row on the pavement at a distance
of 3 meters." Duration: 0:57 min
A.2 Rural Scene
A.2.1 PathFinder. Upon Reaching the Destination, the follow-
ing audio announcement was played automatically:
"We have reached the end of the journey. The destination, Carmen’s
House, can be reached via the dirt track right next to the road. The
entrance is 15 meters straight ahead on the left-hand side."
Duration: 0:13 min
Vehicle Button Short. "The destination, Carmen’s House, can
be reached via the field path directly next to the road. The entrance
is 15 meters straight ahead on the left-hand side."
Vehicle Button Detailed. "The destination, Carmen’s House,
can be reached via the field path directly next to the road. The
entrance is 15 meters straight ahead on the left-hand side. The edge
of the road is one meter from the vehicle. The field path is 2 meters
wide, and the surface is unpaved and uneven. There are pedestrians
and cyclists in front of the vehicle."
Obstacle Buttons
•Button 1: Raised
–Short. "Trees"
–
Detailed. "Several trees next to the road on the grass five meters
away."
•Button 2
–Short. "The area is clear."
–
Detailed. "The area is clear. A footpath crosses the eld path
three meters away."
•Button 3
–Short. "The eld path is clear."
–
Detailed. "The eld path is clear and consists of two channels.
A branch and a stone lie on the grass next to the eld path at
a distance of seven meters."
•Button 4: Raised
–Short. "A fence and a stone behind it."
–
Detailed. "There is a fence on the grass next to the road two
meters away. Behind it is a large stone three meters away."
•Button 5
–Short. "The area is clear."
–Detailed. "The area is clear. The ground is a meadow."
A.2.2 Auditory Baseline. Upon Reaching the Destination, the
following audio announcement was played automatically:
"We have reached the end of the journey. The destination, Carmen’s
House, can be reached via the dirt track right next to the road. The
entrance is 15 meters straight ahead on the left-hand side. The edge
of the road is one meter away from the vehicle. The dirt track is
2 meters wide and the surface is unpaved and uneven. There are
pedestrians and cyclists in front of the vehicle. There are several
trees five meters to the left in front of the vehicle. Behind the trees
is a path that crosses the dirt track 3 meters from the vehicle. The
country lane begins straight ahead. The track is clear and consists
of two channels. A branch and a stone lie on the grass next to the
field path at a distance of seven meters. To the right behind the
vehicle is a fence next to the road on the grass two meters away.
Behind it is a large stone three meters away."
Duration: 1:01 min
B Participants’ Demographic Data
In the tables below, the alpha level (opacity) of the blue highlighting of the participant IDs indicates their visual acuity.
Table 2: Participants’ demographic data for the interactive workshop
ID Age Gender Visual Acuity Impairment
P1 61 M 5% total blindness on the left eye, right eye blurry vision
P2 44 F 1.5% only contours visible
P3 52 F 0% total blindness
P4 67 F 0% total blindness
P5 65 M 3.5% blurry vision
Table 3: Participants’ demographic data from the user study and their overlapping participation in the workshop
ID Age Gender Visual Acuity Impairment Workshop Part.
P1 67 F 0% total blindness ×
P2 53 F 0% total blindness ×
P3 60 M 10% vision becomes gray when in distance
P4 72 F 0% total blindness
P5 45 F 1.5% only contours visible ×
P6 62 M 5% total blindness on the left eye, right eye blurry vision ×
P7 65 M 3% blurry vision, black spots in the fovea
P8 29 M 14% tunnel vision
P9 53 F 2% reduced vision in the left eye, only close objects are visible
P10 68 F 10% blind spots in the fovea
P11 23 F 6% blind spots in the fovea
P12 71 M 0% total blindness
P13 62 M 10% blurry vision
P14 76 F 3.5% colors visible but blurry ×
P15 77 M 12% total blindness in fovea but limited vision in periphery
P16 62 F 1% perception of brightness/darkness
C Descriptive Data of the User Study
Table 4: Descriptive data of the user study
Variable System Scenario n Min Max Mean Median SD
Mental Demand [44] PathFinder Urban 16 1.00 18.00 7.56 7.50 5.15
PathFinder Rural 16 2.00 15.00 7.94 7.50 5.34
Auditory Urban 16 4.00 20.00 11.25 10.00 5.08
Auditory Rural 16 1.00 20.00 10.94 10.00 6.29
Usability (SUS) [51] PathFinder Urban 16 20.00 80.00 64.36 67.50 16.42
PathFinder Rural 16 20.00 87.50 65.31 67.50 16.71
Auditory Urban 16 32.50 80.00 62.34 62.50 13.21
Auditory Rural 16 27.50 80.00 62.50 65.00 15.33
Situation Awareness (SART) [79] PathFinder Urban 16 15.00 29.00 19.69 17.00 4.76
PathFinder Rural 16 7.00 35.00 18.50 18.00 6.27
Auditory Urban 16 10.00 30.00 19.06 19.00 5.30
Auditory Rural 16 3.00 28.00 16.75 18.00 7.06
Perceived Safety [29] PathFinder Urban 16 -1.75 3.00 1.44 1.86 1.42
PathFinder Rural 16 -0.50 3.00 1.50 2.00 1.21
Auditory Urban 16 -1.50 3.00 1.25 1.25 1.47
Auditory Rural 16 -1.50 3.00 1.61 2.13 1.50
Completion Time (in min:sec) PathFinder Urban 16 0:50 4:57 1:52 1:29 1:06
PathFinder Rural 16 0:35 4:37 1:41 1:32 0:51
Auditory Urban 16 1:02 3:55 1:56 1:59 0:45
Auditory Rural 16 1:03 4:09 1:57 1:18 1:03
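As a hedged illustration of how the descriptives in Table 4 (n, Min, Max, Mean, Median, SD per system × scenario cell) could be reproduced from long-format per-participant data, the following pandas sketch groups responses by system and scenario. This is an assumed workflow, not the paper's analysis code; the column names and example values are hypothetical.

```python
# Sketch (assumed workflow; example values are invented) of computing the
# descriptives reported in Table 4 from long-format questionnaire data.
import pandas as pd

# One row per participant x system x scenario; values here are placeholders.
df = pd.DataFrame({
    "participant":   ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2"],
    "system":        ["PathFinder", "PathFinder", "Auditory", "Auditory"] * 2,
    "scenario":      ["Urban", "Rural", "Urban", "Rural"] * 2,
    "mental_demand": [7, 9, 12, 10, 4, 6, 11, 13],
})

summary = (
    df.groupby(["system", "scenario"])["mental_demand"]
      .agg(n="count", Min="min", Max="max", Mean="mean", Median="median", SD="std")
      .round(2)
)
print(summary)  # n, Min, Max, Mean, Median, SD per system x scenario cell
```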