Exploring Accessibility of Voice-Controlled Interfaces and
Gesture Interactions in Vehicles for Secondary Driving Tasks for
Deaf and Hard-of-Hearing Drivers
Sanskriti Kumar
School of Information
Rochester Institute of Technology
Rochester, New York, USA
sk6776@rit.edu

Wendy Dannels
NTID Deaf Health Care and Biomedical Science Hub
Rochester Institute of Technology
Rochester, New York, USA
w.dannels@rit.edu

Tae Oh
School of Information
Rochester Institute of Technology
Rochester, New York, USA
thoics@rit.edu

Roshan L Peiris
School of Information
Rochester Institute of Technology
Rochester, New York, USA
roshan.peiris@rit.edu
Abstract
Voice Control Interfaces (VCIs) have become a ubiquitous technology, including for interacting with in-vehicle infotainment systems. However, research has revealed unique challenges that VCIs pose for Deaf and hard of hearing (DHH) users due to their reliance on voice as input and output. Despite extensive research on VCIs in home settings, there is a gap in understanding the specific challenges of VCIs in vehicles. To address this, we conducted a survey with 56 DHH participants, followed by interviews with 8 participants, to explore their experiences and preferences regarding alternative interaction methods for in-vehicle VCIs, focusing on secondary driving tasks. Results reveal technical, linguistic, and accessibility challenges affecting VCI efficiency in vehicles for DHH users, highlighting the need for alternative approaches. A follow-up brainstorming session with DHH participants suggests gesture-based input and one-handed sign language as viable alternatives for in-vehicle VCIs, and motivates exploring feasible gesture and sign language datasets for interaction with in-vehicle systems.
CCS Concepts
• Human-centered computing → Empirical studies in accessibility.
Keywords
voice control interfaces, infotainment system, Deaf and Hard of
Hearing, gesture control, accessibility, driving, vehicles
ACM Reference Format:
Sanskriti Kumar, Wendy Dannels, Tae Oh, and Roshan L Peiris. 2025. Exploring Accessibility of Voice-Controlled Interfaces and Gesture Interactions in Vehicles for Secondary Driving Tasks for Deaf and Hard-of-Hearing Drivers. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25), April 26–May 01, 2025, Yokohama, Japan. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3706599.3719843

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2025 Copyright held by the owner/author(s).
1 Introduction
Voice Control Interfaces (VCIs) have become ubiquitous across a
range of technologies, including digital assistants like Google Home,
Alexa, and Siri. However, they pose significant challenges for deaf and hard of hearing (DHH) users, primarily due to their reliance on speech for input and output [19]. These challenges are further amplified in in-vehicle systems, where VCIs are increasingly adopted to enhance usability and safety [3, 5, 23]. Despite the potential advantages of hands-free, eyes-free interaction, VCIs often suffer from poor speech recognition accuracy, slow response times, and the need for users to remember specific commands [6, 18], making them inaccessible to DHH drivers. Additionally, current automatic speech recognition (ASR) technologies exhibit high error rates for Deaf speech due to insufficient and unrepresentative training data [7].
Research has explored alternative interaction methods, such as gestures for input and visual interfaces for feedback [9, 13], yet these studies largely overlook the specific needs of DHH drivers in in-vehicle contexts. Several studies have examined the communication and driving habits of DHH drivers [2, 10] and identified unique issues, such as difficulties with phone calls and conversations with passengers while driving [12, 24], but they do not address the accessibility of voice- or gesture-controlled systems in vehicles. Existing gesture-based systems often lack the accuracy, efficiency, and feedback mechanisms needed for real-world usability, particularly for DHH users [15, 20].
To address these gaps, this research investigates the accessibility and usability challenges of in-vehicle VCIs for DHH drivers. Additionally, we explore sign language gestures as a feasible input alternative to VCIs, inspired by current gesture sensing systems in vehicles [11]. Specifically, we explore the following research questions:
• RQ1: What are the experiences of DHH drivers interacting with voice-controlled in-vehicle systems for secondary driving tasks?
• RQ2: What are the preferences of DHH drivers for feedback from in-vehicle systems?
• RQ3: How would a DHH driver interact with a gesture-controlled in-vehicle system?
This work adopts a multi-method approach: surveys and interviews to address RQ1 and RQ2, and a gesture elicitation study [22] to explore RQ3. By examining these aspects, this research aims to identify practical solutions for enhancing accessibility and usability, paving the way for inclusive in-vehicle interaction systems that cater to DHH drivers.
2 Methods
The survey aimed to gain a preliminary quantitative understanding of how DHH drivers use in-vehicle systems for secondary driving tasks and to recruit potential participants for the follow-up interview and brainstorming sessions. Follow-up interviews were conducted to gain insight into participants' experiences during these interactions. Both the survey and interview questionnaires were approved by the institute's Institutional Review Board.
2.1 Survey Method.
The online survey was designed using the Qualtrics software and included 24 questions (single choice, multiple choice, Likert scale, open-ended) that took at most 10 minutes to complete. The questionnaire was distributed in online DHH groups and via flyers at the National Technical Institute for the Deaf to recruit DHH participants. The survey aimed to understand DHH drivers' methods of completing secondary driving tasks such as interacting with the infotainment system in their vehicles, potential issues and challenges they have faced (particularly if/when using VCIs), their experience of feedback from in-vehicle VCIs, and their opinions on alternative methods of car interaction. Participants who did not use VCIs were asked why they chose not to use them. As compensation, all respondents who consented were entered into a lottery for 20 dollars.
2.1.1 Survey Participants. The survey was posted online for approximately 2 months. It collected 60 responses, of which 54 were considered (6 were removed due to incomplete answers). The 54 responses were from DHH drivers who drive either daily or occasionally: 22 were Deaf, 4 were deaf, and 27 were hard of hearing. Following Carol Padden and Tom Humphries [16], for the survey and interview, 'Deaf' refers to individuals who are culturally Deaf and 'deaf' refers to the physical condition of hearing loss. The remaining 3 participants had poor to moderate hearing and suffered from tinnitus. 48.3% of drivers use spoken language and 45.1% use sign language to communicate. Of the participants who use sign language, 65.2% use sign language for all conversations, 26% communicate using sign language with other sign language users, and 8.7% use it on a per-need basis.
2.2 Interview Method.
The interviews were conducted individually and followed a semi-structured format, starting with learning about how the participants interact with in-car systems such as the infotainment system and the challenges they faced, if any. They were asked to elaborate on the issues faced while using voice-controlled functions for both input and output. Additionally, the participants were asked about their experience with the feedback of the system and about changes in infotainment systems that would benefit them. The interview also included a discussion of their experiences and perceptions of gesture control as an alternative interaction method, their expectations, and their concerns. Lastly, in the follow-up brainstorming session, each participant was asked to come up with gestures that they would use to control music, calls, and navigation in their vehicles. To conclude, the participants were asked to share their opinions on gesture control after the brainstorming session.
2.2.1 Interview Participants. At the end of the survey, participants
were asked if they would like to participate in a follow-up interview.
Out of the 54 survey respondents, 30 agreed to a follow-up interview. Eight participants who had previous experience using in-vehicle VCIs were recruited for the interview and follow-up brainstorming session: 2 Deaf, 2 deaf, and 4 hard of hearing individuals.
2.3 Data Analysis
The quantitative data from the valid survey responses were analyzed using suitable parametric and non-parametric methods. Next, the open-ended questions and the transcripts of the interviews were analyzed for themes in the participant feedback using Braun and Clarke's thematic analysis approach [4]. Due to the preliminary nature of the study and the repetitive themes identified in both the survey and interview studies, we present the findings of both studies collectively. Note that P1–P8 participated in both the survey and interview studies (they were recruited from the survey), while the rest of the participants took part only in the survey study.
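To illustrate the kind of non-parametric analysis and theme counting described above, the following minimal Python sketch shows one plausible setup; the contingency counts, group labels, and theme codes are illustrative placeholders, not the paper's actual data or analysis script.

    import collections
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows are hearing-status groups
    # (Deaf, deaf, hard of hearing); columns are (uses VCIs, does not).
    # The counts are placeholders, not the paper's data.
    observed = [
        [0, 22],   # Deaf
        [1, 3],    # deaf
        [10, 17],  # hard of hearing
    ]
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

    # Tallying thematic codes assigned to open-ended answers (codes illustrative).
    codes = ["accent_not_recognized", "no_visual_feedback", "accent_not_recognized"]
    print(collections.Counter(codes))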
3 Findings
3.1 Current Usage Patterns and Experience with In-vehicle Systems
3.1.1 VCI usage and interest. The survey findings identified that only 32% of participants use or have tried using VCIs. Of those, 63.6% connect their smartphones to access voice-controlled personal assistants and 36.4% use the vehicle's built-in VCIs. To wake the VCI, 52.5% use physical buttons on the steering wheel or the touch screen of their infotainment systems, while 47.5% use voice commands. 61.1% of all participants stated that they do not use VCIs during their drive at all; this includes all of the Deaf participants, 66.7% of deaf participants, and 58.3% of hard of hearing participants. Among the interview participants, 2 use VCIs in vehicles frequently and 6 do not use VCIs. Participants shared an interest in using VCIs, but the systems did not work the way they expected. P1 shared that they have seen other drivers use it: "I want to use it. I would like to tell the car, play the next song, or call my mom, call my dad. I would like them to recognize it, but I don't know how to use it." Of those who used VCIs in vehicles, only 1 participant found them to be less distracting to the driving task. Some participants shared that using the physical buttons on the steering wheel is typically easy once they remember the functionality of each button.
3.1.2 VCI purposes. The primary tasks performed with VCIs involved accessing the infotainment systems for three major functions: making calls (36.1%), inputting addresses for navigation (30.5%), and playing music (22.2%). Other functions included checking weather information and tasks such as sending and reading texts. Based on the survey, 44.9% of drivers primarily use physical buttons to interact with the infotainment system, followed by the touchscreen display (37.7%), and 10.2% use VCIs.
3.1.3 Preferences towards the use of touchscreen and/or buttons. While participants had experience with VCIs, some preferred tangible interfaces for interacting with the infotainment and in-car systems. In the interview, P4 noted that they prefer the touchscreen because of its haptic feedback: "...it gives me instant feedback when I touch it." While the touch screen gives instant feedback, it may require drivers to move their eyes from the road and focus on the screen to interact. In contrast, participants noted that using buttons was accessible but not convenient for all functions. For example, P5, who found buttons to be helpful, said, "I think it's easier with music because I know where all the buttons are for that. So I know what I'm pressing." In contrast, P7 shared, "The buttons are not descriptive sometimes, there are some options I still don't know what they are for," and P1 shared, "I don't know what button to press on the steering wheel. There are too many buttons on the wheel." Additionally, display and button options are often laid out far from the driver's seat, making it difficult for drivers to reach menu options while driving. Participants shared that this is often distracting and is a risk they take.
3.2 Challenges of using VCIs in Vehicles
Participants who used or had tried to use VCIs shared a common sentiment of wanting to use them for all functions but faced various issues. From the survey, 75% of those who use VCIs have faced issues while interacting with voice-controlled infotainment systems.
3.2.1 VCIs' inability to recognize DHH individuals' accents. 23 of the survey participants who did not use VCIs to access the in-vehicle systems were deaf or Deaf; they either did not use spoken language or found that the system could not comprehend a deaf accent. Deaf participants relied on the touch screen and buttons and did not use VCIs. In the interview, P3 shared, "I don't use that because I don't ever speak with my voice. I only communicate in sign language," and P5 shared that the system does not recognize their voice: "I tried to use it once, it didn't interpret or read my voice. So I've got a deaf accent. It might sound a little different than a hearing user, but I tried it once. It didn't go so well, I don't use it." Drivers with deaf accents had the same experience every time. P8: "It doesn't recognize what I'm trying to say because I don't. I guess maybe I don't speak very well." A hard of hearing participant with English as their second language faced an added hurdle in using VCIs. P2: "English is my second language. Even though I educated here (in the U.S.), but I didn't speak until age seven. So that's why maybe it's difficult." The drivers either don't trust the system to understand their commands or blame their own speech for the system's inability to understand. Two hard of hearing participants shared that the system had called the wrong person without confirming with the driver. This led to embarrassment and the drivers feeling they were at fault. The confusion and distraction in such situations can be dangerous while the car is on the road.
3.2.2 Auditory feedback is inadequate for DHH drivers. Feedback to voice commands is crucial for user interaction with a VCI. The survey revealed that 46% of drivers find system feedback inadequate. P1 shared that they want to use VCIs but are unable to understand the auditory response: "Even if I use it (VCIs), I don't know what they say." Participants appreciate confirmation of commands, but inconsistency can lead to confusion. 9 participants said that the system does not give them correct feedback for a command they give through VCIs. Issues arise especially when participants are unaware of a command's status, leading to distraction. Similarly, another participant drove to the wrong location because the destination they had input using voice was misinterpreted. They shared their frustration that the system did not auditorily confirm the exact address before executing it in the navigation application. Some participants face execution problems due to lack of confirmation or incorrect recognition, impacting their focus while driving.
3.3 Alternative accessible interactions with the infotainment system
When the above-mentioned interaction methods don't work, participants resort to other means, such as using their phone screens or asking their passengers to control the system. Sometimes, participants have to stop the vehicle to make the changes or execute the functions they need, which can waste their time. P1: "I have to stop the car, do what I got to do (add location to navigation application), and then drive."
3.3.1 Visually clear output is preferred. Participants expressed the need for visual responses and updates on the screen, as they did not rely on auditory responses from the system. A deaf participant shared that they understand the safety risk involved in using a device while driving, but that it is the most convenient interaction and interface for them. P8 shared, "I'm trying to focus on driving, and I know we're not supposed to be on our devices while we're driving, so I try as hard as I can just to quickly glance over, select what I need to, and get my eyes back on the road." Additionally, participants preferred completely visual feedback that would include icons, clear text menus, subtitles for audio responses, and colored lights and haptics for emergencies. P3: "My suggestion is to make everything visual, have it in text." They preferred dark mode to view the text, as it was visually cleaner and faster to read. All participants emphasized that they have better peripheral vision, stating that they were comfortable with glancing at the screen while driving. Some suggested colors and noticeable changes for alerts. P5 suggested, "I'll recognize in my peripheral vision that something's changed on the screen and that gives me the alert to look at it, but I don't always catch it. It would be nice if there was more colorful, more obvious, more something visually obvious that there was a change." A hard of hearing participant shared that the option to control the pitch or tone of the auditory output would make VCIs accessible, since they are able to hear specific pitches, alongside a visual display.
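The feedback preferences described above could be captured as a per-driver profile that routes each system response to the output channels a driver selected. The Python sketch below is a minimal illustration; the class name, field names, and defaults are our assumptions, not a design produced by the study.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FeedbackPreferences:
        # Per-driver feedback profile; field names are illustrative assumptions.
        subtitles_for_audio: bool = True      # transcribe spoken responses on screen
        dark_mode: bool = True                # participants found dark text cleaner to read
        emergency_light_alerts: bool = True   # blinking, color-coded alerts
        haptic_on_emergency: bool = True      # e.g., steering-wheel vibration
        audio_pitch_hz: Optional[int] = None  # shift output pitch for residual hearing

    def render_response(text: str, prefs: FeedbackPreferences) -> dict:
        # Route one system response to the output channels the driver selected.
        channels: dict = {}
        if prefs.subtitles_for_audio:
            channels["subtitle"] = text
        if prefs.audio_pitch_hz is not None:
            channels["audio"] = {"text": text, "pitch_hz": prefs.audio_pitch_hz}
        return channels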
3.3.2 Transcription or subtitles for conversation and emergency alerts. Drivers preferred their text messages to be displayed on the screen, navigation options to be displayed on the screen to select from, and, in the case of audio output, responses to be transcribed and shown as subtitles on the screen. Hard of hearing drivers (P1 and P5) expressed that they wanted the system to transcribe spoken-language conversations happening in the car so that they can take part, as they currently feel left out when they are in the car as a driver or a passenger. 3 DHH drivers shared their concern about not being able to hear emergency radio alerts or messages. They wanted the radio announcements to be either transcribed or displayed with lights on the screen. In case of emergency, 4 DHH drivers wanted blinking, color-coded light alerts that would indicate situations such as an emergency vehicle approaching, another car in the vehicle's blind spot, navigation alerts, or even text notifications. P6: "When there's a siren, if there could be an alert that would interpret that sound so that we knew something was coming."
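These requests amount to routing classified cabin and road audio events to visual channels. A minimal Python sketch under that assumption follows; the event labels, colors, and function name are hypothetical, not specifications from the study.

    # Illustrative routing from classified audio events to the visual alerts
    # participants described; the event labels and colors are assumptions.
    EVENT_TO_ALERT = {
        "siren":        {"light": "red",    "blink": True,  "text": "Emergency vehicle approaching"},
        "radio_alert":  {"light": "orange", "blink": True,  "text": "Emergency broadcast"},
        "blind_spot":   {"light": "yellow", "blink": True,  "text": "Vehicle in blind spot"},
        "text_message": {"light": "blue",   "blink": False, "text": "New message"},
    }

    def show_alert(event: str, transcript: str = "") -> dict:
        # Build the on-screen alert for one detected event; unknown events fall
        # back to plain text so nothing is silently dropped.
        alert = dict(EVENT_TO_ALERT.get(event, {"light": "white", "blink": False, "text": event}))
        if transcript:  # e.g., ASR transcription of a radio announcement
            alert["text"] += ": " + transcript
        return alert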
3.4 Gesture or Sign Language as an Interaction Method
Participants expressed willingness to explore gestures as an alternative interaction method to voice for secondary driving tasks. 54.9% of participants (19 hard of hearing, 14 Deaf, and all 4 deaf participants) preferred using gestures to control the infotainment systems in their vehicles.
3.4.1 Preferences to use sign language for interaction. Most drivers stated that using sign language and/or gestures would be an easier and faster interaction medium that would keep their eyes on the road. All interviewees felt that gestures would be helpful and were excited about using sign language as a natural-language input alternative to VCIs or to typing commands into their phones or vehicle displays. P8 shared, "It could just recognize my signing a lot faster and then maybe it would pull up the options. Probably quicker and easier than me trying to have to like type stuff in and do these things kind of have that physical interaction with the technology." They suggested using one-handed signing and described it as an intuitive interaction method. Signs in American Sign Language (ASL) were also suggested as a well-established dataset on which the system could be trained for better recognition efficiency. Participants were interested in the possibility of signing and/or fingerspelling to enter addresses and destinations in navigation applications, which would remove the need to stop the car or type information while driving.
3.4.2 Concerns about complex gestures, gesture/sign similarities, and regional sign language differences. Participants emphasized that gestures should be neither too complicated nor too general. Complicated gestures can be difficult for drivers to remember, and overly general gestures can cause accidental recognition or execution of a function. 5 participants highlighted the need for confirmation after recognition of any gesture.
While gestures and sign language were discussed as easy and time-saving, participants had concerns similar to those from their VCI experience. Misrecognition, misinterpretation, and accidental activation were issues they anticipated in gesture-controlled interaction despite its potential benefits.
Although they thought ASL would be an easy input method, some participants felt that regional differences in ASL, and the fact that sign languages differ internationally, could pose challenges, which is why the system should be trained to recognize these differences. P5: "sign language isn't universal. Each country or region has their own sign language, just like spoken languages. But gestures can be similar, like this gesture." P8: "instead of just having the system recognize one sign, if it could account for those regional dialects, too, just to provide more options, that would be great."
3.5 Brainstorming gestures or signs for input
All interview participants were asked to take part in a brainstorming session in which they were given commands and had to come up with gestures or signs to control the car's infotainment system. All participants used ASL in this study. Here, the participants discussed various aspects of using gestures/signs for interaction with in-vehicle systems, such as the location for signing, wake-up commands, and signs for interaction.
3.5.1 Display and sensors closer to the driver make gesture interaction easier. All participants in the interview study were right-handed signers. Thus, 5 participants preferred the camera or sensors to be on the right of the steering wheel, and 3 participants preferred them in the center of the steering wheel to avoid moving their eyes. Participants also discussed the feasibility of integrating a sign-detection confirmation (indicating that the system is able to see the sign/gesture) to ensure that the command is being read.
3.5.2 Waking up the system and confirming input. To start their interactions, participants had to think of a gesture or sign they would use to wake up the system and signal it to start recognizing gestures as commands. Three participants suggested using an ASL sign for wake-up, and two preferred waving at the camera. Two participants preferred a physical button or a tap to activate the system. Beyond the wake-up commands, two participants wanted a confirmation when the system recognized a gesture, before executing a command, to avoid misrecognition.
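The wake-then-confirm flow participants described can be modeled as a small state machine. The Python sketch below is illustrative; the state names and the choice of a thumbs-up confirmation are our assumptions, not findings from the study.

    from enum import Enum, auto
    from typing import Optional

    class State(Enum):
        IDLE = auto()        # ignore hand movement until the system is woken up
        LISTENING = auto()   # watch for a command sign or gesture
        CONFIRMING = auto()  # display the recognized command and await approval

    class GestureSession:
        WAKE = "wake_sign"      # e.g., a dedicated ASL sign, a wave, or a button tap
        CONFIRM = "thumbs_up"   # illustrative confirmation gesture

        def __init__(self) -> None:
            self.state = State.IDLE
            self.pending: Optional[str] = None

        def on_gesture(self, gesture: str) -> None:
            if self.state is State.IDLE and gesture == self.WAKE:
                self.state = State.LISTENING           # system woken up
            elif self.state is State.LISTENING:
                self.pending = gesture                 # show it on screen for approval
                self.state = State.CONFIRMING
            elif self.state is State.CONFIRMING:
                if gesture == self.CONFIRM:
                    print("executing:", self.pending)  # hand off to the infotainment system
                # any other gesture cancels, avoiding accidental execution
                self.pending = None
                self.state = State.IDLE

Rendering the pending command on screen in the CONFIRMING state would provide the visual confirmation participants asked for without relying on audio.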
3.5.3 Natural language interaction with ASL for commands. Participants emphasized using one-handed signs for commands for safe driving. The participants were asked to command 'play' and 'pause' the music along with increasing or decreasing the volume. Here, participants used the ASL sign for 'play' with one hand (Fig. 1) and an improvised one-handed sign for 'stop' to play/stop the music. When asked for possible signs for phone calls (Fig. 2), three participants used the ASL sign for 'pick', three signed 'phone' in ASL, and the others signed either 'yes' in ASL (1) or a thumbs up (1). Similarly, the ASL sign for 'hang up' was suggested for ending a phone call. Another common suggestion was to preset frequent locations like home or work addresses in the navigation application and access them using ASL (e.g., signing 'home') or by signing the numbers corresponding to each option. All participants preferred using ASL or fingerspelling for the location or address they wanted to input. Similarly, it was suggested that signing 'gas' or 'food' would show the nearest gas station or restaurant, respectively. 3 participants wanted to give commands as a hearing individual would with a VCI, i.e., sign the complete commands in ASL but with one hand.
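One way to operationalize this elicited vocabulary is as a lookup from recognized one-handed signs to infotainment actions. The Python mapping below is our illustrative reading of the participants' suggestions; the sign labels and action strings are assumptions, not a dataset produced by the study.

    # Illustrative lookup from recognized one-handed signs/gestures to the
    # secondary-task actions participants proposed; labels are assumptions.
    SIGN_TO_ACTION = {
        "play":    ("music", "play"),
        "stop":    ("music", "pause"),            # improvised one-handed 'stop'
        "pick":    ("phone", "answer_call"),
        "hang_up": ("phone", "end_call"),
        "home":    ("navigation", "route_to_preset:home"),
        "gas":     ("navigation", "nearest:gas_station"),
        "food":    ("navigation", "nearest:restaurant"),
    }

    def dispatch(sign: str):
        # Resolve a recognized sign to (subsystem, action); None if unknown.
        return SIGN_TO_ACTION.get(sign)

    # Fingerspelled input (e.g., a street address) would bypass this lookup and
    # accumulate characters into a text field for the navigation application.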
3.5.4 Easy and intuitive gestures. Among gestures to control music and command 'play', participants' ideas varied from waving a hand, snapping fingers, and tapping to finger pointing; some participants agreed on a palm facing the camera to command 'pause' (Fig. 1). To control the volume, all participants suggested either a toggle of thumbs up and down or an up-and-down hand motion to increase or decrease the volume, respectively. 2 participants came up with a slithering (snake-like) movement to indicate a route or a road. Lastly, some participants suggested finger-pointing, swipes, and pinching gestures near the steering wheel to avoid having to reach all the way to the screen.

Figure 1: Participant using a combination of ASL and gestures to control music. a: signing 'play' (shaking Y-handshape downward) with one hand to start playing the music; b: palm-facing gesture to command 'pause' the music.

Figure 2: Participant using ASL to give commands for the phone. a: sign for 'approval' to pick up an incoming call; b: sign for 'hang up' to end an ongoing call.
4 Discussion
The study investigated the current use of and experience with voice control interfaces. In answering RQ1, a prominent theme that emerged was DHH drivers' interest in using VCIs, based on the ease of functionality they provide to them or to the drivers around them. However, a majority of the DHH population does not use voice control due to the inaccessibility of the input method. Those who did make an effort to use VCIs often found themselves repeating commands because the system did not interpret deaf accents, resulting in frustration, distracted driving, and finally resorting to other interaction methods. While some of these challenges have been observed in previous research on VCIs in home settings [8], we highlight additional issues in the driving context, such as cognitive load while driving and communication with passengers. Hence, drivers were completing secondary driving tasks like using the infotainment system through the conventional interaction methods of buttons or touch screens, which came with their own issue of an overwhelming number of options causing distraction while driving.
In answering RQ2, concerns regarding the lack of feedback for DHH individuals were highlighted in the interviews. Alternative feedback systems emerged as a crucial aspect [14], with participants discussing personalized modalities based on their needs. DHH drivers emphasized clear visual outputs in their range of vision while driving to minimize distractions (such as on heads-up displays [21]). Suggestions included subtitles for audio responses, the choice to change the pitch and tone of audio, transcription of spoken language in the vehicle, and color-coded light alerts for emergencies. Subtitles and sign-language-to-voice interpretation for phone calls have been suggested in previous studies as technologies for communication between professional drivers and passengers [12], but they surfaced in the interviews as a need for personal conversations between DHH drivers and their passengers.
While research has explored drivers' interest in using gestures while driving [11, 17, 20], in answering RQ3, signing as an input method garnered significant interest among DHH participants as an accessible alternative to VCIs. Through the interview and follow-up brainstorming session, it was found that DHH drivers preferred socially familiar and toggle gestures for commands like increasing/decreasing values and switching functions on/off. As the commands became more specific, such as opening an application or adding an address, participants, regardless of their hearing status, preferred to use sign language to input commands or to fingerspell names and addresses. While sign language was perceived as a convenient alternative to voice, it presents other challenges; for example, more nuanced input requires recognizing gaze and facial expressions [1]. Thus, concerns regarding recognition accuracy were noted, mirroring challenges experienced with VCIs and sign language recognition systems.
5 Limitations and Future Work
Our focus was on understanding the experience of DHH drivers with VCIs; more interview participants would allow us to gain more elaborate insights. Contextual inquiries and simulations to identify further accessibility challenges could inform improvements in the interaction between drivers and in-vehicle systems. Systematic research could explore improving voice recognition algorithms, which would involve training models on a broader dataset to better comprehend diverse accents, including those of DHH individuals and individuals who have English as their second language.
Accessibility and cognitive load of hand gestures: Our study did not evaluate the potential use of gestures or one-handed signs in actual driving scenarios. For a standard set of gestures, the cognitive load of remembering and using them should be considered for diverse drivers who may have different cognitive, auditory, and/or physical needs.
Sign language recognition technology: The feasibility and effectiveness of sign language or gesture recognition technologies as alternative input methods need further exploration. Future research could focus on developing efficient sign recognition systems for varying car environments, capable of accurately interpreting sign languages from different regions and countries.
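As a starting point for such exploration, a camera-based prototype could extract per-frame hand landmarks and pass them to a sign classifier. The Python sketch below uses the MediaPipe Hands and OpenCV APIs, which exist as shown, while classify_sign is a hypothetical stand-in for a trained model, not an existing component.

    import cv2
    import mediapipe as mp

    def classify_sign(hand_landmarks) -> str:
        # Hypothetical stand-in for a trained model that maps the 21 detected
        # hand landmarks (or a temporal window of them) to a sign label.
        return "unknown"

    hands = mp.solutions.hands.Hands(max_num_hands=1)  # one-handed signing
    cap = cv2.VideoCapture(0)  # in-cabin camera, e.g., near the steering wheel
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            sign = classify_sign(result.multi_hand_landmarks[0])
            print("recognized:", sign)  # would feed the confirm-then-execute flow
    cap.release()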
6 Conclusion
In this research, we conducted a survey, interviews, and a follow-up brainstorming session with DHH drivers to understand the unique usability and accessibility characteristics of VCIs in vehicles. The study underlines the inaccessibility of VCIs and the inefficiency of other interaction methods for DHH drivers. It highlights the need for, and interest in, an accessible alternative, namely sign language, as a natural interaction method for secondary driving tasks, along with visual outputs. Further research is needed to develop recognition systems and an input gesture dataset that consider the needs of different users.
References
[1] Chanchal Agrawal and Roshan L Peiris. 2021. I See What You're Saying: A Literature Review of Eye Tracking Research in Communication of Deaf or Hard of Hearing Users. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, USA) (ASSETS '21). Association for Computing Machinery, New York, NY, USA, Article 41, 13 pages. https://doi.org/10.1145/3441852.3471209
[2] Admira Beha. 2022. Self-Assessment of Driving Abilities of Deaf and Hearing Drivers. European Journal of Humanities and Social Sciences 2, 6 (Nov. 2022), 70–75. https://doi.org/10.24018/ejsocial.2022.2.6.357
[3] Laura-Bianca Bilius and Radu-Daniel Vatavu. 2021. A multistudy investigation of drivers and passengers' gesture and voice input preferences for in-vehicle interactions. Journal of Intelligent Transportation Systems 25, 2 (2021), 197–220. https://doi.org/10.1080/15472450.2020.1846127
[4] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.
[5] Chris Carter and Robert Graham. 2000. Experimental Comparison of Manual and Voice Controls for the Operation of in-Vehicle Systems. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, 20 (July 2000), 3-286–3-289. https://doi.org/10.1177/154193120004402016
[6] Chun-Cheng Chang. 2016. Assessing Cognitive Workload of In-Vehicle Voice Control Systems. Thesis. University of Washington. https://digital.lib.washington.edu:443/researchworks/handle/1773/38159
[7] Abraham Glasser, Kesavan Kushalnagar, and Raja Kushalnagar. 2017. Deaf, Hard of Hearing, and Hearing Perspectives on Using Automatic Speech Recognition in Conversation. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, Maryland, USA) (ASSETS '17). Association for Computing Machinery, New York, NY, USA, 427–432. https://doi.org/10.1145/3132525.3134781
[8] Abraham Glasser, Vaishnavi Mande, and Matt Huenerfauth. 2020. Accessibility for Deaf and Hard of Hearing Users: Sign Language Conversational User Interfaces. In Proceedings of the 2nd Conference on Conversational User Interfaces. ACM, Bilbao, Spain, 1–3. https://doi.org/10.1145/3405755.3406158
[9] Abraham Glasser, Vaishnavi Mande, and Matt Huenerfauth. 2021. Understanding deaf and hard-of-hearing users' interest in sign-language interaction with personal-assistant devices. In Proceedings of the 18th International Web for All Conference. ACM, Ljubljana, Slovenia, 1–11. https://doi.org/10.1145/3430263.3452428
[10] Pierce T. Hamilton. 2013. Communicating through Distraction: A Study of Deaf Drivers and Their Communication Style in a Driving Environment. Master's thesis. Rochester Institute of Technology, Rochester, New York. https://www.proquest.com/docview/1755942591/abstract/41570D81D0444011PQ/1
[11] Marie Lee, Ziming Li, Wendy Dannels, Tae Oh, and Roshan L. Peiris. 2025. Exploring One Handed Signing During Driving for Interacting with In-vehicle Systems for Deaf and Hard of Hearing Drivers (CHI EA '25). Association for Computing Machinery, New York, NY, USA, 7 pages. https://doi.org/10.1145/3706599.3719868
[12] Sooyeon Lee, Bjorn Hubert-Wallander, Molly Stevens, and John M. Carroll. 2019. Understanding and Designing for Deaf or Hard of Hearing Drivers on Uber. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow, Scotland, UK, 1–12. https://doi.org/10.1145/3290605.3300759
[13] Roshan Mathew, Garreth W. Tigwell, and Roshan L. Peiris. 2024. Deaf and Hard of Hearing People's Perspectives on Augmented Reality Interfaces for Improving the Accessibility of Smart Speakers. In Universal Access in Human-Computer Interaction, Margherita Antona and Constantine Stephanidis (Eds.). Springer Nature Switzerland, Cham, 334–357.
[14] Erika E. Miller, Linda Ng Boyle, James W. Jenness, and John D. Lee. 2018. Voice Control Tasks on Cognitive Workload and Driving Performance: Implications of Modality, Difficulty, and Duration. Transportation Research Record 2672, 37 (Dec. 2018), 84–93. https://doi.org/10.1177/0361198118797483
[15] Pavlo Molchanov, Shalini Gupta, Kihwan Kim, and Kari Pulli. 2015. Multi-sensor system for driver's hand-gesture recognition. In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Vol. 1. IEEE, Ljubljana, Slovenia, 1–8. https://doi.org/10.1109/FG.2015.7163132
[16] Carol Padden and Tom Humphries. 1990. Deaf in America. Harvard University Press. https://www.hup.harvard.edu/books/9780674194243
[17] Carl A. Pickering, Keith J. Burnham, and Michael J. Richardson. 2007. A Research Study of Hand Gesture Recognition Technologies and Applications for Human Vehicle Interaction. In 2007 3rd Institution of Engineering and Technology Conference on Automotive Electronics. 1–15.
[18] Thomas A. Ranney, Joanne L. Harbluk, and Y. Ian Noy. 2005. Effects of Voice Technology on Test Track Driving Performance: Implications for Driver Distraction. Human Factors 47, 2 (June 2005), 439–454. https://doi.org/10.1518/0018720054679515
[19] Jason Rodolitz, Evan Gambill, Brittany Willis, Christian Vogler, and Raja Kushalnagar. 2019. Accessibility of voice-activated agents for people who are deaf or hard of hearing. Journal on Technology and Persons with Disabilities 7 (2019), 144–156.
[20] Gözel Shakeri, John H. Williamson, and Stephen Brewster. 2017. Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '17). Association for Computing Machinery, New York, NY, USA, 84–93. https://doi.org/10.1145/3122986.3123011
[21] Robert Tscharn, Marc Erich Latoschik, Diana Löffler, and Jörn Hurtienne. 2017. "Stop over there": natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI '17). Association for Computing Machinery, New York, NY, USA, 91–100. https://doi.org/10.1145/3136755.3136787
[22] Santiago Villarreal-Narvaez, Jean Vanderdonckt, Radu-Daniel Vatavu, and Jacob O. Wobbrock. 2020. A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS '20). Association for Computing Machinery, New York, NY, USA, 855–872. https://doi.org/10.1145/3357236.3395511
[23] Jiarui Wu, Chun-Cheng Chang, Linda Ng Boyle, and James Jenness. 2015. Impact of In-vehicle Voice Control Systems on Driver Distraction: Insights From Contextual Interviews. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 59, 1 (Sept. 2015), 1583–1587. https://doi.org/10.1177/1541931215591342
[24] Jason J Zodda, Shawn S Nelson Schmitt, Anna E Crisologo, Rachael Plotkin, Michael Yates, Gregory A Witkin, and Wyatte C Hall. 2012. Signing While Driving: An Investigation of Divided Attention Resources Among Deaf Drivers. JADARA 45, 3 (2012), 314–329. https://nsuworks.nova.edu/jadara/vol45/iss3/4