Interview and Think Aloud Accessibility for Deaf and Hard of Hearing Participants in Design Research

Becca Dingman
School of Information
Rochester Institute of Technology
Rochester, NY, USA

Garreth W. Tigwell
School of Information
Rochester Institute of Technology
Rochester, NY, USA

Kristen Shinohara
School of Information
Rochester Institute of Technology
Rochester, NY, USA
ABSTRACT
In interaction or user-centered design practices, it is common to employ interviews and think-aloud techniques to gather data about user behavior. These techniques enable researchers to learn about how users think and use technologies during the design and user testing process. However, such techniques involve accessing audio feedback, which may require workarounds if the researcher identifies as deaf or hard of hearing (DHH). We report on a project led by a DHH researcher in which workarounds to audio access resulted in methodological changes. We discuss the implications of these changes for accessible design research.

CCS CONCEPTS
• Human-centered computing → Accessibility.

KEYWORDS
accessibility, research methods, design methods
ACM Reference Format:
Becca Dingman, Garreth W. Tigwell, and Kristen Shinohara. 2021. Interview
and Think Aloud Accessibility for Deaf and Hard of Hearing Participants in
Design Research. In The 23rd International ACM SIGACCESS Conference on
Computers and Accessibility (ASSETS ’21), October 18–22, 2021, Virtual Event,
USA. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3441852.
1 INTRODUCTION
In user-centered design, interview and think aloud methods are popular ways to learn about users' thoughts and preferences when using technologies [ ]. Gathering this information from users can help designers and researchers to know how users respond to and use technologies, which in turn can help designers and researchers to envision new and improved designs. Using these methods, we investigated the experiences of DHH podcast users [ ]. Our aims were to understand personal interest, accessibility, and future design considerations. Although podcasts are a popular way for people to get information about a specific topic (the average American podcast listener follows 8 podcasts per week) [ ], not all podcasts provide transcripts, and podcast-hosting platforms are not
designed in a way that allows users to access transcripts. Throughout our work, we made methodological adjustments to account for the inaccessibility of our research process. Specifically, DHH participants whose primary language is ASL adjusted to think-aloud protocols by describing their actions first, then completing the action, rather than simultaneously speaking and doing. Additionally, DHH participants vary widely in communication preferences, requiring nuanced adjustments to interview technique. We reflect and report on these experiences in the context of completing a design investigation into the accessibility of podcast technologies for DHH users. We provide suggestions for conducting similar work and for future research.
2 ADAPTING RESEARCH METHODS
We make use of a multitude of research methods in HCI, and must select the most suitable method to answer our research questions [ ]. For example, we conduct interviews when we want to understand why people react in a particular way to new technologies [ ], employ diary studies to capture data in-the-wild and overcome the memory biases associated with interviews and questionnaires [ ], or run controlled lab-based studies to accurately compare interaction methods [ ]. In HCI research, the think aloud method has provided design researchers with a way to capture what technology users are thinking as they test designs [ ], offering insights into why people do what they do when using technology.
Although effort is directed toward selecting the best research methods, we must consider whether those methods are appropriate for the participant sample. For example, little work has focused on adapting design process methods and tools for cultural differences [ ], despite data indicating cultural background can alter response behaviors in specific usability settings [ ], and the accessibility implications of this are unknown.
Although the think aloud protocol is useful for gaining insight into users' thinking in-the-moment [ ], it is yet unclear how such a protocol ought to be adjusted (if at all) for users who do not employ verbal communication. For example, DHH people have their own culture and language system (e.g., ASL in the US) [ ], which has generated interest in adapting traditional methods we use in HCI. Previous work investigated the feasibility of think aloud methods with DHH people, finding the method appropriate, and providing guidance for working with interpreters during think aloud sessions [ ]. However, such work did not evaluate the effectiveness of the method, or investigate adjustments for DHH researchers. In other work, researchers successfully translated the System Usability Scale into ASL, finding it maintained criterion validity and internal reliability during an evaluation study [ ].
There are opportunities to further reflect on whether we need to define new processes for other data collection methods for DHH users.

3 METHOD
Our investigation into accessible podcast design involved user assessment interviews and feedback on prototypes [ ]. The first author, who identifies as DHH, conducted the semi-structured interviews and prototype feedback sessions to understand the experiences DHH users have with podcasts and their desires for features to include on podcast platforms. Participants were recruited for the interviews, and then invited back to give feedback on the prototypes designed from the interview findings. A second prototype session with a different group of participants elicited feedback on a final design. See [ ] for more details on the DHH podcast platform.
3.1 Interviews
In the interviews, participants were asked questions about their experience with podcasts, their desires to change the current platform they use, their ideas on how transcripts should appear on the platform, and other features that could enhance their experience while listening to podcasts [ ]. We discussed using automatic speech recognition (ASR) to auto-transcribe podcasts, including a threshold for potential errors.
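An error threshold for auto-transcription is typically operationalized as a cap on word error rate (WER) against a human-verified reference. As a minimal sketch of that idea (the function names and the 10% threshold are illustrative assumptions, not values from the study):

```python
# Hypothetical sketch: scoring an ASR podcast transcript against a human
# reference transcript using word error rate (WER), then checking it
# against an assumed acceptability threshold.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the standard word-level edit-distance dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def transcript_acceptable(reference: str, hypothesis: str,
                          threshold: float = 0.10) -> bool:
    """Assumed policy: accept a transcript only if WER is at or below
    the threshold (10% here, purely for illustration)."""
    return word_error_rate(reference, hypothesis) <= threshold
```

In practice, where to set such a threshold for DHH readers is exactly the kind of question participant feedback would need to answer.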
Interview participants were recruited through community social media groups and email lists, including the "NTID Community" listserv. The "RIT Cross-Registered Community" comprises DHH students supported by NTID who are taking RIT classes. Students supported by NTID receive access services including captioning services, ASL interpreters, and note-taking services.
We recruited participants who were at least 18 years of age, who identified as deaf or hard of hearing, and who followed a podcast or wanted to follow a podcast. Interviews lasted 20 to 45 minutes and were conducted over video-conferencing software (Zoom) to limit in-person interactions due to the COVID-19 global pandemic. The participants were entered into a $25 raffle as compensation.
Interviews were conducted using different communication methods based on participants' preferences. For example, interview preferences included simultaneous communication of spoken English and American Sign Language (ASL), also known as "simcom"; communication through a real-time captionist; Google's ASR transcription app, Live Transcribe; as well as ASL. The interviews were video recorded on Zoom with participant consent.
Seven participants were interviewed. Their ages ranged from 21 to 27. Four identified as male, one as female, and two as non-binary. All except one participant had some experience with listening to a podcast. We show participant hearing status, languages used daily, and preferred communication method for interviews in Table 1. While some participants preferred simcom, some participants preferred to use Signed Exact English (SEE).
3.2 Think Aloud With Prototypes
Initial low-fidelity prototypes were created using Figma based on the information gathered from the interviews. The prototypes were then shared remotely with four of the participants from the original interviews who identified as DHH. Due to the small number of participants for this phase of the project, we refrain from more specifically identifying preferred communication methods.

Feedback sessions were 20 to 30 minutes each and were conducted on Zoom. Participants were asked to interact with the low-fidelity prototype through the link provided on Figma, and to share their screens and click through the prototype using their mouse or trackpad. Participants were given one of seven scenarios at a time and asked to work through the prototype thinking aloud [ ]. Some participants conducted the think aloud in ASL while some did so in spoken English. Those who used ASL signed what they planned to do, then interacted with the prototype.
Based on participant feedback on the low-fidelity prototypes, high-fidelity prototypes were created. To assess these high-fidelity prototypes, a new group of participants was recruited to give feedback. Four new participants were recruited who had not previously been interviewed and had not previously interacted with the low-fidelity prototype. All participants identified as DHH; three identified as female, and one as male. Again, we refrain from providing specific participant communication preferences due to the low number of participants.

The high-fidelity prototype feedback sessions ranged from 20 to 30 minutes each and were conducted on Zoom. During the feedback session, participants were asked to first explore the prototype, and then to complete a set of tasks using a Figma rendition of the high-fidelity prototype. Participants completed the same seven tasks as in the low-fidelity feedback session. During the feedback sessions, participants were asked to use the think aloud protocol. Specifically, they were instructed to think aloud in ASL or English when completing the tasks. Again, we observed that participants who used ASL signed what they planned to do before interacting with the prototype.
4 REFLECTIONS AND DISCUSSION
We highlight that we encountered some unexpected interactions when conducting this work. First, participants preferred a variety of different approaches to the interview process. Second, we observed that some participants conducted their think aloud in ASL by signing what they planned to do before interacting with the prototype.
4.1 Variation in Interview Preferences
As this work was led by a DHH researcher, they were able to recognize the different communication preferences of individual participants and respond accordingly. Thus, we were able to collect additional information about participant communication preferences. We highlight that this variation in communication preferences should perhaps be taken into account in such interviews, as the DHH researcher did in this case. In addition, because the interviews were about podcast accessibility (and in some cases, about the clarity of podcast speakers), knowing communication preferences could help the researcher better understand participant preferences when dealing with podcast audio.
Table 1: Interview Participant Hearing Status and Preferred Communication
ID Hearing Status Languages Used Daily Preferred Communication
P1 hard of hearing Simcom, English, ASL Simcom, Spoken English
P2 deaf Simcom, English, SEE Simcom
P3 hard of hearing English, Hindi, Japanese Spoken English
P4 deaf/hard of hearing English Spoken English, ASL
P5 deaf English, Thai Written English
P6 hard of hearing Simcom, English, ASL Spoken English
P7 hard of hearing Simcom, English, ASL Spoken English
4.2 ASL Signers and the Think Aloud Protocol
Whereas prior work found gestural think aloud protocols for DHH participants comparably feasible to sessions conducted with hearing participants in terms of comment quantity [ ], we observed in both feedback sessions that some participants signed the think-aloud portion of the task in ASL before commencing with the task. Although we did not observe issues as a result throughout the feedback sessions (we did not observe participants changing their "think-aloud" statements once they completed their tasks), we consider that the sequential order of steps slightly alters the "in-the-moment" benefit of the think aloud protocol. Specifically, this adjustment enabled us to gain insight into what participants planned to do, but it did not allow us to take advantage of the think aloud protocol to its full potential because on-the-fly actions or thoughts may not be captured. Larger studies, or studies that engage more complex user tasks, may disadvantage ASL signers using think-aloud by not sufficiently capturing their in-the-moment reactions. In addition, there may be wide variation in how this might be addressed. For example, participants in our study signed first, and then completed the tasks. However, for more complex tasks, participants might pause mid-task to add their thoughts, or add more thoughts post-task. Finally, whereas other work also provided recommendations for working with interpreters [ ], such as how to address prompts or comments that may be lost in translation, this work was led by a DHH researcher, ameliorating at least some potential communication issues.
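One way a future analysis could quantify this sequential pattern is to tag each timestamped think-aloud comment as occurring before, during, or after its task window. A minimal sketch, assuming access to session logs with comment timestamps (all names here are hypothetical, not artifacts of our study):

```python
# Illustrative sketch: bucketing think-aloud comments relative to a task
# window so the share of pre-task (planned) vs. mid-task (in-the-moment)
# commentary can be compared across participants.

from dataclasses import dataclass

@dataclass
class Comment:
    time: float  # seconds into the session
    text: str

def classify_comments(comments, task_start: float, task_end: float):
    """Return comments bucketed as "pre", "mid", or "post" relative to
    the [task_start, task_end] window."""
    buckets = {"pre": [], "mid": [], "post": []}
    for c in comments:
        if c.time < task_start:
            buckets["pre"].append(c)
        elif c.time <= task_end:
            buckets["mid"].append(c)
        else:
            buckets["post"].append(c)
    return buckets
```

A high pre-task share for ASL signers, under this scheme, would make the "sign first, then act" pattern directly measurable rather than anecdotal.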
5 CONCLUSION
We conducted a user assessment and prototyping study with DHH participants about the accessibility of podcasts. In doing the work, we made some slight adjustments to audio/verbal interviewing and think-aloud protocols to better suit the needs of DHH participants with varying communication preferences. For our limited study, we did not observe that these adjustments impacted data collection or analysis. However, we reflect that larger studies or more complex usability testing may warrant additional consideration to ensure these procedures are accessible and inclusive. We provide here some examples of how these adjustments were made and accounted for in our work. Future work may investigate other ways to ensure accessibility of these and similar procedures when including DHH researchers and participants.
REFERENCES
Chadia Abras, Diane Maloney-Krichmar, Jenny Preece, et al. 2004. User-centered design. In Bainbridge, W. Encyclopedia of Human-Computer Interaction. Thousand Oaks: Sage Publications 37, 4 (2004), 445–456.
Ted Boren and Judith Ramey. 2000. Thinking aloud: Reconciling theory and
practice. IEEE transactions on professional communication 43, 3 (2000), 261–278.
Apala Lahiri Chavan. 2005. Another culture, another method. In Proceedings of
the 11th International Conference on Human-Computer Interaction, Vol. 21.
Becca Dingman, Garreth W. Tigwell, and Kristen Shinohara. 2021. Designing a
Podcast Platform for Deaf and Hard of Hearing Users. In The 23rd International
ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, USA)
(ASSETS ’21). Association for Computing Machinery, New York, NY, USA. https:
Karen Emmorey, Stephen M Kosslyn, and Ursula Bellugi. 1993. Visual imagery
and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL
signers. Cognition 46, 2 (1993), 139–181.
Tiago João Vieira Guerreiro, Hugo Nicolau, Joaquim Jorge, and Daniel Gonçalves.
2010. Assessing Mobile Touch Interfaces for Tetraplegics. In Proceedings of the 12th
International Conference on Human Computer Interaction with Mobile Devices and
Services (Lisbon, Portugal) (MobileHCI ’10). Association for Computing Machinery,
New York, NY, USA, 31–34. https://doi.org/10.1145/1851600.1851608
Matt Huenerfauth, Kasmira Patel, and Larwan Berke. 2017. Design and Psy-
chometric Evaluation of an American Sign Language Translation of the Sys-
tem Usability Scale. In Proceedings of the 19th International ACM SIGACCESS
Conference on Computers and Accessibility (Baltimore, Maryland, USA) (AS-
SETS ’17). Association for Computing Machinery, New York, NY, USA, 175–184.
Reed Larson and Mihaly Csikszentmihalyi. 2014. The experience sampling
method. In Flow and the foundations of positive psychology. Springer, 21–34.
Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. 2017. Research methods
in human-computer interaction. Morgan Kaufmann.
Jung-Joo Lee and Kun-Pyo Lee. 2007. Cultural Dierences and Design Methods
for User Experience Research: Dutch and Korean Participants Compared. In
Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces
(Helsinki, Finland) (DPPI ’07). Association for Computing Machinery, New York,
NY, USA, 21–34. https://doi.org/10.1145/1314161.1314164
Clayton Lewis. 1982. Using the "thinking-aloud" method in cognitive interface design. IBM TJ Watson Research Center, Yorktown Heights, NY.
Clayton Lewis and John Rieman. 1993. Task-Centered User Interface Design: A Practical Introduction. University of Colorado, Boulder, Department of Computer Science.
Ang Li, Alice Wang, Zahra Nazari, Praveen Chandar, and Benjamin Carterette.
2020. Do podcasts and music compete with one another? Understanding users’
audio streaming habits. In Proceedings of The Web Conference 2020. 1920–1931.
Carol Padden, Tom Humphries, and Carol Padden. 2009. Inside deaf culture.
Harvard University Press.
Vera Roberts and Deborah Fels. 2002. Methods for inclusion: employing think
aloud protocol with individuals who are deaf. In International Conference on
Computers for Handicapped Persons. Springer, 284–291.
Vera Louise Roberts and Deborah I Fels. 2006. Methods for inclusion: Employing
think aloud protocols in software usability studies with individuals who are deaf.
International Journal of Human-Computer Studies 64, 6 (2006), 489–501.
Yvonne Rogers, Helen Sharp, and Jenny Preece. 2011. Interaction design: beyond
human-computer interaction. John Wiley & Sons.
Garreth W. Tigwell, Kristen Shinohara, and Laleh Nourian. 2021. Accessibility
Across Borders. In CHI ’21 Workshop: Decolonizing HCI Across Borders (CHI
Workshop ’21). 1–4. https://arxiv.org/abs/2105.01488
Sarah J. Tracy. 2013. Qualitative Research Methods: Collecting Evidence, Crafting
Analysis, Communicating Impact. Wiley-Blackwell.
Helma Van Rijn, Yoonnyong Bahk, Pieter Jan Stappers, and Kun-Pyo Lee. 2006.
Three factors for contextmapping in East Asia: Trust, control and nunchi. CoDesign 2, 3 (2006), 157–177. https://doi.org/10.1080/15710880600900561
MW Van Someren, YF Barnard, and JAC Sandberg. 1994. The think aloud method: a practical approach to modelling cognitive processes. London: Academic Press (1994).