The Making of an Accessible
Intensive Research Experience
Suzanne Menzel
Indiana University – Bloomington
menzel@iu.edu
Katie A. Siek
Indiana University – Bloomington
ksiek@iu.edu
David Crandall
Indiana University – Bloomington
djcran@iu.edu
Abstract—An IRE is an Intensive Research Experience compressed into a small number of consecutive days, during which students receive highly focused mentoring from practicing researchers. We describe our experience with two specific IREs at
the HelloResearch workshop in 2018. Students in each IRE met as
a group (of sizes 13 and 9, respectively) for nine research sessions,
totaling about 14 contact hours, over three days. The participants
were selected from a diverse (in terms of race, ethnicity, and
socioeconomic status) group of female undergraduates attending
Indiana University and schools across the country. We describe
our experiences mentoring a blind student in the “Creating
Custom Technology to Improve One’s Quality of Life” project
and a deaf student in the “Activity Recognition Through Deep
Learning with Sound and Video” project. We share general
lessons we learned about accessibility and announce plans for
a second HelloResearch workshop in 2020.
Index Terms—Accessibility, Undergraduate Research
I. INTRODUCTION
HelloResearch [1] delivers Intensive Research Experiences
(IREs) to undergraduates in a three-day workshop. Like the
OurCS [2] workshop at Carnegie Mellon University, HelloResearch provides opportunities for students in computing-related
fields to work on exploratory problems in teams led by world-
class researchers [3]. The aim is to broaden the ranks of
women leading their field by increasing the number and success of undergraduate women in the U.S. who enroll in Ph.D.
programs. The first HelloResearch, in October 2018, offered
a dozen IRE projects to 92 participants. Subsequent iterations
at this scale are planned for 2020 and yearly thereafter.
HelloResearch recruits a diverse group of future researchers
from underserved populations by proactively targeting female
students from specific underrepresented groups [4]. Individuals
with disabilities are an often overlooked component of the
diversity equation [5]. In 2018, six of our participants self-
identified as people with disabilities. In this paper, we describe
actions we took before and during the workshop to ensure
full participation by a blind student and a deaf student. We
report on the experiences of the mentors who worked directly
with these students on research projects. In hindsight, there
are things we wish we had done differently — we disclose
our missteps and propose measures for future improvement.
HelloResearch is supported by Google, Oracle Academy, AccessComputing,
Beckman-Coulter, IU’s School of Informatics, Computing, and Engineering,
and IU’s Office of the VP for Diversity, Equity & Multicultural Affairs.
II. ACCESSIBILITY PREPARATIONS
Three months before the workshop, we surveyed our participants about project interests. We assigned students with disabilities to their first choice of research project and connected them with their project mentors. We sought advice
from a variety of sources. After consulting with the director
of our Disabled Student Services (DSS) office, we asked her
to speak in the opening session of the workshop. She, in turn,
invited her blind colleague to co-present — they discussed
and demonstrated appropriate ways to interact with and offer
assistance to people with disabilities. This introduction by
knowledgeable professionals provided an opening for mentors
to emphasize design for user empowerment, i.e., the concept
that including people with disabilities in the design phase
results in more accessible technology [6].
1) Planning for a blind participant: We arranged for the
blind student’s hotel room to be near the elevator and for her to
room with someone who requested the same research project.
We hired a mobility specialist to meet the student on the
first workshop day to help her orient herself to the campus
and the large maker space where all the research sessions
for her project would take place. She let us know that she
uses a screen reader exclusively to access her computer. To
our disappointment, the software tools the leaders planned to
use (Adobe Illustrator, Autodesk’s Tinkercad, and the Arduino
IDE) are not screen-reader accessible. After brainstorming
with the student, we came up with a plan that allowed her
to use an application on her laptop to draw designs which we
then rastered on the laser cutter. The student further suggested
she could use Emacspeak [7] to write code and then paste it
into the Arduino IDE.
2) Planning for a deaf participant: The deaf student requested American Sign Language (ASL) interpretation during
the workshop. We applied for and received a $3.5K mini-grant
from AccessComputing [8] to provide two interpreters for three
days. We prepared a detailed list of expected technical terms
and their definitions, so that the interpreters could be familiar
with them ahead of time and practice how to interpret them.
III. LEADER EXPERIENCES DURING THE WORKSHOP
1) IRE in Sociotechnical Systems (Katie A. Siek): This
project began with an accelerated iteration process where we
summarized our peer-reviewed, published qualitative needs
assessment work with pregnant women, people with rare
diseases, low socioeconomic status children in school environments, and rural older adults. Afterwards, the students self-organized into subgroups of 2-5 people based on a shared interest in addressing a need in one of the target populations.

Fig. 1: Building a smart cubby to monitor levels of donated clothing. (Photo credit: Chris Kowalczyk)
The blind student’s subgroup created a smart cubby (see
Figure 1) for a public school. Light sensors and sound
components integrated into the cubby allow a person to see,
hear, or feel how full the cubby is, at the press of a button.
Additionally, the team used the laser cutter to create a braille
label for the cubby showing the size and type of clothing
it contains (e.g., Youth L T-shirts). After seeing the smart
cubby during one of the “report out” sessions with the larger
research team, the other subgroups started asking, “How is our
project accessible to someone who is visually impaired? Hearing
impaired?” Then, they too started integrating tactile, auditory,
and visual indicators to communicate information.
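To make the cubby's interaction concrete, here is a minimal Arduino (C++) sketch in the spirit of the team's design. It is our own illustrative reconstruction rather than the team's actual code: the pin assignments, the photoresistor-based fill estimate, and the light-to-level mapping are all assumptions.

// Hypothetical smart-cubby sketch (not the team's code).
// Assumes a photoresistor divider on A0 inside the cubby (more clothing
// blocks more light), a push button on pin 2, and a piezo buzzer on pin 8.
const int LIGHT_PIN  = A0;
const int BUTTON_PIN = 2;
const int BUZZER_PIN = 8;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // button connects pin 2 to ground
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {      // button pressed
    int light = analogRead(LIGHT_PIN);       // 0 (dark, full) .. 1023 (bright, empty)
    int level = map(light, 0, 1023, 4, 0);   // 4 pulses = full, 0 = empty
    for (int i = 0; i < level; i++) {
      tone(BUZZER_PIN, 880, 150);            // 150 ms beep at 880 Hz
      delay(300);                            // gap so pulses are countable
    }
    delay(500);                              // crude debounce
  }
}

Encoding the fill level as a count of buzzer pulses means the same signal can be heard, or felt by resting a hand on the buzzer, matching the cubby's see/hear/feel goal.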
To program the Arduino microcontrollers, the blind student
wrote code in Emacspeak, all the while conferring with her
partner, and then copied it to the Arduino IDE for testing. After a time, they decided to add a buzzer to their circuit that vibrated twice after a successful code upload, alerting the blind student that the transfer had completed. Her partner then narrated the circuit's behavior as they worked through each coding challenge. The pair continued to add extra vibrations and sounds so that the blind student could follow what the circuit was doing and assist in debugging.
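This upload-confirmation pattern works because an Arduino runs setup() exactly once each time newly uploaded code starts executing. A minimal sketch of the idea follows; it is our reconstruction, and the pin number and tone frequencies are assumptions.

// Hypothetical non-visual feedback pattern (our sketch, not the team's code).
const int BUZZER_PIN = 8;  // piezo buzzer

// Emit n short pulses at freqHz; pulse counts and pitches encode states.
void pulse(int n, int freqHz) {
  for (int i = 0; i < n; i++) {
    tone(BUZZER_PIN, freqHz, 100);  // 100 ms beep
    delay(250);                     // gap so pulses are countable by ear or touch
  }
}

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  pulse(2, 440);  // two pulses: new code is running, so the upload succeeded
}

void loop() {
  // During debugging, distinct pulse patterns can mark execution paths, e.g.:
  //   pulse(1, 880);  // reached the sensor-read branch
  //   pulse(3, 220);  // took an error path
}

Because a piezo element physically vibrates as it sounds, these pulses can be felt as well as heard.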
2) IRE in Computer Vision and Audio Processing (David
Crandall): This project investigated integrating visual and
audio perception into home virtual assistants like Amazon’s
Alexa. Project goals included brainstorming ways that visual and audio perception could improve the devices' awareness of their environment so that they could better help and interact with people, and then
implementing and testing prototype machine learning models
for carrying out these perceptual tasks.
The deaf student contributed immensely by suggesting ways
a virtual assistant could improve the safety and security of a
household. For example, she explained that a hearing-impaired
household member may be unable to detect sounds of danger
or distress (a child’s crying, a burglar alarm, a dog barking, an
elderly relative falling, etc.), whereas a virtual assistant could
perceive these sounds and signal the person accordingly.
Technical jargon was often misinterpreted by the sign language interpreters, who were not CS experts. It was difficult
for the interpreters to keep up with the conversation, especially
when terms were finger-spelled letter by letter. We decided to
open a shared Google Document, projected on an overhead
screen, that everyone could edit. We summarized our discussion with bullet points as we went along, which had the side
benefit of generating a set of detailed notes. This also provided
an easy way for us to share URLs, source code snippets, etc.
With time, we learned to resist the urge to look at and
speak to the interpreters, instead of to the student herself. The
student also (informally) taught the group about the experience
of hearing-impaired individuals, including, for example, social
customs around the use of sign language (e.g., that different
regions of the country have different dialects and slang signs).
IV. LESSONS LEARNED
We provided a presentation template to the research teams to
aid them in preparing their slides for the presentation session
at the end of the workshop. We wish we had specified, in
the template, that all presentations must be accessible, and
illustrated simple measures to do so, e.g., alt-text for images
(a phrase or sentence tag that a person using a screen reader
hears when they encounter the image), subtitles on video,
captions summarizing what the presenters are saying, and
uncluttered slides with simple bullet points. For teams with blind participants, we should have collected advance copies of notes, papers, and slide presentations from mentors and submitted them to our DSS office in plenty of time for them to be converted for use with a screen reader.
As a cost-saving measure, we contracted the ASL interpreters to start work each day after breakfast. We quickly
learned this was a mistake, especially during the crucial
moments at the start of the workshop, when a deaf participant
explained that she felt isolated and uncomfortable because she
was not able to communicate with others at her table. Although
we remedied this for the next day, the damage had been done.
Initially, we planned for the interpreters to be on stage
during the talks, but the deaf participant asked that we seat
them directly at her table. We learned that deaf people do
not generally view themselves as disabled and that having
highly visible interpreters draws unwanted attention to them.
In the future, interpreters will be present during meals, tours,
and the poster session. We will attempt to find interpreters
with CS expertise to more accurately and naturally translate
technical terms — although we had provided an advance list of
terms and definitions, the interpreters struggled with relatively
common phrases such as “machine learning”. We need to
consider ways to make sure the deaf participant always feels
engaged in the research conversations, perhaps by giving her
more of a leadership role.
We would like to give the students with disabilities a voice by including them as authors in future experience reports. Finally, we recognize the need to involve researchers with disabilities as speakers and mentors throughout the workshop.
REFERENCES
[1] (2018) HelloResearch at Indiana University. [Online]. Available: http://helloresearch.sice.indiana.edu
[2] (2017) OurCS at Carnegie Mellon University. [Online]. Available: http://www.cs.cmu.edu/ourcs
[3] S. Menzel and C. Frieze, “Experiencing research through OurCS: Opportunities for undergraduate research in computer science,” in 2018 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT), Feb. 2018, pp. 1–4. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/RESPECT.2018.8491693
[4] S. Menzel, K. A. Siek, and D. Crandall, “Hello Research! Developing an intensive research experience for undergraduate women,” in ACM Technical Symposium on Computer Science Education (SIGCSE), 2019.
[5] B. Blaser, R. E. Ladner, and S. Burgstahler, “Including disability in diversity,” in 2018 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT), Feb. 2018, pp. 1–4. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/RESPECT.2018.8491717
[6] R. E. Ladner, “Design for user empowerment,” Interactions, vol. 22, no. 2, pp. 24–29, Feb. 2015. [Online]. Available: http://doi.acm.org/10.1145/2723869
[7] Emacspeak – the complete audio desktop. [Online]. Available: http://emacspeak.sourceforge.net
[8] (2018) AccessComputing mini-grant. [Online]. Available: https://www.washington.edu/accesscomputing/apply-accesscomputing-minigrant