About
106 Publications · 50,430 Reads · 1,985 Citations
Publications (106)
Autonomous navigation robots can increase the independence of blind people but often limit user control, following what is called in Japanese an "omakase" approach where decisions are left to the robot. This research investigates ways to enhance user control in social robot navigation, based on two studies conducted with blind participants. The fir...
People who are blind perceive the world differently than those who are sighted, which can result in distinct motion characteristics. For instance, when crossing at an intersection, blind individuals may have different patterns of movement, such as veering more from a straight path or using touch-based exploration around curbs and obstacles. These b...
Object recognition technologies hold the potential to support blind and low-vision people in navigating the world around them. However, the gap between benchmark performances and practical usability remains a significant challenge. This paper presents a study aimed at understanding blind users' interaction with object recognition systems for identi...
Blind people are often called to contribute image data to datasets for AI innovation with the hope for future accessibility and inclusion. Yet, the visual inspection of the contributed images is inaccessible. To this day, we lack mechanisms for data inspection and control that are accessible to the blind community. To address this gap, we engage 10...
High-precision virtual environments are increasingly important for various education, simulation, training, performance, and entertainment applications. We present HoloCamera, an innovative volumetric capture instrument to rapidly acquire, process, and create cinematic-quality virtual avatars and scenarios. The HoloCamera consists of a custom-desig...
To ensure that AI-infused systems work for disabled people, we need to bring accessibility datasets sourced from this community in the development lifecycle. However, there are many ethical and privacy concerns limiting greater data inclusion, making such datasets not readily available. We present a pair of studies where 13 blind participants engag...
Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the...
Beginning in 1999, the Trace Center partnered with the Technology Access Program (TAP) at Gallaudet University to propose the creation of the first Telecommunication Rehabilitation Engineering Research Center.
Even before Mosaic revolutionized the Internet and the Web by providing the first modern Web browser, the Trace Center was providing information via Gopher servers that used a text-based protocol that looked essentially like a searchable file directory mounted on the Internet.
With the advent of the personal computer, the Trace Center’s primary focus shifted from communication to computer access.
As noted in the origin story, the Trace Center started out working on a communication aid for a single individual, Lydell, a young boy whose only means of communication was a wooden communication board. This device developed with him became the Auto-Monitoring Communication Board or AutoCom.
For the first 25 years of the Trace Center’s life, there was no “Internet” or Web as it exists today. It is increasingly hard to imagine what that was like. Think about trying to find information about assistive technologies, or the nearest clinical program, or who is working on augmentative communication or any related topic without using the Inte...
Interestingly, the Trace Center owes its inception to a benevolent deception.
In August 2016, the Trace Center moved from the University of Wisconsin-Madison, where it had begun and flourished for 45 years, to the University of Maryland, College Park. The move was inspired by an invitation from Ben Shneiderman to consider bringing the Trace Center out to the University of Maryland (UMD)—or at least come visit the iSchool at...
Trace’s first efforts in the area of kiosks and ITM access occurred in the late 1990s.
Trace is now entering its 6th decade with an expanded team of researchers and a new home. The basic mission and values remain the same—but the capabilities have expanded. The current program scope of activities is briefly summarized here.
These are lessons that needed to be, or were serendipitously, learned by the Trace team over the years. They are shared here as reminders for more experienced researchers, and for younger researchers in the hope that not all of them will need to be learned through personal experience. They are loosely grouped by topic.
In 1992, the Trace Center drew from its work around computer and software access, its work with consumer product companies, its work with students in its accessible design courses, and discussions with colleagues to create Accessible Design of Consumer Products: Guidelines for the Design of Consumer Products to Increase Their Accessibility to...
Over the first 50 years, there have been a number of elements that have defined the Trace Center and are believed to be responsible for its successes. Many of these were developed as the result of lessons learned along the way around what did and didn’t work. They are captured here for others interested in what drove the Trace Center program and wh...
While wrapping up the development of the WCAG 2.0 guidelines in 2008, it became clear that the only way to write guidelines was to assume that consumers would be able to secure assistive technologies that were powerful enough to access the new generation of web technologies.
As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this p...
Teachable object recognizers address a practical need for blind people: instance-level object recognition. They assume one can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in...
Iteration of training and evaluating a machine learning model is an important process to improve its performance. However, while teachable interfaces enable blind users to train and test an object recognizer with photos taken in their distinctive environment, accessibility of training iteration and evaluation steps has received little attention. It...
Current activity tracking technologies are largely trained on younger adults' data, which can lead to solutions that are not well-suited for older adults. To build activity trackers for older adults, it is crucial to collect training data with them. To this end, we examine the feasibility and challenges with older adults in collecting activity labe...
Welcome to Including Disability: A Journal Addressing A Very Long-term Problem
Researchers have adopted remote methods, such as online surveys and video conferencing, to overcome challenges in conducting in-person usability testing, such as participation, user representation, and safety. However, remote user evaluation on hardware testbeds is limited, especially for blind participants, as such methods restrict access to obser...
Egocentric vision holds great promise for increasing access to visual information and improving the quality of life for blind people. While we strive to improve recognition performance, it remains difficult to identify which object is of interest to the user; the object may not even be included in the frame due to challenges in camera aiming withou...
Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years....
Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potenti...
The majority of online video content remains inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily available. We investigate the feasibility of creating more...
Negative attitudes shape experiences with stigmatized conditions such as dementia, from affecting social relationships to influencing willingness to adopt technology. Consequently, attitudinal change has been identified as one lever to improve life for people with stigmatized conditions. Though recognized as a scalable approach, social media has n...
Iteratively building and testing machine learning models can help children develop creativity, flexibility, and comfort with machine learning and artificial intelligence. We explore how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers. Children trained image classifiers and tested each oth...
The spatial behavior of passersby can be critical to blind individuals to initiate interactions, preserve personal space, or practice social distancing during a pandemic. Among other use cases, wearable cameras employing computer vision can be used to extract proxemic signals of others and thus increase access to the spatial behavior of passersby f...
Impacted by the disruptions due to the pandemic as students, teaching assistants, and faculty, in this paper we employ a reflexive self-study to share our perspectives and experiences of engaging in an HCI course on Inclusive Design. We find that we were able to overcome some of the anticipated challenges of transitioning in-person experiential lea...
Curation and sharing of datasets are crucial for innovation, benchmarking, bias mitigation, and understanding of real-world scenarios where AI-infused applications are deployed. This is especially the case for datasets from underrepresented populations typically studied in wellness, accessibility, and aging. However, such datasets are scarce and in...
Audio descriptions can make the visual content in videos accessible to people with visual impairments. However, the majority of online videos lack audio descriptions due in part to the shortage of experts who can create high-quality descriptions. We present ViScene, a web-based authoring tool that taps into the larger pool of sighted non-expert...
Datasets and data sharing play an important role for innovation, benchmarking, mitigating bias, and understanding the complexity of real world AI-infused applications. However, there is a scarcity of available data generated by people with disabilities with the potential for training or evaluating machine learning models. This is partially due to s...
Social media platforms are deeply ingrained in society, and they offer many different spaces for people to engage with others. Unfortunately, accessibility barriers prevent people with disabilities from fully participating in these spaces. Social media users commonly post inaccessible media, including videos without captions (which are important fo...
Speech input is a primary method of interaction for blind mobile device users, yet the process of dictating and reviewing recognized text through audio only (i.e., without access to visual feedback) has received little attention. A recent study found that sighted users could identify only about half of automatic speech recognition (ASR) errors when...
Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technolog...
Egocentric vision holds great promise for increasing access to visual information and improving the quality of life for people with visual impairments, with object recognition being one of the daily challenges for this population. While we strive to improve recognition performance, it remains difficult to identify which object is of interest to th...
Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While facilitating control, their effectiveness can be hindered by the lack of expertise or misconceptions. We investigate how users may conceptualize, experience,...
Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture. Despite the need for deep interdisciplinary knowledge, existing research occurs in se...
For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on user's photos. Here, we propose real-time feedback for communicating the location of an object of interest i...
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blin...
Teachable interfaces can enable end-users to personalize machine learning applications by explicitly providing a few training examples. They promise higher robustness in the real world by significantly constraining conditions of the learning task to a specific user and their environment. While facilitating user control, their effectiveness can be h...
Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community aimed at developing assistive technologies to support the orientation and mobility of blind people, including related disciplines such as accessible computing, cognitive scien...
Twitter continues to be used increasingly for communication related to advocacy, activism, and social change. This is also the case for the disability community. In light of the recently proposed ADA Education and Reform Act in the United States, we investigate factors for the effectiveness of sharing or retweeting messages about topics affecting the rights o...
Indoor localization technologies can enhance quality of life for blind people by enabling them to independently explore and navigate indoor environments. Researchers typically evaluate their systems in terms of localization accuracy and user behavior along planned routes. We propose two measures of path-following behavior: deviation from optimal ro...
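One of the path-following measures named above, deviation from the optimal route, can be sketched as a mean point-to-polyline distance between the walked path and the planned route. This is an illustrative computation under assumed 2-D coordinates, not the paper's exact definition; the route, path, and all names are invented for the example.

```python
# Illustrative path-following measure: mean deviation of a walked path
# from a planned route (polyline). Names and coordinates are made up.
import math

def point_segment_dist(p, a, b):
    """Distance from point p to line segment ab (2-D coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def mean_deviation(path, route):
    """Average distance of each sampled path point to the nearest route segment."""
    devs = [min(point_segment_dist(p, route[i], route[i + 1])
                for i in range(len(route) - 1))
            for p in path]
    return sum(devs) / len(devs)

route = [(0, 0), (10, 0)]                          # planned straight corridor
path = [(0, 0), (2, 1), (5, 2), (8, 1), (10, 0)]   # walk that veers off center
print(mean_deviation(path, route))  # → 0.8
```

A second measure, deviation from the shortest path, would follow the same shape with a different reference polyline.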
Assistive applications for orientation and mobility promote independence for people with visual impairment (PVI). While typical design and evaluation of such applications involves small-sample iterative studies, we analyze large-scale longitudinal data from a geographically diverse population. Our publicly released dataset from iMove, a mobile app...
How can accessibility research leverage advances in machine learning and artificial intelligence with limited data? In this article, we argue that teachable machines can empower accessibility research by enabling individuals with disabilities to personalize a data-driven assistive technology. By significantly constraining the conditions of the mach...
Blind people often need to identify objects around them, from packages of food to items of clothing. Automatic object recognition continues to provide limited assistance in such tasks because models tend to be trained on images taken by sighted people with different background clutter, scale, viewpoints, occlusion, and image quality than in photo...
Software for automating the creation of linguistically accurate and natural-looking animations of American Sign Language (ASL) could increase information accessibility for many people who are deaf. As compared to recording and updating videos of human ASL signers, technology for automatically producing animation from an easy-to-update script would...
In the field of assistive technology, large-scale user studies are hindered by the fact that potential participants are geographically sparse and longitudinal studies are often time consuming. In this contribution, we rely on remote usage data to perform large scale and long duration behavior analysis on users of iMove, a mobile app that supports t...
We investigate a method for selecting recordings of human face and head movements from a sign language corpus to serve as a basis for generating animations of novel sentences of American Sign Language (ASL). Drawing from a collection of recordings that have been categorized into various types of non-manual expressions (NMEs), we define a method for...
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf and have low English literacy skills. State-of-art sign language animation tools focus...
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Researchers evaluate their sign language animation syste...
To support our research on ASL animation synthesis, we have adopted and enhanced a new virtual human animation platform that provides us with greater fine-grained control of facial movements than our previous platform. To determine whether this new platform is sufficiently expressive to generate understandable ASL animations, we analyzed responses...
Advancing the automatic synthesis of linguistically accurate and natural-looking American Sign Language (ASL) animations from an easy-to-update script would increase information accessibility for many people who are deaf by facilitating more ASL content to websites and media. We are investigating the production of ASL grammatical facial expressions...
Animations of American Sign Language (ASL) can make information accessible for many signers with lower levels of English literacy. Automatically synthesizing such animations is challenging because the movements of ASL signs often depend on the context in which they appear, e.g., many ASL verb movements depend on locations in the signing space the s...
Automatic synthesis of linguistically accurate and natural-looking American Sign Language (ASL) animations would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Based on several years of studies, we identify best practices for conducting experimental evaluations of...
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. We are investigating the synthesis of ASL facial express...
We introduce and compare three approaches to calculate structure- and content-based performance metrics for user-based evaluation of math audio rendering systems: Syntax Tree alignment, Baseline Structure Tree alignment, and MathML Tree Edit Distance. While the first two require “manual” tree transformation and alignment of the mathematical expressio...
Audio rendering of mathematical expressions has accessibility benefits for people with visual impairment. Seeking a systematic way to measure participants’ perception of the rendered formulae with audio cues, we investigate the design of performance metrics to capture the distance between reference and perceived math expressions. We propose EAR-Mat...
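As an illustration of the tree-based metrics named in the two entries above, here is a minimal top-down tree edit distance over expression trees: relabeling a node costs 1, and inserting or deleting a node costs the size of its subtree. This is a simplified illustrative variant, not the EAR-Math definition from the papers; the tuple encoding, costs, and example trees are all invented for the sketch.

```python
# Simplified top-down tree edit distance between expression trees,
# sketching the kind of structural metric described above.
# Trees are (label, [children]) tuples; costs are illustrative only.

def tree_size(t):
    label, children = t
    return 1 + sum(tree_size(c) for c in children)

def tree_dist(a, b):
    (la, ca), (lb, cb) = a, b
    relabel = 0 if la == lb else 1
    return relabel + forest_dist(ca, cb)

def forest_dist(xs, ys):
    # Levenshtein over the child lists: matching a child recurses,
    # inserting or deleting a child costs its whole subtree size.
    n, m = len(xs), len(ys)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + tree_size(xs[i - 1])
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + tree_size(ys[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + tree_size(xs[i - 1]),               # delete subtree
                d[i][j - 1] + tree_size(ys[j - 1]),               # insert subtree
                d[i - 1][j - 1] + tree_dist(xs[i - 1], ys[j - 1]),  # match/relabel
            )
    return d[n][m]

# Reference (a+b)/2 vs a perceived (a-b)/2: one relabel apart.
ref = ("frac", [("plus", [("a", []), ("b", [])]), ("2", [])])
perceived = ("frac", [("minus", [("a", []), ("b", [])]), ("2", [])])
print(tree_dist(ref, perceived))  # → 1
```

Distances like this can then be normalized by tree size to compare perceived and reference expressions of different complexity.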
Our lab has conducted experimental evaluations of ASL animations, which can increase accessibility of information for signers with lower literacy in written languages. Participants watch animations and answer carefully engineered questions about the information content. Because of the labor-intensive nature of our current evaluation approach, we se...
Video captioning can increase the accessibility of information for people who are deaf or hard-of-hearing and benefit second language learners and reading-deficient students. We propose a caption editing system that harvests crowdsourced work for the useful task of video captioning. To make the task an engaging activity, its interface incorporates...
We have developed a collection of stimuli (with accompanying comprehension questions and subjective-evaluation questions) that can be used to evaluate the perception and understanding of facial expressions in ASL animations or videos. The stimuli have been designed as part of our laboratory's on-going research on synthesizing ASL facial expressions...
Animations of American Sign Language (ASL) have accessibility benefits for signers with lower written-language literacy. Our lab has conducted prior evaluations of synthesized ASL animations: asking native signers to watch different versions of animations and answer comprehension and subjective questions about them. Seeking an alternative method of...
Many researchers internationally are studying how to synthesize computer animations of sign language; such animations have accessibility benefits for people who are deaf and have lower literacy in written languages. The field has not yet formed a consensus as to how to best conduct evaluations of the quality of sign language animations, and this ar...
Braille code, employing six embossed dots evenly arranged in rectangular letter spaces or cells, constitutes the dominant touch reading or typing system for the blind. Limited to 63 possible dot combinations per cell, there are a number of application examples, such as mathematics and sciences, and assistive technologies, such as braille displays,...
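The 63-combination figure above follows directly from the six binary dots per cell: 2^6 = 64 raised/flat patterns, minus the empty cell. A quick enumeration (a sketch using only the standard library) confirms the count:

```python
# Enumerate all non-empty braille cell patterns: six dots, each raised (1)
# or flat (0), excluding the all-flat cell.
from itertools import product

cells = [dots for dots in product((0, 1), repeat=6) if any(dots)]
print(len(cells))  # → 63
```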
Facial expressions and head movements communicate essential information during ASL sentences. We aim to improve the facial expressions in ASL animations and make them more understandable, ultimately leading to better accessibility of online information for deaf people with low English literacy. This paper presents how we engineer stimuli and questi...