About
79 Publications
37,956 Reads
3,088 Citations
Current institution: Cornell Tech
Publications (79)
Blind and Low Vision (BLV) people have adopted AI-powered visual interpretation applications to address their daily needs. While these applications have been helpful, prior work has found that users remain unsatisfied by their frequent errors. Recently, multimodal large language models (MLLMs) have been integrated into visual interpretation applica...
Social VR has increased in popularity due to its affordances for rich, embodied, and nonverbal communication. However, nonverbal communication remains inaccessible for blind and low vision people in social VR. We designed accessible cues with audio and haptics to represent three nonverbal behaviors: eye contact, head shaking, and head nodding. We e...
As social VR applications grow in popularity, blind and low vision users encounter continued accessibility barriers. Yet social VR, which enables multiple people to engage in the same virtual space, presents a unique opportunity to allow other people to support a user's access needs. To explore this opportunity, we designed a framework based on phy...
With the increasing adoption of social virtual reality (VR), it is critical to design inclusive avatars. While researchers have investigated how and why blind and d/Deaf people wish to disclose their disabilities in VR, little is known about the preferences of many others with invisible disabilities (e.g., ADHD, dyslexia, chronic conditions). We fi...
Teachers of the visually impaired (TVIs) regularly present tactile materials (tactile graphics, 3D models, and real objects) to students with vision impairments. Researchers have been increasingly interested in designing tools to support the use of tactile materials, but we still lack an in-depth understanding of how tactile materials are created a...
Older adults are using voice-based technologies in a variety of different contexts and are uniquely positioned to benefit from smart speakers' handsfree, voice-based interface. In order to better understand the ways in which older adults engage with and learn how to use smart speakers, we conducted qualitative, semi-structured interviews with four...
Scholars have recently drawn attention to a range of controversial issues posed by the use of computer vision for automatically generating descriptions of people in images. Despite these concerns, automated image description has become an important tool to ensure equitable access to information for blind and low vision people. In this paper, we inv...
Although some technology companies have made significant strides towards the accessibility of their products, most consumer-facing technology products still pose access barriers to people with disabilities. Prior research has established that accessibility expertise is limited to a small number of practitioners in companies, but we do not know how...
Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already availa...
Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio fee...
Recent advances in head-mounted displays (HMDs) present an opportunity to design vision enhancement systems for people with low vision, whose vision cannot be corrected with glasses or contact lenses. We aim to understand whether and how HMDs can aid low vision people in their daily lives. We designed ForeSee, an HMD prototype that enhances people’...
To help both designers and people with tetraplegia fully realize the benefits of voice assistant technology, we conducted interviews with five people with tetraplegia in the home to understand how this population currently uses voice-based interfaces as well as other technologies in their everyday tasks. We found that people with tetraplegia use voic...
Students with specific learning disabilities (SLD) typically struggle in their K-12 math classes, limiting the likelihood of success in STEM fields. Private tutoring is reported to be effective at helping them succeed in math, but it is not a scalable solution. While many recent e-learning tools have aimed at personalizing math support in ways that...
Tactile maps are important tools for people with visual impairments (VIs). Teachers and orientation and mobility (O&M) specialists often design tactile maps to help their VI students and clients learn about geographic areas. To design these maps, a designer must use modeling software applications, which require professional training and rely on vis...
Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already avai...
Navigating stairs is a dangerous mobility challenge for people with low vision, who have a visual impairment that falls short of blindness. Prior research contributed systems for stair navigation that provide audio or tactile feedback, but people with low vision have usable vision and don't typically use nonvisual aids. We conducted the first explo...
Students with visual impairments struggle to learn various concepts in the academic curriculum because diagrams, images, and other visuals are not accessible to them. To address this, researchers have designed interactive 3D printed models (I3Ms) that provide audio descriptions when a user touches components of a model. In prior work, I3Ms were design...
Tactile models are important learning materials for visually impaired students. With the adoption of 3D printing technologies, visually impaired students and teachers will have more access to 3D printed tactile models. We designed Talkit++, an iOS application that plays audio and visual content as a user touches parts of a 3D print. With Talkit++,...
Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed naviga...
Knocking is a way of interacting with everyday objects. We introduce BeatIt, a novel technique that allows users to use passive, everyday objects to control a smart environment by recognizing the sounds generated from knocking on the objects. BeatIt uses a BeatSet, a series of percussive sound samples, to represent the sound signature of knocking o...
Like sighted people, visually impaired people want to share photographs on social networking services, but find it difficult to identify and select photos from their albums. We aimed to address this problem by incorporating state-of-the-art computer-generated descriptions into Facebook's photo-sharing feature. We interviewed 12 visually impaired pa...
Recognizing others is a major challenge for people with visual impairments (VIPs) and can hinder engagement in social activities. We present Accessibility Bot, a research prototype bot on Facebook Messenger, that leverages state-of-the-art computer vision and a user's friends' tagged photos on Facebook to help people with visual impairments recogni...
While our community has many active projects involving blind people, low vision is rarely addressed. People with low vision have functional vision, but their visual impairment adversely affects their daily life and it cannot be corrected with glasses or contact lenses. Over the last few years, we have been conducting research with this understudied...
The emergence of personal computing devices offers both a challenge and opportunity for displaying text: small screens can be hard to read, but also support higher resolution. To fit content on a small screen, text must be small. This small text size can make computing devices unusable, in particular to low-vision users, whose vision is not correct...
As three-dimensional printers become more available, 3D printed models can serve as important learning materials, especially for blind people who perceive the models tactilely. Such models can be much more powerful when augmented with audio annotations that describe the model and their elements. We present Markit and Talkit, a low-barrier toolkit f...
Three-dimensional printed models have the potential to serve as powerful accessibility tools for blind people. Recently, researchers have developed methods to further enhance 3D prints by making them interactive: when a user touches a certain area in the model, the model speaks a description of the area. However, these interactive models were limit...
Computer-based interactions increasingly pervade our everyday environments. Be it on a mobile device, a wearable device, a wall-sized display, or an augmented reality device, interactive systems often rely on the consumption, composition, and manipulation of text. The focus of this workshop is on exploring the problems and opportunities of text int...
People with low vision have a visual impairment that affects their ability to perform daily activities. Unlike blind people, low vision people have functional vision and can potentially benefit from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on mainstream commerci...
Graphics like maps and models are important learning materials. With recently developed projects, we can use 3D printers to make tactile graphics that are more accessible to blind people. However, current 3D printed graphics can only convey limited information through their shapes and textures. We present Magic Touch, a computer vision-based system...
Low vision is a pervasive condition in which people have difficulty seeing even with corrective lenses. People with low vision frequently use mainstream computing devices, however how they use their devices to access information and whether digital low vision accessibility tools provide adequate support remains understudied. We addressed these ques...
As small displays on devices like smartwatches become increasingly common, many people have difficulty reading the text on these displays. Vision conditions like presbyopia that result in blurry near vision make reading small text particularly hard. We design multiple different scripts for displaying English text, legible at small sizes even when b...
Visual impairments encompass a range of visual abilities. People with low vision have functional vision and thus their experiences are likely to be different from people with no vision. We sought to answer two research questions: (1) what challenges do low vision people face when performing daily activities and (2) what aids (high- and low-tech) do...
Visual search is a major challenge for low vision people. Conventional vision enhancements like magnification help low vision people see more details, but cannot indicate the location of a target in a visual search task. In this paper, we explore visual cues---a new approach to facilitate visual search tasks for low vision people. We focus on produ...
Three-dimensional models are important learning resources for blind people. With advances in 3D printing, 3D models are becoming more available. However, unlike visual or tactile graphics, there is no standard accessible way to label components in 3D models. We present a labeling toolkit that enables users to add and access audio labels to 3D print...
In this paper, we explore blind people's motivations, challenges, interactions, and experiences with visual content on Social Networking Services (SNSs). We present findings from an interview study of 11 individuals and a survey study of 60 individuals, all with little to no functional vision. Compared to sighted SNS users, our blind participants f...
Most low vision people have functional vision and would likely prefer to use their vision to access information. Recently, there have been advances in head-mounted displays, cameras, and image processing technology that create opportunities to improve the visual experience for low vision people. In this paper, we present ForeSee, a head-mounted vis...
Navigating indoors is challenging for blind people and they often rely on assistance from sighted people. We propose a solution for indoor navigation involving multi-purpose robots that will likely reside in many buildings in the future. In this report, we present a design for how robots can guide blind people to an indoor destination in an effecti...
There are many educational smartphone games for children, but few are accessible to blind children. We present BraillePlay, a suite of accessible games for smartphones that teach Braille character encodings to promote Braille literacy. The BraillePlay games are based on VBraille, a method for displaying Braille characters on a smartphone. BraillePl...
O-SNAP is a mobile application that explicitly supports in-person collaborative search by enabling users to physically signal their willingness to share and by facilitating face-to-face search-related communication. The Web extra at http://youtu.be/AKoITuxB9BY is a video in which author Meredith Ringel Morris discusses scenarios that can prompt col...
Often when people search the web from their phones, they do so collaboratively. We present a mobile application that supports in-person collaborative search by allowing users to physically signal a willingness to share. While the core application provides standard mobile search functionality, users rotate their devices to landscape orientation to i...
Much recent work has explored the challenge of nonvisual text entry on mobile devices. While researchers have attempted to solve this problem with gestures, we explore a different modality: speech. We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mo...
Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for a shelter, bench, newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this paper, we introduce and evaluate a new sca...
Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combina...
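The DigiTaps abstract above describes entering each digit with a tap or swipe of one, two, or three fingers. A minimal sketch of such a gesture-to-digit lookup follows; the specific mapping shown is an illustrative assumption, not the encoding published in the DigiTaps paper.

```python
# Illustrative DigiTaps-style digit entry: each event is a (gesture kind,
# finger count) pair. This table is a hypothetical example mapping, NOT the
# scheme from the DigiTaps paper; remaining digits could combine two events.
GESTURES = {
    ("tap", 1): 1, ("tap", 2): 2, ("tap", 3): 3,
    ("swipe", 1): 4, ("swipe", 2): 5, ("swipe", 3): 6,
}

def decode(gesture, fingers):
    """Return the digit for a (gesture, finger-count) event, or None."""
    return GESTURES.get((gesture, fingers))
```

Because the gestures can land anywhere on the screen, a decoder like this needs only the gesture kind and finger count, which is what makes the method demand little visual or auditory attention.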
A computerized system sends a series of touchscreen keyboard touch data to a touchscreen keyboard device that receives the touchscreen keyboard touch data and processes the received string of touchscreen keyboard touch data to simulate touches to a touchscreen of the touchscreen keyboard device. A touchscreen keyboard algorithm is applied to the si...
The time and labor demanded by a typical laboratory-based keyboard evaluation are limiting resources for algorithmic adjustment and optimization. We propose Remulation, a complementary method for evaluating touchscreen keyboard correction and recognition algorithms. It replicates prior user study data through real-time, on-device simulation. We hav...
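The Remulation abstract above describes replaying prior user-study touch data through a keyboard correction algorithm on-device and scoring the output. A minimal sketch of that evaluation loop, under the assumption that the decoder is a function from recorded touches to text (the names and accuracy metric here are illustrative, not the published tool):

```python
# Sketch of a Remulation-style replay evaluation: recorded touch sequences
# from an earlier study are fed to a keyboard decoder, and its output is
# compared against the text each participant intended to type.
def evaluate(decoder, sessions):
    """sessions: list of (touch_sequence, intended_text) pairs.
    Returns the fraction of sequences the decoder reproduced exactly."""
    correct = 0
    for touches, intended in sessions:
        if decoder(touches) == intended:
            correct += 1
    return correct / len(sessions)
```

Replaying the same recorded data against two decoder variants gives a like-for-like comparison without rerunning a lab study, which is the time saving the abstract points to.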
Blind mobile device users face security risks such as inaccessible authentication methods, and aural and visual eavesdropping. We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-p...
A new non-visual method of numeric entry into a smartphone is designed, implemented, and tested. Users tap the smartphone screen with one to three fingers or swipe the screen in order to enter numbers. No buttons are used--only simple, easy-to-remember gestures. A preliminary evaluation with sighted users compares the method to a standard accessible...
Text entry on smartphones is far slower and more error-prone than on traditional desktop keyboards, despite sophisticated detection and auto-correct algorithms. To strengthen the empirical and modeling foundation of smartphone text input improvements, we explore touch behavior on soft QWERTY keyboards when used with two thumbs, an index finger, and...
We present Input Finger Detection (IFD), a novel technique for nonvisual touch screen input, and its application, the Perkinput text entry method. With IFD, signals are input into a device with multi-point touches, where each finger represents one bit, either touching the screen or not. Maximum likelihood and tracking algorithms are used to detect...
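The IFD abstract above states that each finger represents one bit, touching the screen or not, so a multi-point touch forms a binary chord. A minimal sketch of that packing step follows; the bit order and any chord-to-character mapping are assumptions for illustration (Perkinput itself builds on Braille encodings).

```python
# Finger-as-bit chord packing, as described for Input Finger Detection (IFD):
# each tracked finger contributes one bit (touching = 1, lifted = 0).
# Bit order (finger i -> bit i) is an illustrative assumption.
def chord_value(touching):
    """Pack per-finger touch states (list of bools) into an integer chord."""
    value = 0
    for i, down in enumerate(touching):
        if down:
            value |= 1 << i
    return value
```

With three fingers this yields up to seven non-empty chords per hand; the detection challenge the abstract mentions is attributing each touch point to the correct finger, which the maximum-likelihood and tracking algorithms address.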
We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using...
Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence...
The Middle East Education Through Technology (MEET) program is a non-profit organization based in Jerusalem that aims to empower future Israeli and Palestinian leaders by teaching them computer science and business. From the perspective of MEET's instructors, this paper describes how MEET uses computer science education to foster professional and...
Smart phones typically support a range of GPS-enabled navigation services. However, most navigation services on smart phones are of limited use to people with visual disabilities. In this paper, we present iWalk, a speech-enabled local search and navigation prototype for people with low vision. iWalk runs on smart phones. It supports speech input,...
We conducted interviews with blind and deaf-blind people to understand how they use the public transit system. In this paper, we discuss key challenges our participants faced and present a tool we developed to alleviate these challenges. We built this tool on MoBraille, a novel framework that enables a Braille display to benefit from many features...
In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high frequency scrape of 36 pages of search results and analyze it by looking at the ra...