Conference Paper

Access overlays: Improving non-visual access to large touch screens for blind users


Abstract

Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.


... Few projects have been developed on tabletops. Kane, Morris, et al. (2011) designed three interaction techniques for maps displayed on large touch-screens. In a first bimanual technique called "Edge Projection", locations of map elements were projected onto the x- and y-axes along the left and lower edges of the screen. ...
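To make the edge-projection idea above concrete, here is a minimal sketch of how targets could be projected onto the screen edges; the Target type, field names, and the 40-pixel tolerance are illustrative assumptions, not taken from the paper.

```python
# Sketch of an edge-projection overlay: each on-screen target is "projected"
# onto the bottom (x-axis) and left (y-axis) edges, so a user can slide a
# finger along an edge, hear the projected items, and then move inward along
# the announced row/column to reach the actual target.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float  # pixels from the left edge
    y: float  # pixels from the bottom edge

def edge_projections(targets):
    """Return the markers to render/speak along the bottom and left edges."""
    bottom_edge = sorted((t.x, t.name) for t in targets)  # projected x positions
    left_edge = sorted((t.y, t.name) for t in targets)    # projected y positions
    return bottom_edge, left_edge

def items_near_edge_touch(targets, touch_x, tolerance=40):
    """Items whose x-projection lies under a finger on the bottom edge."""
    return [t.name for t in targets if abs(t.x - touch_x) <= tolerance]

if __name__ == "__main__":
    demo = [Target("Cafe", 120, 300), Target("Library", 480, 90)]
    print(edge_projections(demo))
    print(items_near_edge_touch(demo, 115))
```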
... Additional feedback can be provided with the vibrations of the devices, but the vibrations are not spatialized (i.e. the whole device vibrates), and hence cannot provide accurate cutaneous feedback. Consequently, most of the existing Digital Interactive Maps based on finger exploration are quite simple, and only display a very limited number of elements (Kane et al., 2011;Pielot, Poppinga, & Boll, 2010;Simonnet, Bothorel, Maximiano, & Thepaut, 2012;Su et al., 2010;Yairi, Azuma, & Takano, 2009). ...
... As we previously mentioned, it is a challenge to find and relate specific points when the exploration is tactile, especially when it is performed with only one contact point. For instance, Kane et al. (2011) developed three interaction techniques to help the user locate, relocate and relate points of interest on a map. The Talking TMAP prototype (Miele et al., 2006) provided assistance to find a location and calculate distances, but also provided a menu for modifying the settings (sensitivity, unit of measure, and speech rate). ...
Preprint
Tactile maps are commonly used to give visually impaired users access to geographical representations. Although those relief maps are efficient tools for the acquisition of spatial knowledge, they present several limitations and issues, such as the need to read braille. Several research projects have been conducted over the past three decades to improve access to maps using interactive technologies. In this chapter, we present an exhaustive review of interactive map prototypes. We classified existing interactive maps into two categories: Digital Interactive Maps (DIMs) that are displayed on a flat surface such as a screen; and Hybrid Interactive Maps (HIMs) that include both a digital and a physical representation. In each family, we identified several subcategories depending on the technology being used. We compared the categories and subcategories according to cost, availability and technological limitations, but also in terms of content, comprehension and interactivity. Then we reviewed a number of studies showing that those maps can support spatial learning for visually impaired users. Finally, we identified new technologies and methods that could improve the accessibility of graphics for visually impaired users in the future.
... The cardinal-directions speech strategy uses top, bottom, left, and right instructions to guide people with BVI to a specific position on large 2D surfaces. This strategy has been used in touch screens and tactile graphic readers (22,34), but it is also common in other technological contexts (24,30,35,36). More refined approaches extend beyond directional cues, incorporating proximity feedback through volume adjustment (22) or subtle modifications to speech instructions, such as using "go a little left" instead of "go left" when the user is close in proximity (21). ...
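As a concrete illustration of the cardinal-direction strategy with proximity-modified wording described above, a hedged sketch could look like the following; the function name, thresholds, and coordinate convention are assumptions for illustration, not taken from the cited systems.

```python
def direction_instruction(dx, dy, near=50):
    """Return a spoken instruction guiding the finger toward a target.

    dx, dy are target position minus finger position in screen pixels
    (x grows to the right, y grows upward). 'near' is an assumed pixel
    threshold below which the softer "go a little ..." phrasing is used.
    """
    if abs(dx) < 10 and abs(dy) < 10:          # assumed arrival tolerance
        return "you are on the target"
    # Guide along the axis with the larger remaining error first.
    if abs(dx) >= abs(dy):
        word = "right" if dx > 0 else "left"
        distance = abs(dx)
    else:
        word = "up" if dy > 0 else "down"
        distance = abs(dy)
    return f"go a little {word}" if distance < near else f"go {word}"

print(direction_instruction(120, -15))   # "go right"
print(direction_instruction(30, 5))      # "go a little right"
```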
... These UIs were carefully designed to improve the accuracy and efficiency of pinpointing elements, specifically tailored to meet the needs of individuals with BVI. The design choices were based on the widespread adoption of sonification and speech-based UIs in assistive technology, facilitating enhanced access to tactile graphics as supported by relevant studies (9,22,34,50,61,62). ...
... An explanation for this is that Sonoice uses more information than the other two methods, which some users saw as overwhelming, "My favourite was Sonar, but Sonoice is still a great option although it uses a lot of information which can confuse you!" (P1, VI, Sonar). Another factor that could have contributed to this may stem from the fact that assistive technology typically relies on either voice or sonification approaches (9,34,50,61,84), making a combination of these two methods less common and potentially leading to unfamiliarity or hesitation among users. ...
Article
Full-text available
Pinpointing elements on large tactile surfaces is challenging for individuals with blindness and visual impairment (BVI) seeking to access two-dimensional (2D) information. This is particularly evident when using 2D tactile readers, devices designed to provide 2D information using static tactile representations with audio explanations. Traditional pinpointing methods, such as sighted assistance and trial-and-error, are limited and inefficient, while alternative pinpointing user interfaces (UI) are still emerging and need advancement. To address these limitations, we develop three distinct navigation UIs using a user-centred design approach: Sonar (proximity-radar sonification), Voice (direct clock-system speech instructions), and Sonoice, a new method that combines elements of both. The navigation UIs were incorporated into the Tactonom Reader device to conduct a trial study with ten BVI participants. Our UIs exhibited superior performance and higher user satisfaction than the conventional trial-and-error approach, showcasing scalability to varied assistive technology and their effectiveness regardless of graphic complexity. The innovative Sonoice approach achieved the highest efficiency in pinpointing elements, but user satisfaction was highest with the Sonar approach. Surprisingly, participant preferences varied and did not always align with their most effective strategy, underscoring the importance of accommodating individual user preferences and contextual factors when choosing between the three UIs. While more extensive training may reveal further differences between these UIs, our results emphasise the significance of offering diverse options to meet user needs. Altogether, the results provide valuable insights for improving the functionality of 2D tactile readers, thereby contributing to the future development of accessible technology.
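The abstract above does not give implementation details, but the combination it describes for Sonoice (clock-system speech plus proximity sonification) can be sketched roughly as follows; the clock mapping, beep timing, and all names are assumptions rather than the authors' implementation.

```python
import math

def clock_direction(dx, dy):
    """Map the vector from finger to target onto a clock face (12 = up)."""
    angle = math.degrees(math.atan2(dx, dy)) % 360   # 0 degrees = straight up
    hour = round(angle / 30) % 12
    return 12 if hour == 0 else hour

def beep_interval(distance, max_distance=800.0):
    """Shorter pause between beeps as the finger approaches the target."""
    closeness = max(0.0, min(1.0, distance / max_distance))
    return 0.05 + 0.45 * closeness   # seconds; assumed range of 50-500 ms

dx, dy = 200, 200
dist = math.hypot(dx, dy)
print(f"target toward {clock_direction(dx, dy)} o'clock, "
      f"beep every {beep_interval(dist):.2f} s")
```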
... A few researchers have studied and compared different ways to interact in audio or vibrotactile output touch location-based systems for specific tasks. For example, Kane et al. investigate invocation of commands on mobile phones for blind users [63], Tennison and Gorlewicz [101] compare techniques and strategies to perceive and follow lines, Ramôa et al. [90] and Kane et al. [64] compare ways to direct users to elements on the 2D space, with the latter also covering browsing regions and retrieving localized detail. We integrated this knowledge into the design of our system. ...
... Dwell+Tap: Once a finger dwells on a node or a link, a tap by a second finger in the proximity of the dwelling finger causes the system to read (through speech synthesis) the name of the object and, for each consecutive tap, its attributes. We moved away from Kane et al.'s introduction of "Split Tap" [63,64,74], in which the second finger can tap anywhere, to enable multiple of these gestures simultaneously on several objects [DP4] and to make the system more robust against accidental taps [80]. ...
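The Dwell+Tap behaviour described above can be sketched as a small event handler; the data structures, proximity radius, and object names here are hypothetical and serve only to illustrate the idea, not the cited system's actual code.

```python
# While one finger dwells on an object, taps by a second finger *near* the
# dwelling finger read the object's name and then, on each further tap, one
# more attribute. Restricting taps to the neighbourhood of the dwelling finger
# (rather than anywhere on screen, as in Split Tap) allows several such
# gestures at once and filters accidental taps.

import math

PROXIMITY_RADIUS = 150   # assumed, in pixels
objects = {"pump_3": {"name": "Pump 3",
                      "attributes": ["status: running", "flow: 40 l/min"]}}

class DwellTapSession:
    def __init__(self, object_id, dwell_x, dwell_y):
        self.object_id = object_id
        self.pos = (dwell_x, dwell_y)
        self.taps = 0

    def on_tap(self, tap_x, tap_y):
        if math.dist(self.pos, (tap_x, tap_y)) > PROXIMITY_RADIUS:
            return None            # too far: ignore, probably another gesture
        info = objects[self.object_id]
        utterance = (info["name"] if self.taps == 0
                     else info["attributes"][(self.taps - 1) % len(info["attributes"])])
        self.taps += 1
        return utterance           # hand this string to speech synthesis

session = DwellTapSession("pump_3", 400, 300)
print(session.on_tap(430, 320))   # "Pump 3"
print(session.on_tap(430, 320))   # "status: running"
```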
... We incorporate Ramôa et al.'s approach, which found that voice-based guidance is the most efficient method for helping people pinpoint a location, and it does not require prior training [90]. Similar techniques are utilized in ImageAssist [83] and AccessOverlays [64], but without the axial and proximity enhancements. ...
... Touch screens possess many limitations for PVI in accessing graphical information, as they require accurate hand-eye coordination [12]. Software-based implementations, such as Apple's VoiceOver and Android's TalkBack, have been introduced, alongside many prior studies [9,8,24], to address this issue. However, PVI still encounter many difficulties in interacting with touch screen interfaces. ...
... Moreover, they did not re-investigate the accessibility status of current map applications. Alternatively, a number of researchers focused on investigating more efficient zooming operations [23,10,17,19] or other new techniques such as grid filters [9,2,26]. Unfortunately, many PVI still find the zoom function less useful than their sighted peers do. ...
... Multiple studies have focused on improving spatial map accessibility on touchscreen devices for PVI [9,8,24]. For example, Kane et al. [9] introduced three software-based access overlays (i.e., edge projection, neighborhood browsing and touch-and-speak) to enhance the accessibility of touch screen interfaces and compared them against Apple's VoiceOver. ...
Preprint
Full-text available
Recently, researchers have studied improving the accessibility of image exploration for people with visual impairments. While most of the studies have worked on making a tactile version of an image or providing audio feedback upon touch, it is still difficult to find and locate specific elements, particularly when the size of the target is small. In this paper, we focused on investigating how screen reader users interact with small items on an image on touchscreen devices. With this goal, we conducted a single-session user study with 12 participants who are screen reader users, where they were asked to share their prior experience with zoom functionality and map applications, try an existing map application, and perform image exploration tasks with three different exploration techniques. Findings suggest that providing a hint on which cell the item is located when the screen is divided into a 2 × 2 grid, as well as the zoom function, were considered helpful during the exploration. Based on the findings, we share implications for making image exploration tasks more accessible for screen reader users.
... One is based on submarine-radar sonification navigation (sonar-based), one streams audio based solely on the x- and y-axis coordinates of the target element (axis-based), and the last uses direct speech instruction commands (voice-based). These design choices are relevant to the BVI community since sonification- and speech-based UIs are often used in assistive technologies that enhance tactile graphics access for BVI [8][9][10][11]. The three user interfaces were implemented in the Tactonom Reader device of Inventivio GmbH and evaluated with 13 blind and visually impaired participants. Beyond comparing the strategies, we looked for interesting interactions and patterns that indicate how blind participants use audio navigation user interfaces. ...
... Others have designed software navigation user interfaces to assist BVI in locating elements on touch screens. In [11], the authors developed a user interface that uses four voice commands (top, bottom, left, right) to guide BVI to specific positions on large touch screens. Some did not develop a feature that directly guides the user to one element, but instead conveyed the context of the user's position within the whole touch screen, using stereo sounds and frequency changes to delineate the x and y position, respectively [15]. ...
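The position-context idea mentioned at the end of the snippet above (stereo for x, a tone's frequency for y) can be sketched as a simple mapping; the frequency range, two-octave choice, and function name are assumptions, not taken from the cited work.

```python
def position_to_audio(x, y, width, height, f_low=220.0, f_high=880.0):
    """Map a touch position to a stereo pan (x) and a tone frequency (y).

    pan: -1.0 = fully left, +1.0 = fully right.
    The exponential two-octave frequency range is an assumed choice.
    """
    pan = 2.0 * (x / width) - 1.0
    frequency = f_low * (f_high / f_low) ** (y / height)  # exponential pitch mapping
    return pan, frequency

# Centre of a 1920x1080 screen -> pan 0.0, frequency 440 Hz.
print(position_to_audio(x=960, y=540, width=1920, height=1080))
```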
... The justification for this similar rate between the voice and sonar methods involves the participants' past experiences with sound-based interfaces and their preference for each method. One design solution is to provide both user interfaces, sonar and voice, which has also been proposed for different navigation UIs on large touch screens [11]. Another factor was that the voice method was faster but repetitive and annoying to use, as some participants pointed out. ...
Article
Full-text available
Access to complex graphical information is essential when connecting blind and visually impaired (BVI) people with the world. Tactile graphics readers enable access to graphical data through audio-tactile user interfaces (UIs), but these have yet to mature. A challenging task for blind people is locating specific elements and areas in detailed tactile graphics. To this end, we developed three audio navigation UIs that dynamically guide the user's hand to a specific position using audio feedback. One is based on submarine sonar sounds, another relies on the target's x- and y-axis coordinates, and the last uses direct voice instructions. The UIs were implemented in the Tactonom Reader device, a new tactile graphic reader that enhances swell paper graphics with pinpointed audio explanations. To evaluate the effectiveness of the three dynamic navigation UIs, we conducted a within-subject usability test that involved 13 BVI participants. Beyond comparing the effectiveness of the different UIs, we observed and recorded the interaction of the visually impaired participants with the different navigation UIs to further investigate their behavioral patterns during the interaction. We observed that user interfaces that required the user to move their hand in a straight direction were more likely to provoke frustration and were often perceived as challenging for blind and visually impaired people. The analysis revealed that the voice-based navigation UI guides the participant to the target the fastest and does not require prior training. This suggests that a voice-based navigation strategy is a promising approach for designing an accessible user interface for the blind.
... There is a clear need for an intuitive, dynamic, general-purpose aid device, which augments and supports the BVI user in their IADLs while not impeding their senses and movements in a significant way. In the area of BVI aid devices, multi-modal feedback, consisting mostly of a combination of haptic and audio feedback, has proven to be successful [16,22]. As shown in prior studies [4,6], drone interfaces offer feedback that is more direct and thus easier to translate and interpret. ...
... A large focus area is the locomotion navigation of impaired people using technological aids with multi-modal interface solutions [4,31,48]. With the growth of touch interfaces over the last decade, tabletop [25,35,46] and touch screen interaction [16,17,22] have gained attention and play a vital role in current BVI-related research [29]. A common way to make visual cues more perceivable for BVI humans is to utilize other distinct sensory inputs of the human body (e.g., hearing, touch or haptics). ...
... In addition to simple spatial audio tones, spoken and read-out audio cues and directions are used for search and localization applications as well. In their research, Kane et al. [22] presented different audio interface options to enable BVI users to interact with touch screens more easily. Some of the presented approaches (e.g. ...
... Previous research highlights several benefits of using interactive tabletops: improving collaborative learning [37,55], supporting reflection-type conversations [32], enhancing social interaction [18], fostering creativity as well as engagement [10,17]. However, people with visual impairments can struggle to engage with such large surfaces [23,39]. We argue that not being able to use interactive tabletops, for example, due to visual impairments, and consequently participate in these group activities, can be a vehicle of social exclusion. ...
... However, when considering the spatial awareness of interface elements, these accessibility services are mostly designed for smaller form-factors, rather than large collaborative surfaces where the ability to locate items and establish relationships between them is more challenging. Examples of applications for such devices include exploring maps [39] and anatomic models in educational settings, or mind maps in brainstorming sessions [53]. In these, the ability to locate artifacts without losing spatial awareness and to relate them is relevant. ...
... In these, the ability to locate artifacts without losing spatial awareness and to relate them is relevant. While there is a large body of work on touchscreen accessibility, research has been restricted to single-user interaction [15,21,23,39]. ...
Article
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts were made to make interactive tabletops accessible to blind people, the potential to use them in collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary on how much information about the partners' actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users.
... Researchers have actively been investigating the target acquisition task as a challenge referred to in multiple ways, including last meter, arm's reach distance, peripersonal space/reaching, and the haptic space problem; the research is often situated in mundane daily activities such as grocery shopping [13,17,44,56,94], interacting with targets of interest on a touch-based medium [31,37,38,77,78], aiming a camera to take photos [34,43,48], and simply localizing and grasping an object that is within the arm's reachable range [24,45,76,86]. Various supporting technologies have been explored, including a glove, finger-worn wearable devices, and mobile devices. ...
... Some researchers evaluated the effectiveness of these auditory types of feedback for the task of exploring and understanding 2D surfaces. The work of Kane et al. [37,38], Stearns et al. [78], Oh et al. [59], and Shilkrot et al. [72] are examples. The task of guiding a hand to a target so as to touch or grasp the object has also employed both verbal and non-verbal cues [12,23,24,39,76,82,86]. ...
Article
Full-text available
Locating and grasping objects is a critical task in people's daily lives. For people with visual impairments, this task can be a daily struggle. The support of augmented reality frameworks in smartphones can overcome the limitations of current object detection applications designed for people with visual impairments. We present AIGuide, a self-contained smartphone application that leverages augmented reality technology to help users locate and pick up objects around them. We conducted a user study to investigate the effectiveness of AIGuide as a visual prosthetic for providing guidance; to compare it to other assistive technology form factors; to investigate the use of multimodal feedback; and to gather feedback about the overall experience. We gathered performance data and participants' reactions and analyzed videos to understand users' interactions with the nonvisual smartphone user interface. Our results show that AIGuide is a promising technology to help people with visual impairments locate and acquire objects in their daily routine. The benefits of AIGuide may be enhanced with appropriate interaction design.
... Apart from screen readers, there have been different works that aimed at facilitating non-visual access to information and interaction with UI elements. The work of Kane et al. [39] revolved around aiding the user while navigating the touchscreen to perceive the spatial characteristics of the content items in a more convenient way. This technique was referred to as "Access Overlays", offering three navigation variations: edge projection, neighborhood browsing, and touch and speak [39]. ...
... The work of Kane et al. [39] revolved around aiding the user while navigating the touchscreen to perceive the spatial characteristics of the content items in a more convenient way. This technique was referred to as "Access Overlays", offering three navigation variations: edge projection, neighborhood browsing, and touch and speak [39]. In edge projection, items are placed towards the edges of the touchscreen. ...
Article
Accessibility revolves around building products, including electronic devices and digital content, so that diverse users can conveniently utilize them, irrespective of their capabilities. In recent years, the concept of touchscreen accessibility has gained remarkable attention, especially given the considerable reliance on mobile touchscreen devices (MTDs) for information acquisition and dissemination that we witness nowadays. For users who are visually impaired, MTDs unlock different opportunities for independence and functioning. Thus, with the increasing ubiquity of MTDs and their potential extensive utility for all demographics, it becomes paramount to ensure that these devices and the content delivered on them are accessible. And while it might seem straightforward to achieve accessibility on MTDs, attaining this outcome is governed by an interplay between different elements. These involve platforms' (i.e., operating systems') built-in support for accessibility features, content rendering modalities and structures pertaining to user needs and the peculiarities of MTDs as informed by standard accessibility guidelines, user studies uncovering preferences and best practices while interacting with MTDs, national legislation and policies, and the use of third-party devices such as assistive technologies. In this paper, mobile touchscreen accessibility for users who are visually impaired is surveyed with a focus on three aspects: (1) the existing built-in accessibility features within popular mobile platforms; (2) the nature of nonvisual interaction and how users who are visually impaired access, navigate, and create content on MTDs; and (3) the studies that tackled different issues pertaining to touchscreen accessibility, such as extraction of user needs and interaction preferences, identification of the most critical accessibility problems encountered on MTDs, integration of mobile accessibility into standard accessibility guidelines, and investigation of existing guidelines in terms of sufficiency and appropriateness.
... TouchCam, for example, is a camera-based wearable device worn on a finger that is used to access one's personal touchscreen devices by interacting with the skin surface, providing extra tactile and proprioceptive feedback. Physical overlays that can be placed on top of a touchscreen were also investigated [6,30]. For instance, TouchPlates [6] allows people with visual impairments to interact with touchscreen devices by placing tactile overlays on top of the touch display. ...
... On the other hand, shape, size, and line-length information were conveyed to users for geometric objects. However, little study has been done on other types of images, such as photographs and touchscreen user interfaces, although the specific locations and spatial relationships of objects within an image are considered important [20,30]. To identify the types of information that users are interested in for each type of image, adopting recommendation techniques [86,87] can be a solution for providing user-specific content based on users' preferences, interests, and needs. ...
Article
Full-text available
A number of studies have been conducted to improve the accessibility of images using touchscreen devices for screen reader users. In this study, we conducted a systematic review of 33 papers to get a holistic understanding of existing approaches and to suggest a research road map given identified gaps. As a result, we identified the types of images, visual information, input devices and feedback modalities that were studied for improving image accessibility using touchscreen devices. Findings also revealed that there is little study of how the generation of image-related information can be automated. Moreover, we confirmed that the involvement of screen reader users is mostly limited to evaluations, while input from target users during the design process is particularly important for the development of assistive technologies. Then we introduce two of our recent studies on the accessibility of artwork and comics, AccessArt and AccessComics, respectively. Based on the identified key challenges, we suggest a research agenda for improving image accessibility for screen reader users.
... For instance, Zhong et al. [12] generated alt text for images on the web that are identified as important using crowdsourcing. On the other hand, Stangl et al. [11] used natural language processing and computer vision techniques to automatically extract visual descriptions (alt text) for clothes on online shopping websites. Unlike tactile approaches, these software-based approaches are more scalable, especially with the help of crowds or advanced machine learning techniques. ...
... For example, geographic information such as building locations, direction, and distance was offered for map and graph images, and shape, size, and line-length information for geometric objects. However, little has been studied about other types of images, such as photographs and touchscreen user interfaces, although the specific locations and spatial relationships of objects within such images are considered important [19,29]. Moreover, the majority of the studies have prioritized images that contain useful information (e.g., facts, knowledge) over images that can be interpreted subjectively, differently from one person to another, such as artwork, using touchscreen devices. ...
Preprint
Full-text available
A number of studies have been conducted to improve the accessibility of images using touchscreen devices for screen reader users. In this study, we conducted a systematic review of 33 papers to get a holistic understanding of existing approaches and to suggest a research road map given identified gaps. As a result, we identified the types of images, visual information, input devices and feedback modalities that were studied for improving image accessibility using touchscreen devices. Findings also revealed that little has been studied on how to automate the generation of image-related information, and that screen reader users play important roles during evaluation but not during the design process. Then we introduce two of our recent studies on the accessibility of artwork and comics, AccessArt and AccessComics, respectively. Based on the identified key challenges, we suggest a research agenda for improving image accessibility for screen reader users.
... Previous research has shown the ability of both tactile (and interactive) maps (Brock, 2013;Ducasse et al., 2018;Papadopoulos et al., 2017b;Zeng et al., 2014) and virtual navigation (Chebat et al., 2017;Lahav and Mioduser, 2008) to convey spatial knowledge of the environment to PVI. However, interactive maps usually require larger devices and/or tactile overlays (Ducasse et al., 2018;Guerreiro et al., 2015;Kane et al., 2011) and most virtual navigation solutions require specialized equipment (Kreimeier and Götzelmann, 2019;Yatani et al., 2012;Zhao et al., 2018). ...
... Tactile maps and 3-D models enable blind people to explore a map/model with their fingers and are known to provide accurate spatial representations of an environment (Herman et al., 1983;Wiener et al., 2010). Recent research has been trying to ease access to such solutions, for instance by creating customizable 3D printed maps or tactile displays (Giraud et al., 2017;Leo et al., 2016;Taylor et al., 2016), or by making use of touchscreen devices to enable interactive map exploration (Guerreiro et al., 2015;Kane et al., 2011;Su et al., 2010), often using screen overlays (Brock et al., 2015;Ducasse et al., 2018) or special devices (Zeng et al., 2014). However, most solutions still have low resolution, which makes it difficult to present detailed information, or require larger or very specific devices. ...
Article
Independent navigation is challenging for blind people, particularly in unfamiliar environments. Navigation assistive technologies try to provide additional support by guiding users or increasing their knowledge of the surroundings, but accurate solutions are still not widely available. Based on this limitation and on the fact that spatial knowledge can also be acquired indirectly (prior to navigation), we developed an interactive virtual navigation app where users can learn unfamiliar routes before physically visiting the environment. Our main research goals are to understand the acquisition of route knowledge through smartphone-based virtual navigation and how it evolves over time; its ability to support independent, unassisted real-world navigation of short routes; and its ability to improve user performance when using an accurate in-situ navigation tool (NavCog). With these goals in mind, we conducted a user study where 14 blind participants virtually learned routes at home for three consecutive days and then physically navigated them, both unassisted and with NavCog. In virtual navigation, we analyzed the evolution of route knowledge and we found that participants were able to quickly learn shorter routes and gradually increase their knowledge in both short and long routes. In the real-world, we found that users were able to take advantage of this knowledge, acquired completely through virtual navigation, to complete unassisted navigation tasks. When using NavCog, users tend to rely on the navigation system and less on their prior knowledge and therefore virtual navigation did not significantly improve users' performance.
... Incorporating technology assistance can also make touch tables easier to use (Mendes et al., 2020). Furthermore, interface accessibility can also involve studying the user experience when the interaction spaces are limited, especially for users with a limited field of vision (Kane et al., 2011). Experience and familiarity with interaction methods improve the use of interfaces. ...
Article
Touch tables have become a common interface in recent decades. However, few studies explore their use in therapeutic rehabilitation. This review analyzes publications identified through a systematic search in four literature databases between 1970 and 2023. A total of 29 publications involving three therapeutic uses of touch tables are presented: 11 studies address work on physical remediation, 14 on cognitive training, and 11 on collaborative social skills. Results indicate that research on touch tables in rehabilitation is limited in quantity, but that touch tables are regularly proposed for therapy. Their use has produced relatively beneficial effects in motor, cognitive and social therapy. This review highlights the essential role of accessibility for rehabilitation users, whether they are patients or health practitioners. We conclude by presenting recommendations for research and practices, particularly around inclusive approaches and adaptive techniques to further integrate touch tables into the care pathway.
... However, most of them do not support speech. The ones that support speech are usually developed for sonification in data visualizations such as charts [1,11,13,30,31,40,41], maps [2,4,15,16,27,37], and other graphics and diagrams [6,14,18,24,25,36,38]. These tools provide speech as a method to communicate fundamental data by directly vocalizing it, or by articulating the associated labels. ...
... Kane, Bigham & Wobbrock (2008) developed Slide Rule, which enables multi-touch gestures combined with audio output when interacting with mobile devices. Further, Kane et al. (2011) proposed a new framework to facilitate access to touch screen devices using gestures. The BlindSight system by Li, Baudisch & Hinckley (2008) is based on the mobile phone's physical keypad and is designed to provide access to a non-visual menu during a phone call. ...
Article
Full-text available
In this study, we examine different approaches to the presentation of Y coordinates in mobile auditory graphs, including the representation of negative numbers. These studies involved both normally sighted and visually impaired users, as there are applications where normally sighted users might employ auditory graphs, such as the unseen monitoring of stocks or fuel consumption in a car. Multi-reference sonification schemes are investigated as a means of improving the performance of mobile non-visual point estimation tasks. The results demonstrated that both populations are able to carry out point estimation tasks with a good level of performance when presented with auditory graphs using multiple reference tones. Additionally, visually impaired participants performed better on graphs represented in this format than normally sighted participants. This work also implements the component representation approach for negative numbers, in which the digit is mapped using the same positive mapping reference and a sign is added before the digit, leading to better accuracy for the polarity sign. This work contributes to the design process of mobile auditory devices in human-computer interaction and proposes a methodological framework for improving auditory graph performance in graph reproduction.
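To illustrate the mapping described above (multiple reference tones plus a sign cue for negative values), here is a minimal sketch; the frequency range, the linear mapping, and the reference values are assumptions chosen for the example, not the study's parameters.

```python
def y_value_to_tones(value, y_max=100.0, f_min=200.0, f_max=800.0,
                     reference_values=(0.0, 50.0, 100.0)):
    """Sketch of a multi-reference auditory mapping for a Y value.

    Returns (sign_cue, value_frequency, reference_frequencies). Negative
    numbers follow the component-representation idea: the magnitude is
    mapped with the same positive scale and a separate sign cue is
    presented before the digit.
    """
    def to_freq(v):
        v = max(0.0, min(y_max, v))
        return f_min + (f_max - f_min) * (v / y_max)   # linear pitch mapping

    sign_cue = "minus" if value < 0 else None
    return sign_cue, to_freq(abs(value)), [to_freq(r) for r in reference_values]

print(y_value_to_tones(-25.0))   # ('minus', 350.0, [200.0, 500.0, 800.0])
```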
... However, most of them do not support speech. The ones that support speech are usually developed for sonification in data visualizations such as charts [1,11,13,30,31,40,41], maps [2,4,15,16,27,37], and other graphics and diagrams [6,14,18,24,25,36,38]. These tools provide speech as a method to communicate fundamental data by directly vocalizing it, or by articulating the associated labels. ...
Preprint
Sonification serves as a powerful tool for data accessibility, especially for people with vision loss. Among various modalities, speech is a familiar means of communication similar to the role of text in visualization. However, speech-based sonification is underexplored. We introduce SpeechTone, a novel speech-based mark for data sonification and extension to the existing Erie declarative grammar for sonification. It encodes data into speech attributes such as pitch, speed, voice and speech content. We demonstrate the efficacy of SpeechTone through three examples.
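The abstract above says SpeechTone encodes data into speech attributes (pitch, speed, voice, and content). Without access to the Erie grammar itself, the following is only a rough, hypothetical mapping of a value to generic TTS parameters; none of these names come from SpeechTone or Erie.

```python
def speech_mark(value, v_min, v_max, label):
    """Hypothetical mapping of a data value to speech attributes.

    This is not the SpeechTone/Erie API; it only illustrates the idea of
    encoding a value into pitch, rate, and spoken content for a TTS engine.
    """
    t = (value - v_min) / (v_max - v_min)      # normalise to 0..1
    return {
        "content": f"{label}: {value}",        # speech content carries label and value
        "pitch": 0.8 + 0.4 * t,                # relative pitch, assumed 0.8-1.2 range
        "rate": 0.9 + 0.3 * t,                 # relative speaking rate
        "voice": "default",
    }

print(speech_mark(42, 0, 100, "temperature"))
```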
... In the last few years, some researchers have been developing accessible UIs for VI people (accessibility-driven, blind-friendly user interfaces) [7][8][9][10][11][12][13], and some solutions have been proposed for exploring the touchscreen with the aim of improving the user experience. Those proposals can be classified into three types: screen readers, logical partitions with adaptive UIs, and vibrotactile feedback. ...
Article
Full-text available
We report the results of a study on the learnability of the locations of haptic icons on smartphones. The aim was to study the influence of the use of complex and different vibration patterns associated with haptic icons compared to the use of simple and equal vibrations on commercial location-assistance applications. We studied the performance of users with different visual capacities (visually impaired vs. sighted) in terms of the time taken to learn the icons’ locations and the icon recognition rate. We also took into consideration the users’ satisfaction with the application developed to perform the study. The experiments concluded that the use of complex and different instead of simple and equal vibration patterns obtains better recognition rates. This improvement is even more noticeable for visually impaired users, who obtain results comparable to those achieved by sighted users.
... In addition to these studies in psychology, recent studies on interactive 3D printed maps [18,24] and digital interactive graphics [2,11,16,17,26] studied exploration behaviors involving one or two hands. For instance, Guerreiro et al. [17] observed various two-handed strategies including symmetrical hand movements when exploring the graphic. ...
... It pre-processes data with an alignment step [18] followed by a ranking step [8], and the resulting aligned-and-ranked data can be analyzed with an omnibus test, typically an ANOVA. Since its introduction to HCI by Wobbrock et al. [46] in 2011, the ART procedure has quickly become a popular technique within HCI, and many HCI venues have published papers that use the ART in their analyses (e.g., CHI [2,13,14], ASSETS [3], UIST [21,37]). Wobbrock et al.'s ARTool [46] has also been used in publications in several other fields (e.g., cellular biology [7], dentistry [34], zoology [9], and cardiology [12]), and has been cited nearly 900 times thus far. ...
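The align-then-rank step mentioned above can be sketched for one main effect of a two-factor design; this is a conceptual illustration using pandas under the standard ART alignment formula, not the ARTool implementation, and the column names and data are made up.

```python
import pandas as pd

def align_and_rank_main_effect(df, response, factor, other):
    """Aligned Rank Transform step for the main effect of `factor`.

    Alignment: residual (response minus its cell mean over factor x other)
    plus the estimated main effect of `factor` (level mean minus grand mean).
    The aligned values are then ranked; a subsequent ANOVA on the ranked
    column should interpret only the effect of `factor`.
    """
    grand_mean = df[response].mean()
    cell_mean = df.groupby([factor, other])[response].transform("mean")
    level_mean = df.groupby(factor)[response].transform("mean")
    aligned = df[response] - cell_mean + (level_mean - grand_mean)
    return aligned.rank(method="average")

data = pd.DataFrame({
    "technique": ["A", "A", "B", "B", "A", "B"],
    "device": ["tablet", "table", "tablet", "table", "table", "tablet"],
    "time": [12.1, 14.3, 9.8, 11.0, 13.5, 10.2],
})
data["art_time_technique"] = align_and_rank_main_effect(data, "time", "technique", "device")
print(data)
```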
Preprint
Full-text available
Data from multifactor HCI experiments often violates the normality assumption of parametric tests (i.e., nonconforming data). The Aligned Rank Transform (ART) is a popular nonparametric analysis technique that can find main and interaction effects in nonconforming data, but leads to incorrect results when used to conduct contrast tests. We created a new algorithm called ART-C for conducting contrasts within the ART paradigm and validated it on 72,000 data sets. Our results indicate that ART-C does not inflate Type I error rates, unlike contrasts based on ART, and that ART-C has more statistical power than a t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also extended a tool called ARTool with our ART-C algorithm for both Windows and R. Our validation had some limitations (e.g., only six distribution types, no mixed factorial designs, no random slopes), and data drawn from Cauchy distributions should not be analyzed with ART-C.
... The above techniques are best suited for traditional tablet sizes (≈ 9-10 inches). While larger touch surfaces have been investigated and multi-touch techniques proposed (e.g., Reference [38]), most of the larger tablets often do not contain vibrotactile feedback and are more expensive and less portable than traditional tablets. We note, however, that with the advent of Bluetooth vibration motors, as well as wearable mechanisms, the lack of vibration capabilities in larger screens may not necessarily be a problem if individuals are open to augmenting their experience with additional hardware. ...
Article
Full-text available
With content rapidly moving to the electronic space, access to graphics for individuals with visual impairments is a growing concern. Recent research has demonstrated the potential for representing basic graphical content on touchscreens using vibrations and sounds, yet few guidelines or processes exist to guide the design of multimodal, touchscreen-based graphics. In this work, we seek to address this gap by synergizing our collective research efforts over the past eight years and implementing our findings into a compilation of recommendations, which we validate through an iterative design process and user study. We start by reviewing previous work and then collate findings into a set of design guidelines for generating basic elements of touchscreen-based multimodal graphics. We then use these guidelines to generate exemplary graphics in mathematics, specifically bar charts and geometry concepts. We discuss the iterative design process of moving from guidelines to actual graphics and highlight challenges. We then present a formal user study with 22 participants with visual impairments, comparing learning performance when using touchscreen-rendered graphics versus embossed graphics. We conclude with qualitative feedback from participants on the touchscreen-based approach and offer areas of future investigation as these recommendations are expanded to include more complex graphical concepts.
... Another system, called Tikisi For Maps, showed that 12 BVI teens could effectively learn and navigate layered map information and perform complex map scaling operations based on audio and speech cues given during exploration of a tablet's touchscreen (Bahram, 2013). Although search performance varied, Kane et al. (2011) showed that 14 BVI participants could learn the spatial relations of auditory targets via bimanual exploration of a large interactive tabletop touchscreen. ...
Article
Full-text available
This article starts by discussing the state of the art in accessible interactive maps for use by blind and visually impaired (BVI) people. It then describes a behavioral experiment investigating the efficacy of a new type of low-cost, touchscreen-based multimodal interface, called a vibro-audio map (VAM), for supporting environmental learning, cognitive map development, and wayfinding behavior on the basis of nonvisual sensing. In the study, eight BVI participants learned two floor-maps of university buildings, one using the VAM and the other using an analogous hardcopy tactile map (HTM) overlaid on the touchscreen. They were asked to freely explore each map, with the task of learning the entire layout and finding three hidden target locations. After meeting a learning criterion, participants performed an environmental transfer test, where they were brought to the corresponding physical layout and were asked to plan/navigate routes between learned target locations from memory, i.e., without access to the map used at learning. The results using Bayesian analyses aimed at assessing equivalence showed highly similar target localization accuracy and route efficiency performance between conditions, suggesting that the VAM supports the same level of environmental learning, cognitive map development, and wayfinding performance as is possible from interactive displays using traditional tactile map overlays. These results demonstrate the efficacy of the VAM for supporting complex spatial tasks without vision using a commercially available, low-cost interface and open the door to a new era of mobile interactive maps for spatial learning and wayfinding by BVI navigators.
... In addition to these studies in psychology, recent studies on interactive 3D printed maps [18,24] and digital interactive graphics [2,11,16,17,26] studied exploration behaviors involving one or two hands. For instance, Guerreiro et al. [17] observed various two-handed strategies including symmetrical hand movements when exploring the graphic. ...
Conference Paper
Graphics are useful in many contexts of daily life (education, mobility, etc.) and are spreading widely in digital media. However, accessing digital graphical information remains challenging for people with visual impairments. In this study, we were interested in the transmission of vibrotactile cues allowing users to explore digital graphics more easily and quickly. We designed a vibrotactile matrix fixed on the hand for presenting directional information. Two vibrotactile displays - Spatiotemporal Vibrotactile Pattern (SVP) and Apparent Tactile Motion (ATM) - were compared. A study with sixteen blindfolded participants examined the efficiency of and user preferences for the proposed interaction techniques and showed that recognition accuracy with SVP is significantly better. A final study involving six participants with visual impairments confirmed that vibrotactile directional cues improve the exploration of digital graphics.
... To assess the performance of RoboGraphics relative to other approaches, we presented charts both via RoboGraphics and via a control condition consisting of the touch screen and passive tactile overlay, but with no robots. In this condition, the system provided spoken directions toward the nearest data point [12]. When a user placed their finger on the display, the system began to guide the user in the form of audio instructions ("up, up, left, left, ding!") to the nearest data point. ...
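The control condition described above (spoken step instructions toward the nearest data point, ending with a "ding") can be sketched as follows; the step size, point coordinates, and function names are assumptions for illustration, and the real system reacted to detected finger movement rather than precomputing the sequence.

```python
import math

def nearest_point(finger, points):
    """Return the data point closest to the finger position."""
    return min(points, key=lambda p: math.dist(finger, p))

def spoken_steps(finger, target, step=40):
    """Discrete instructions ("up", "left", ...) until the target is reached."""
    x, y = finger
    tx, ty = target
    words = []
    while (x, y) != (tx, ty):
        if abs(tx - x) >= abs(ty - y):
            words.append("right" if tx > x else "left")
            x += step if tx > x else -step
            if abs(tx - x) < step:
                x = tx          # snap when within one step of the target column
        else:
            words.append("up" if ty > y else "down")
            y += step if ty > y else -step
            if abs(ty - y) < step:
                y = ty          # snap when within one step of the target row
    words.append("ding!")
    return words

points = [(300, 200), (600, 480)]
print(spoken_steps((260, 320), nearest_point((260, 320), points)))
```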
Conference Paper
Tactile graphics are a common way to present information to people with vision impairments. Tactile graphics can be used to explore a broad range of static visual content but aren't well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots. We introduce a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat screen, audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
... Other prototypes were designed on tablets and provided access to maritime, choropleth or city maps [8,25,37]. Some projects have also been developed on large tabletops [26]. One persistent issue with DIMs based on multitouch is that feedback can be ambiguous: when multiple fingers are moving on the surface, the user does not know which finger triggers the feedback. ...
Chapter
Full-text available
Digital Interactive Maps on touch surfaces are a convenient alternative to physical raised-line maps for users with visual impairments. To compensate for the absence of passive tactile information, they provide vibrotactile and auditory feedback. However, this feedback is ambiguous when using multiple fingers since users may not identify which finger triggered it. To address this issue, we explored the use of bilateral feedback, i.e. collocated with each hand, for two-handed map exploration. We first introduced a design space of feedback for two-handed interaction combining two dimensions: spatial location (unilateral vs. bilateral feedback) and similarity (same vs. different feedback). We implemented four techniques resulting from our design space, using one or two smartwatches worn on the wrist (unilateral vs. bilateral feedback respectively). A first study with fifteen blindfolded participants showed that bilateral feedback outperformed unilateral feedback and that feedback similarity has little influence on exploration performance. Then we did a second study with twelve users with visual impairments, which confirmed the advantage of two-handed vs. one-handed exploration, and of bilateral vs. unilateral feedback. The results also bring to light the impact of feedback on exploration strategies.
Chapter
Our research has focused on improving the accessibility of mobile applications for blind or low vision (BLV) users, particularly with regard to images. Previous studies have shown that using spatial interaction can help BLV users create a mental model of the positions of objects within an image. In order to address the issue of limited image accessibility, we have developed three prototypes that utilize haptic feedback to reveal the positions of objects within an image. These prototypes use audio-haptic binding to make the images more accessible to BLV users. We also conducted the first user study to evaluate the memorability, efficiency, preferences, and comfort level with haptic feedback of our prototypes for BLV individuals trying to locate multiple objects within an image. The results of the study indicate that the prototype combining haptic feedback with both audio and caption components was more accessible and preferred over the other prototypes. Our work contributes to the advancement of digital image technologies that utilize haptic feedback to enhance the experience of BLV users. Keywords: Haptics, Touchscreens, Smartphones, Accessibility
Article
Gestural interaction has evolved from a set of novel interaction techniques developed in research labs to a dominant interaction modality used by millions of users every day. Despite its widespread adoption, the design of appropriate gesture vocabularies remains a challenging task for developers and designers. Existing research has largely used Expert-Led, User-Led, or Computationally-Based methodologies to design gesture vocabularies. These methodologies leverage the expertise, experience, and capabilities of experts, users, and systems to fulfill different requirements. In practice, however, none of these methodologies provide designers with a complete, multi-faceted perspective of the many factors that influence the design of gesture vocabularies, largely because a singular set of factors has yet to be established. Additionally, these methodologies do not identify or emphasize the subset of factors that are crucial to consider when designing for a given use case. Therefore, this work reports on the findings from an exhaustive literature review that identified 13 factors crucial to gesture vocabulary design and examines the evaluation methods and interaction techniques commonly associated with each factor. The identified factors also enable a holistic examination of existing gesture design methodologies from a factor-oriented viewpoint, highlighting the strengths and weaknesses of each methodology. This work closes with proposals for future research directions: developing an iterative, user-centered, factor-centric gesture design approach and establishing an evolving ecosystem of factors that are crucial to gesture design.
Article
With the rapid development of natural human-computer interaction technologies, gesture-based interfaces have become popular. Although gesture interaction has received extensive attention from both academia and industry, most existing studies focus on hand gesture input, leaving foot-gesture-based interfaces underexplored, especially in scenarios where the user's hands are occupied with other interaction tasks, such as washing their hair in smart shower rooms. In such scenarios, users often have to perform interactive tasks (e.g., controlling water volume) with their eyes closed when water and shampoo flow from the head toward the eyes. One possible way to address this problem is to use eyes-free (rather than eyes-engaged), foot-gesture-based interactive techniques that allow users to interact with the smart shower system without visual involvement. In our online survey, 71.60% of the participants (58/81) reported a need for foot-gesture-based, eyes-free interaction during showers. To this end, we conducted a three-phase study to explore foot-gesture-based, eyes-free interaction in smart shower rooms. We first derived a set of user-defined foot gestures for eyes-free interaction in smart shower rooms. Then, we proposed a taxonomy for foot gesture interaction. Our findings indicated that end-users preferred single-foot (76.1%), atomic (73.3%), deictic (65.0%), and dynamic (76.1%) foot gestures, which markedly differs from the results reported by previous studies on user-defined hand gestures. In addition, most of the user-defined dynamic foot gestures involve atomic movements perpendicular to the ground (40.1%) or parallel to the ground (27.7%). We finally distilled a set of concrete guidelines for foot gesture interfaces based on observing end-users' mental models and behaviors when interacting with foot gestures. Our research can inform the design and development of foot-gesture-based interaction techniques for applications such as smart homes, intelligent vehicles, VR games, and accessibility design.
Conference Paper
Gliding a finger on a touchscreen to reach a target, that is, touch exploration, is a common selection method of blind screen-reader users. This paper investigates their gliding behavior and presents a model of their motor performance. We discovered that the gliding trajectories of blind people are a mixture of two strategies: 1) ballistic movements with iterative corrections relying on non-visual feedback, and 2) multiple sub-movements separated by stops, concatenated until the target is reached. Based on this finding, we propose the mixture pointing model, a model that relates movement time to the distance and width of the target. The model outperforms extant models, improving R2 from 0.65 for Fitts' law to 0.76, and is superior in cross-validation and information criteria. The model advances understanding of gliding-based target selection and serves as a tool for designing interface layouts for screen-reader-based touch exploration.
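For reference, the Fitts' law baseline against which the mixture pointing model is compared (R2 = 0.65 in the abstract above) is commonly written in its Shannon formulation, shown below; the exact functional form of the mixture model itself is not given in the abstract and is not reproduced here.

```latex
% Shannon formulation of Fitts' law: movement time MT as a function of
% target distance D and target width W, with empirically fitted constants a, b.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```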
Conference Paper
Full-text available
Smartphones are shipped with built-in screen readers and other accessibility features that enable blind people to autonomously learn and interact with the device. However, the process is not seamless, and many face difficulties in the adoption and path to becoming more experienced users. In the past, games like Minesweeper served to introduce and train people in the use of the mouse, from its left and right click to precision pointing required to play the game. Smartphone gestures (and particularly screen reader gestures) pose similar challenges to the ones first faced by mouse users. In this work, we explore the use of games to inconspicuously train gestures. We designed and developed a set of accessible games, enabling users to practice smartphone gestures. We evaluated the games with 8 blind users and conducted remote interviews. Our results show how purposeful accessible games could be important in the process of training and discovering smartphone gestures, as they offer a playful method of learning. This, in turn, increases autonomy and inclusion, as this process becomes easier and more engaging.
Chapter
Block-based programming applications, such as MIT’s Scratch and Blockly Games, are commonly used to teach K-12 students to code. Due to the COVID-19 pandemic, many K-12 students are attending online coding camps, which teach programming using these block-based applications. However, these applications are not accessible to the Blind/Low Vision (BLV) population since they neither produce audio output nor are screen reader accessible. In this paper, we describe a solution to make block-based programming accessible to BLV students using Google’s latest Keyboard Navigation and present its evaluation with four individuals who are BLV. We distill our findings as recommendations to developers who may want to make their Block-based programming application accessible to individuals who are BLV.
Chapter
Assistive technology is developing rapidly to offer visually impaired people more versatile, safer, and sounder support, yet the intelligent tools most commonly reported by visually impaired users still have limitations. Despite recent technological innovation, it remains difficult to extend the assistance available to people with visual disabilities during mobility. This design therefore provides cost-effective, ultrasonic-based aids, comprising a hat, stick, and shoes, for visually impaired people, to support independence and freedom from additional assistance. Ultrasonic sensors inspect obstacles at different heights, and the user is simultaneously alerted via a buzzer. A vibration motor was also implemented as a substitute in areas with minimal signal coverage and in noisy environments. The buzzer and vibration motor are triggered when an obstacle is identified. A global positioning system (GPS) was also included in the design of the aids and synchronized with Google Maps through mounted buttons. A text message unit allows the user to send an SMS with the recorded latitude and longitude to the phone numbers saved in the Arduino in an emergency. The results revealed high system accuracy under different circumstances, indoor and outdoor, based on feedback from different users. In conclusion, this system is considered an inexpensive, user-friendly, and sensible assistive technique for blind and visually impaired people.
Conference Paper
Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired even when a text description is provided. While some tools exist to manually create accessible image descriptions, this work is time consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with the image. Our system collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored using an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze. We then report on two formative studies with blind users testing EyeDescribe. Our approach resulted in correct labels for all objects in our image set. Participants were able to better recall the location of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimum required effort.
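As a rough illustration of spatially registering speech to gaze, the sketch below assigns each transcribed word to the fixation that was active at the moment the word was spoken. This alignment rule, and the Fixation and SpokenWord structures, are assumptions made for illustration only, not EyeDescribe's actual pipeline.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float          # gaze position in image coordinates
    y: float
    t_start: float    # seconds
    t_end: float

@dataclass
class SpokenWord:
    text: str
    t: float          # timestamp of the word within the narration

def label_image(fixations: List[Fixation],
                words: List[SpokenWord]) -> List[Tuple[str, float, float]]:
    """Attach each spoken word to the fixation active when it was uttered."""
    labels = []
    for w in words:
        hit = next((f for f in fixations if f.t_start <= w.t <= f.t_end), None)
        if hit is not None:
            labels.append((w.text, hit.x, hit.y))
    return labels

# Example: "dog" spoken while the viewer fixated near (120, 80)
fix = [Fixation(120, 80, 0.0, 1.2), Fixation(300, 200, 1.3, 2.0)]
speech = [SpokenWord("dog", 0.6), SpokenWord("tree", 1.7)]
print(label_image(fix, speech))   # [('dog', 120, 80), ('tree', 300, 200)]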
Article
Digital fabrication technologies open new doors, and new challenges, for real-world support.
Conference Paper
Full-text available
Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications. We describe the design of Slide Rule, our interaction techniques, and a user study in which 10 blind people used Slide Rule and a button-based Pocket PC screen reader. Results show that Slide Rule was significantly faster than the button-based system, and was preferred by 7 of 10 users. However, users made more errors when using Slide Rule than when using the more familiar button-based system.
Conference Paper
Full-text available
Mobile devices provide people with disabilities new opportunities to act independently in the world. However, these empowering devices have their own accessibility challenges. We present a formative study that examines how people with visual and motor disabilities select, adapt, and use mobile devices in their daily lives. We interviewed 20 participants with visual and motor disabilities and asked about their current use of mobile devices, including how they select them, how they use them while away from home, and how they adapt to accessibility challenges when on the go. Following the interviews, 19 participants completed a diary study in which they recorded their experiences using mobile devices for one week. Our results show that people with visual and motor disabilities use a variety of strategies to adapt inaccessible mobile devices and successfully use them to perform everyday tasks and navigate independently. We provide guidelines for more accessible and empowering mobile device design.
Conference Paper
Full-text available
Access to digitally stored numerical data is currently very limited for sight impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
Conference Paper
Full-text available
We present Silicone iLluminated Active Peripherals (SLAP), a system of tangible, transparent widgets for use on vision-based multi-touch tabletops. SLAP Widgets are cast from silicone or made of acrylic and include sliders, knobs, keyboards, and keypads. They add tactile feedback to multi-touch tables and can be dynamically relabeled with rear projection. They are inexpensive, battery-free, and untethered widgets combining the flexibility of virtual objects with tangible affordances of physical objects. Our demonstration shows how SLAP Widgets can augment input on multi-touch tabletops with modest infrastructure costs.
Conference Paper
Full-text available
McSig" is a multimodal teaching and learning environ- ment for visually-impaired students to learn character shapes, handwriting and signatures collaboratively with their teachers. It combines haptic and audio output to real- ize the teacher"s pen input in parallel non-visual modalities. McSig is intended for teaching visually-impaired children how to handwrite characters (and from that signatures), something that is very difficult without visual feedback. We conducted an evaluation with eight visually-impaired chil- dren with a pre-test to assess their current skills with a set of character shapes, a training phase using McSig and then a post-test of the same character shapes to see if there were any improvements. The children could all use McSig and we saw significant improvements in the character shapes drawn, particularly by the completely blind children (many of whom could draw almost none of the characters before the test). In particular, the blind participants all expressed enjoyment and excitement about the system and using a computer to learn to handwrite.
Conference Paper
Full-text available
Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that "aligns" data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART.
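For readers unfamiliar with the procedure, the sketch below shows the align-and-rank preprocessing for the interaction effect in a two-factor design; the column names and the use of pandas are assumptions for illustration, and the paper's own ARTool/ARTweb remain the reference implementation. After alignment and ranking, an ordinary ANOVA is run on the ranked column and only the interaction term is interpreted.

import pandas as pd

def art_interaction_ranks(df, a="A", b="B", y="Y"):
    """Align-and-rank the response for the A x B interaction (two-factor ART)."""
    grand = df[y].mean()
    cell = df.groupby([a, b])[y].transform("mean")
    mean_a = df.groupby(a)[y].transform("mean")
    mean_b = df.groupby(b)[y].transform("mean")
    # Residual (Y minus its cell mean) plus the estimated interaction effect.
    aligned = (df[y] - cell) + (cell - mean_a - mean_b + grand)
    out = df.copy()
    out["Y_aligned"] = aligned
    out["Y_rank"] = aligned.rank(method="average")  # averaged ranks over all rows
    return out  # run a standard ANOVA on Y_rank and interpret only the A:B term

# Toy usage with Likert-style responses
data = pd.DataFrame({
    "A": ["a1", "a1", "a2", "a2"] * 3,
    "B": ["b1", "b2", "b1", "b2"] * 3,
    "Y": [1, 5, 4, 2, 2, 5, 4, 1, 1, 4, 5, 2],
})
print(art_interaction_ranks(data)[["A", "B", "Y", "Y_rank"]])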
Conference Paper
Full-text available
We present two experiments on the use of non-speech audio at an interactive multi-touch, multi-user tabletop display. We first investigate the use of two categories of reactive auditory feedback: affirmative sounds that confirm user actions and negative sounds that indicate errors. Our results show that affirmative auditory feedback may improve one's awareness of group activity at the expense of one's awareness of his or her own activity. Negative auditory feedback may also improve group awareness, but simultaneously increase the perception of errors for both the group and the individual. In our second experiment, we compare two methods of associating sounds to individuals in a co-located environment. Specifically, we compare localized sound, where each user has his or her own speaker, to coded sound, where users share one speaker, but the waveform of the sounds are varied so that a different sound is played for each user. Results of this experiment reinforce the presence of tension between group awareness and individual focus found in the first experiment. User feedback suggests that users are more easily able to identify who caused a sound when either localized or coded sound is used, but that they are also more able to focus on their individual work. Our experiments show that, in general, auditory feedback can be used in co-located collaborative applications to support either individual work or group awareness, but not both simultaneously, depending on how it is presented.
Conference Paper
Full-text available
The rapid development of large interactive wall displays has been accompanied by research on methods that allow people to interact with the display at a distance. The basic method for target acquisition is ray casting a cursor from one's pointing finger or hand position; the problem is that selection is slow and error-prone with small targets. A better method is the bubble cursor, which resizes the cursor's activation area to effectively enlarge the target size. The catch is that this technique's effectiveness depends on the proximity of surrounding targets: while beneficial in sparse spaces, it is less so when targets are densely packed together. Our method is the speech-filtered bubble ray, which uses speech to transform a dense target space into a sparse one. Our strategy builds on what people already do: people pointing to distant objects in a physical workspace typically disambiguate their choice through speech. For example, a person could point to a stack of books and say "the green one". Gesture indicates the approximate location for the search, and speech 'filters' unrelated books from the search. Our technique works the same way; a person specifies a property of the desired object, and only the locations of objects matching that property trigger the bubble size. In a controlled evaluation, people were faster and preferred using the speech-filtered bubble ray over the standard bubble ray and ray casting approach.
Conference Paper
Full-text available
Acquiring small targets on a tablet or touch screen can be challenging. To address the problem, researchers have proposed techniques that enlarge the effective size of targets by extending targets into adjacent screen space. When applied to targets organized in clusters, however, these techniques show little effect because there is no space to grow into. Unfortunately, target clusters are common in many popular applications. We present Starburst, a space partitioning algorithm that works for target clusters. Starburst identifies areas of available screen space, grows a line from each target into the available space, and then expands that line into a clickable surface. We present the basic algorithm and extensions. We then present 2 user studies in which Starburst led to a reduction in error rate by factors of 9 and 3 compared to traditional target expansion.
Conference Paper
Full-text available
We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.
Article
Full-text available
The NavTouch navigational method enables blind users to input text in a touch-screen device by performing directional gestures to navigate a vowel-indexed alphabet.
Article
Full-text available
This paper describes the development of a new technique for touchscreen interaction based on a single gesture-driven adaptive software button. The button is intended to substitute for the software keyboard and provides text-entry functionality. Input is accomplished through recognition of finger gestures comprising movements towards the eight basic directions, performed at any position on the screen. The target user group of such an interaction technique is primarily blind people, who could benefit significantly. The adaptability of the button provides complementary help and follows the style of interaction in a natural way. The analysis of the results, collected from twelve blindfolded subjects, revealed an encouraging tendency. During blind manipulation of the touch screen, three of the subjects achieved a maximal typing speed of about 12 wpm after five trials. This suggests that the technique developed is reliable and robust enough to be applied to diverse application platforms, including personal digital assistants.
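As a rough illustration of the kind of recognition involved, the sketch below classifies a finger stroke into one of the eight basic directions from its start and end points; the distance threshold and direction names are assumptions for illustration, not the paper's implementation.

import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_stroke(x0, y0, x1, y1, min_len=20.0):
    """Map a stroke to one of eight compass directions (screen y grows downward)."""
    dx, dy = x1 - x0, y0 - y1          # flip y so "up" is positive
    if math.hypot(dx, dy) < min_len:   # too short to count as a deliberate gesture
        return None
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]

print(classify_stroke(0, 0, 50, 0))    # 'E'
print(classify_stroke(0, 0, 30, -30))  # 'NE' (up and to the right on screen)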
Article
Full-text available
The research we are reporting here is part of our effort to develop a navigation system for the blind. Our long-term goal is to create a portable, self-contained system that will allow visually impaired individuals to travel through familiar and unfamiliar environments without the assistance of guides. The system, as it exists now, consists of the following functional components: (1) a module for determining the traveler's position and orientation in space, (2) a Geographic Information System comprising a detailed database of our test site and software for route planning and for obtaining information from the database, and (3) the user interface. The experiment reported here is concerned with one function of the navigation system: guiding the traveler along a predefined route. We evaluate guidance performance as a function of four different display modes: one involving spatialized sound from a virtual acoustic display, and three involving verbal commands issued by a synthetic speech display. The virtual display mode fared best in terms of both guidance performance and user preferences.
Conference Paper
We present Silicone iLluminated Active Peripherals (SLAP), a system of tangible, translucent widgets for use on multi-touch tabletops. SLAP Widgets are cast from silicone or made of acrylic, and include sliders, knobs, keyboards, and buttons. They add tactile feedback to multi-touch tables, improving input accuracy. Using rear projection, SLAP Widgets can be relabeled dynamically, providing inexpensive, battery-free, and untethered augmentations. Furthermore, SLAP combines the flexibility of virtual objects with physical affordances. We evaluate how SLAP Widgets influence the user experience on tabletops compared to virtual controls. Empirical studies show that SLAP Widgets are easy to use and outperform virtual controls significantly in terms of accuracy and overall interaction time.
Article
A solution is proposed for the important problem of testing for interaction in factorial experiments when Gaussian assumptions are violated. The proposed rank test can be implemented with existing statistical packages and provides a fix-up for the flawed rank transform procedure. Simulation results suggest that the test is valid for the small and moderate sample sizes typically found in practice when error distributions are symmetric or moderately skewed. The procedure has advantages over standard analysis of variance in the presence of outliers or when error distributions are heavy tailed.
Article
Given that the use of a tactile diagram or map usually facilitates and enhances the learning process for a blind or visually impaired user, the question we would like to address is, "Is the use of one sensory input sufficient in itself, or should other senses also be involved, and if so, how can we best make use of developing technology to assist in this multi-sensory learning process?" We believe that the development of the Talking Tactile Tablet (TTT), which combines tactile input with relevant and immediate audio data, not only improves the speed and ease with which the visually impaired user can learn, but reinforces learning through dual modality. In the context of neuroscience a great deal of evidence has been presented regarding the significance of cross-modal exchange between different sensing systems by researchers such as Le Doux, Hubel, Cynader and Frost. In the education context the effectiveness of multimodal interfaces and multi-sensory learning have been promoted from Montessori and Dewey through to more recent protagonists in the field such as Aldrich and Ungar. Our aim is to build on this body of knowledge whilst developing this new use for technology.
Article
Simulations are used to show that the ART (Aligned Ranks Transformation) procedure, when testing for interaction, is robust and almost as powerful as the F-test when the data satisfy the classical assumptions. When these assumptions are violated, the ART test is significantly more powerful than the F-test.
Article
Our paper on the use of heuristic information in graph searching defined a path-finding algorithm, A*, and proved that it had two important properties. In the notation of the paper, we proved that if the heuristic function ĥ(n) is a lower bound on the true minimal cost from node n to a goal node, then A* is admissible; i.e., it would find a minimal cost path if any path to a goal node existed. Further, we proved that if the heuristic function also satisfied something called the consistency assumption, then A* was optimal; i.e., it expanded no more nodes than any other admissible algorithm A no more informed than A*. These results were summarized in a book by one of us.
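Stated compactly, with $h^*(n)$ denoting the true minimal cost from $n$ to a goal and $c(n, n')$ the cost of the arc from $n$ to a successor $n'$ (notation added here for illustration), the two conditions are:

$$\text{admissibility:}\quad \hat{h}(n) \le h^*(n) \ \text{for all } n, \qquad \text{consistency:}\quad \hat{h}(n) \le c(n, n') + \hat{h}(n') \ \text{for every successor } n' \text{ of } n.$$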
Article
We introduce a geometric transformation that allows Voronoi diagrams to be computed using a sweepline technique. The transformation is used to obtain simple algorithms for computing the Voronoi diagram of point sites, of line segment sites, and of weighted point sites. All algorithms have O(n log n) worst-case running time and use O(n) space.
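In practice the diagram itself is usually obtained from a library rather than reimplemented; the sketch below uses scipy.spatial.Voronoi (built on Qhull rather than the sweepline transformation described above) simply to show what such an O(n log n) computation yields for point sites.

import numpy as np
from scipy.spatial import Voronoi

# Five point sites; in an access-overlay setting these could be on-screen targets.
sites = np.array([[0, 0], [2, 0], [1, 2], [3, 3], [0, 3]], dtype=float)
vor = Voronoi(sites)

print(vor.vertices)        # Voronoi vertices (circumcenters of Delaunay triangles)
print(vor.regions)         # vertex indices bounding each cell (-1 marks infinity)
print(vor.point_region)    # which region belongs to which input site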
Conference Paper
We demonstrate the most recent version of our system to communicate graphs and relational information to blind users. We have developed a system called exPLoring graphs at UMB (PLUMB) that displays a drawn graph on a tablet PC and uses auditory cues to help a blind user navigate the graph. This work has applications to assist blind individuals in Computer Science and other educational disciplines, navigation and map manipulation.
Conference Paper
This paper explores the prototype design of an auditory interface enhancement called the Sonic Grid that helps visually impaired users navigate GUI-based environments. The Sonic Grid provides an auditory representation of GUI elements embedded in a two-dimensional interface, giving a 'global' spatial context for the use of auditory icons, earcons and speech feedback. This paper introduces the Sonic Grid, discusses insights gained through participatory design with members of the visually impaired community, and suggests various applications of the technique, including its use to ease the learning curve for using computers by the visually impaired.
Conference Paper
We present the bubble cursor - a new target acquisition technique based on area cursors. The bubble cursor improves upon area cursors by dynamically resizing its activation area depending on the proximity of surrounding targets, such that only one target is selectable at any time. We also present two controlled experiments that evaluate bubble cursor performance in 1D and 2D target acquisition tasks, in complex situations with multiple targets of varying layout densities. Results show that the bubble cursor significantly outperforms the point cursor and the object pointing technique [7], and that bubble cursor performance can be accurately modeled and predicted using Fitts' law.
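A minimal sketch of the resizing rule for circular targets, reconstructed from the description above: the activation area grows just enough to capture the closest target but never far enough to reach a second one. The function names and the circle representation are assumptions for illustration, not the authors' implementation.

import math

def bubble_radius(cursor, targets):
    """targets: list of (x, y, r) circles; returns (selected_target, bubble_radius)."""
    def intersect_d(t):                       # distance to the nearest point of t
        x, y, r = t
        return math.dist(cursor, (x, y)) - r
    def contain_d(t):                         # distance to the farthest point of t
        x, y, r = t
        return math.dist(cursor, (x, y)) + r

    ordered = sorted(targets, key=intersect_d)
    closest, second = ordered[0], ordered[1]
    # Grow to swallow the closest target, but stop before touching the second one.
    radius = min(contain_d(closest), intersect_d(second))
    return closest, radius

targets = [(10, 0, 2), (30, 0, 2), (60, 5, 3)]
print(bubble_radius((0, 0), targets))   # selects the target at (10, 0)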
Conference Paper
Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.
Conference Paper
The normal effects of aging include some decline in cognitive, perceptual, and motor abilities. This can have a negative effect on the performance of a number of tasks, including basic pointing and selection tasks common to today's graphical user interfaces. This paper describes a study of the effectiveness of two interaction techniques: area cursors and sticky icons, in improving the performance of older adults in basic selection tasks. The study described here indicates that when combined, these techniques can decrease target selection times for older adults by as much as 50% when applied to the most difficult cases (smallest selection targets). At the same time these techniques are shown not to impede performance in cases known to be problematical for related techniques (e.g., differentiation between closely spaced targets) and to provide similar but smaller benefits for younger users.
Conference Paper
Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.
Conference Paper
This study examines methods for displaying distance information to blind travellers using sound, focussing on abstractions of methods currently used in commercial Electronic Travel Aids (ETAs). Ten blind participants assessed three sound encodings commonly used to convey distance information by ETAs: sound frequency (Pitch), Ecological Distance (ED), and temporal variation or Beat Rate (BR). Response time and response correctness were chosen as measures. Pitch variation was found to be the least effective encoding, which is a surprise because most ETAs encode distance as Pitch. Tempo, or BR, encoding was found to be superior to Pitch. ED, which was simulated by filtering high frequencies and decreasing intensity with distance, was found to be best. Grouping BR and ED redundantly slightly outperformed ED. Consistent polarity across participants was found in ED and BR but not in Pitch encoding.
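As an illustration of the encodings compared, the sketch below maps an obstacle distance to a pitch, a beat rate, and an "ecological" low-pass cutoff plus attenuation; the ranges and linear mappings are assumptions for illustration, not the parameters used in the study.

def encode_distance(d, d_max=5.0):
    """Map a distance in metres (0..d_max) to three candidate sound encodings."""
    d = max(0.0, min(d, d_max))
    ratio = d / d_max                          # 0 = very close, 1 = far away
    return {
        "pitch_hz": 220 + (1 - ratio) * 660,   # closer -> higher pitch
        "beats_per_s": 1 + (1 - ratio) * 7,    # closer -> faster beat rate
        "lowpass_hz": 8000 - ratio * 6000,     # farther -> duller timbre (ED)
        "gain_db": -ratio * 18,                # farther -> quieter (ED)
    }

print(encode_distance(0.5))   # near obstacle: high pitch, fast beats, bright, loud
print(encode_distance(4.5))   # far obstacle: low pitch, slow beats, dull, quiet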
Conference Paper
Mapping applications on mobile devices have gained widespread popularity as a means for enhancing user mobility and ability to explore new locations and venues. Visually impaired users currently rely on computer text-to-speech or human-spoken descriptions of maps and indoor spaces. Unfortunately, speech-based descriptions are limited in their ability to succinctly convey complex layouts or spatial positioning. This paper presents Timbremap, a sonification interface enabling visually impaired users to explore complex indoor layouts using off-the-shelf touch-screen mobile devices. This is achieved using audio feedback to guide the user's finger on the device's touch interface to convey geometry. Our user-study evaluation shows Timbremap is effective in conveying non-trivial geometry and enabling visually impaired users to explore indoor layouts.
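A simplified version of the guidance idea, assuming a target polyline and a single finger position: audio feedback is keyed to how far the finger is from the geometry, so sound tells the user whether they are on the path. The thresholds and the on-path/near/off cue mapping are illustrative assumptions rather than Timbremap's actual design.

import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all 2-tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def audio_cue(finger, path, on_path=10.0, near=40.0):
    """Return a coarse feedback cue for a finger position relative to a polyline."""
    d = min(point_segment_distance(finger, path[i], path[i + 1])
            for i in range(len(path) - 1))
    if d <= on_path:
        return {"tone": "on-path", "volume": 1.0}
    if d <= near:
        return {"tone": "near", "volume": 1.0 - (d - on_path) / (near - on_path)}
    return {"tone": "off", "volume": 0.0}

corridor = [(0, 0), (200, 0), (200, 150)]        # an L-shaped indoor corridor
print(audio_cue((50, 5), corridor))              # on the path
print(audio_cue((50, 80), corridor))             # far off the path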
Conference Paper
Mobile devices with multi-touch capabilities are becoming increasingly common, largely due to the success of the Apple iPhone and iPod Touch. While there have been some advances in touchscreen accessibility for blind people, touchscreens remain inaccessible in many ways. Recent research has demonstrated that there is great potential in leveraging multi-touch capabilities to increase the accessibility of touchscreen applications for blind people. We have created No-Look Notes, an eyes-free text entry system that uses multi-touch input and audio output. No-Look Notes was implemented on Apple’s iPhone platform. We have performed a within-subjects (n = 10) user study of both No-Look Notes and the text entry component of Apple’s VoiceOver, the recently released official accessibility component on the iPhone. No-Look Notes significantly outperformed VoiceOver in terms of speed, accuracy and user preference.
Conference Paper
We introduce a system that allows four users to each receive sound from a private audio channel while using a shared tabletop display. In order to explore how private audio channels affect a collaborative work environment, we conducted a user study with this system. The results reveal differences in work strategies when groups are presented with individual versus public audio, and suggest that the use of private audio does not impede group communication and may positively impact group dynamics. We discuss the findings, as well as their implications for the design of future audio-based "single display privacyware" systems.
Conference Paper
We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.
Conference Paper
Touch-sensitive tablets and their use in human-computer interaction are discussed. It is shown that such devices have some important properties that differentiate them from other input devices (such as mice and joysticks). The analysis serves two purposes: (1) it sheds light on touch tablets, and (2) it demonstrates how other devices might be approached. Three specific distinctions between touch tablets and one-button mice are drawn; these concern sensing and the use of templates. These distinctions are reinforced, and possible uses of touch tablets are illustrated, in an example application. Potential enhancements to touch tablets and other input devices are discussed, as are some inherent problems. The paper concludes with recommendations for future work.
Article
Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
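A minimal sketch of the strategy described: best-first search ordered by g(n) + ĥ(n), assuming a dictionary-based graph, a consistent heuristic (closed nodes are never reopened), and illustrative names not taken from the paper.

import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}; h: admissible heuristic h(n)."""
    open_heap = [(h(start), 0, start, None)]   # entries are (f = g + h, g, node, parent)
    parents, g_best = {}, {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in parents:                    # already expanded; assumes consistency
            continue
        parents[node] = parent
        if node == goal:                       # reconstruct the minimal-cost path
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path)), g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < g_best.get(nbr, float("inf")):
                g_best[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, node))
    return None, float("inf")

# Toy graph with a zero heuristic (always admissible and consistent).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(a_star(graph, lambda n: 0, "A", "D"))    # (['A', 'B', 'C', 'D'], 3)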
Reisinger, D. Universities reject Kindle over inaccessibility for the blind. CNET (2009). http://cnet.co/gSjyv9
U.S. Bureau of Engraving and Printing. EyeNote. (2011). http://moneyfactory.gov/images/EyeNote_Press_Release_4_4-19_2_4.pdf
Vanderheiden, G.C. Use of audio-haptic interface techniques to allow nonvisual access to touchscreen appliances. HFES 40, (1996), 1266.
Carew, S. Touch-screen gadgets alienate blind. Reuters (2009). http://reut.rs/gHji5X
Landau, S. and Wells, L. Merging tactile sensory input and audio data by means of the Talking Tactile Tablet.
Loomis, J.M., Golledge, R.G., and Klatzky, R.L. Navigation System for the Blind: Auditory Display Modes and Guidance. Presence: Teleoperators and Virtual Environments.