Article

A Comparison of Three Nonvisual Methods for Presenting Scientific Graphs


Abstract

This study implemented three different methods for presenting scientific graphs to visually impaired people: audition, kinesthetics, and a combination of the two. The results indicate that the combination of the audio and kinesthetic modalities can be a promising medium for representing common scientific graphs to people who are visually impaired.


... The literature on accessibility of mathematics [8,39,41,48] can also be applied to the formula in Excel™. Previous research on the accessibility of graphs [13,45,55,56] can be applied to the accessibility of Excel™ charts. ...
... Many researchers [13,22,47] propose the use of a magnetic force to pull the user toward the desired virtual object. In the work of Roth et al. [45], virtual fixtures and vibrations are used to haptically render line graphs. ...
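The "magnetic force" attraction mentioned in these citing works can be illustrated with a minimal sketch. This is not code from any of the cited systems; the gain and attraction radius are assumed values, and a real haptic device would additionally clamp the force to its maximum output:

```python
import math

def attraction_force(cursor, target, k=0.5, radius=0.02):
    """Spring-like 'magnetic' pull toward a virtual target point.

    Returns a 2-D force vector pointing from the cursor to the target;
    zero outside the attraction radius. The gain k and radius are
    illustrative values, not taken from the cited systems.
    """
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)
    # Hooke's-law pull: force grows linearly with distance from the target.
    return (k * dx, k * dy)
```

Within the radius the force snaps the cursor onto the object (e.g., a line in a graph); outside it, the user explores freely, which is the behavior these citing papers describe.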
... Several researchers have leveraged the benefits of both audio and haptic presentations by enhancing one of the two methods with cues drawn from the other modality. In [45], the haptic presentation of graphs (using the Logitech WingMan mouse) is enhanced by speech cues, achieving a greater recognition rate. Aural cues can help the user locate a position in the haptic space when it cannot be identified by the sense of touch alone, as illustrated in [22,45]. ...
Article
The problem with non-visual navigation of the information in an Excel™ spreadsheet is that, with current technologies, no overview of the different components found in the spreadsheet is available to the user. Several attributes of spreadsheets make them difficult to navigate for individuals who are blind: the large amount of information stored in the spreadsheet, the multi-dimensional nature of the contents, and the several features it includes (e.g., charts and tables) cannot be readily linearized by a screen reader or Braille display. A user-centered design paradigm was followed to build an accessible system for non-visual navigation of Microsoft Excel™ spreadsheets. The proposed system provides the user with a hierarchical overview of the navigated components in an Excel™ spreadsheet. The system is multi-modal, and it provides the user with different navigation and reading modes for the non-visual navigation of a spreadsheet. This helps users adapt the non-visual navigation to the task they need to accomplish.
... Many researchers [11; 20; 22] use a magnetic force to pull the user towards the desired virtual object. In the work of Roth et al. [23], virtual fixtures and vibrations are used to haptically render line graphs. Fritz et al. [22] use a light force to present axes and gridlines. ...
... Multi-modal Presentations: Several researchers have leveraged the benefits of both audio and haptic presentations by enhancing one of the two methods with cues drawn from the other modality. In [23], the haptic presentation of graphs (using the Logitech WingMan mouse) is enhanced by speech cues, achieving a greater recognition rate. Aural cues can help the user locate a position in the haptic space when it cannot be identified by the sense of touch alone. ...
... Nonspeech sounds can also help provide the user with an outline of the graph. In the work of Roth et al. [23], sounds are used to warn the user if the cursor leaves the line. ...
Article
Full-text available
Several solutions, based on aural and haptic feedback, have been developed to enable access to complex on-line information for people with visual impairments. Nevertheless, there are several components of widely used software applications that are still beyond the reach of screen readers and Braille displays. This paper investigates the non-visual accessibility issues associated with the graphing component of Microsoft Excel™. The goal is to provide flexible multi-modal navigation schemes which can help visually impaired users in comprehending Excel charts. The methodology identifies the need for three strategies used in interaction: exploratory, guided, and summarization. Switching between them supports the development of a mental model of a chart. Aural cues and commentaries are integrated in a haptic presentation to aid understanding of the presented chart. The methodology has been implemented using the Novint Falcon haptic device.
... Many researchers [Jay et al. 2008; Sjostrom et al. 2003; Fritz and Barner 1999] propose the use of a magnetic force to pull the user towards the desired virtual object. In the work of Roth et al. [Roth et al. 2002], virtual fixtures and vibrations are used to haptically render line graphs. Fritz et al. [Fritz and Barner 1999] use a light force to present axes and gridlines. ...
... Multi-modal Presentations: Several researchers have leveraged the benefits of both audio and haptic presentations by enhancing one of the two methods with cues drawn from the other modality. In [Roth et al. 2002], the haptic presentation of graphs (using the Logitech WingMan mouse) is enhanced by speech cues, achieving a greater recognition rate. Aural cues can help the user locate a position in the haptic space when it cannot be identified by the sense of touch alone, as illustrated in [Jay et al. 2008; Roth et al. 2002]. ...
Article
Several solutions, based on aural and haptic feedback, have been developed to enable access to complex on-line and digital information contents for people with visual impairments. Nevertheless, there are several components of widely used software applications that are still beyond the reach of traditional screen readers and Braille displays. This article investigates the nonvisual accessibility issues associated with the graphing component of Microsoft Excel and proposes a novel approach and system. The goal is to provide flexible multi-modal presentation schemes which can help visually impaired users comprehend the most commonly used two-dimensional business charts, demonstrated within the familiar context of Excel charts. The methodology identifies the need for three distinct strategies in the user's interaction with a chart: exploratory, guided, and summarization. These strategies have been implemented using a multimodal approach, which combines aural cues, speech commentaries, and three-dimensional haptic feedback. The prototype implementation and the preliminary studies suggest that multimodality can be effectively realized and that users prefer to intertwine these strategies to gain an understanding of the content of charts. The strategies have been implemented in a system which makes use of the Novint Falcon haptic device and is integrated as a plug-in in Microsoft Excel.
... The works of Ina (1996), Ladner et al. (2005), Miele and Marston (2005) and Watanabe et al. (2014) are some examples. Finally, other authors opt for multimodality, combining haptic solutions with data sonification and other stimuli (Kennel, 1996;Fritz and Barner, 1999;Yu et al., 2000;Roth et al., 2002;Yu and Brewster, 2003;Iglesias et al., 2004;McGookin and Brewster, 2006;Wall and Brewster, 2006;Doush et al., 2009;Goncu et al., 2010). ...
Article
Purpose: Statistical charts are an essential source of information in academic papers. Charts have an important role in conveying, clarifying and simplifying the research results provided by the authors, but they present some accessibility barriers for people with low vision. This article aims to evaluate the accessibility of the statistical charts published in the library and information science (LIS) journals with the greatest impact factor.
Design/methodology/approach: A list of heuristic indicators developed by the authors has been used to assess the accessibility of statistical charts for people with low vision. The heuristics have been applied to a sample of charts from 2019 issues of ten LIS journals with the highest impact factor according to the ranking of the JCR.
Findings: The current practices of image submission do not follow basic recommended accessibility guidelines such as color contrast or the use of textual alternatives. Some incongruities between the technical suggestions for image submission and their application in the analyzed charts also emerged. The main problems identified are: poor text alternatives, insufficient contrast ratio between adjacent colors and the absence of customization options. Authoring tools do not help authors fulfill these requirements.
Research limitations/implications: The sample is not very extensive; nonetheless, it is representative of common practices and the most frequent accessibility problems in this context.
Social implications: The heuristics proposed are a good starting point to generate guidelines for authors when preparing their papers for publication and to guide journal publishers in creating accessible documents. Users with low vision, a highly prevalent condition, will benefit from the improvements.
Originality/value: The results of this research provide key insights into low-vision accessibility barriers not considered in previous literature and can be a starting point for their solution.
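The insufficient contrast ratios reported in these findings are measured against the WCAG definition of contrast ratio. As an illustration (the colour values below are examples, not data from the study), the WCAG 2.1 formula can be computed as follows:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance of an sRGB colour with 0-255 channels."""
    def linearize(c):
        c /= 255.0
        # Piecewise sRGB-to-linear conversion defined by WCAG 2.1.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """Contrast ratio between two colours, ranging from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# contrast_ratio((255, 255, 255), (0, 0, 0)) -> approximately 21.0
```

WCAG 2.1 success criterion 1.4.11 (non-text contrast) requires at least 3:1 between adjacent graphical elements, which is the threshold relevant to chart bars, lines and areas.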
... A good number of the haptic alternatives proposed in the literature combine the generation of tactile alternatives with the verbalization of additional information, or use sonification to mitigate the inherent limitations of these solutions. Proposals in this area include Kennel (1996); Fritz and Barner (1999), who additionally use a light source to present the axes and gridlines of charts; Ramloll, Yu, Brewster, Ridel, Burton and Dimigen (2000), who combine a tactile representation with sonification techniques; Roth, Kamel, Petrucci and Pun (2002), who analyze the use of speech support together with the haptic presentation of graphs; Yu and Brewster (2003), who use speech to provide information about the values of the chart; Iglesias, Casado, Gutierrez, Barbero, Avizzano, Marcheschi and Bergamasco (2004), who introduce a virtual environment combining auditory and haptic cues through an interface that gives people with visual impairments access to different types of maps and statistical charts (line, bar and pie); McGookin and Brewster (2006), who, in addition to speech, incorporate Phantom Omni devices, designed for 3-D modeling and capable of capturing touch, together with high-contrast color schemes for users with some residual vision; Wall and Brewster (2006), who propose an interface providing a tactile version of a pie chart via a graphics tablet and a stylus in combination with verbalized information; Doush et al. (2009), who, from the Office Open XML data extracted from an Excel document, identify the available elements (chart type, labels, scales, etc.)
and generate a three-dimensional tactile alternative through the OpenGL API, together with speech support provided by the Microsoft Speech API, which they use to convey information about the chart as well as the position where the user is located; and Goncu, Marriott and Hurst (2010), who analyze the usability of tactile representations, including touchscreens, in combination with data tables and the support of the JAWS screen reader. In this last study, user tests showed a preference for the data table over the tactile chart; on the other hand, the combination of audio and touch also proved more efficient for task completion than either of the other two systems separately. ...
Article
Full-text available
This first issue of the journal includes two special sections: a section with research articles selected according to impact and quality criteria from the works received at the "Interacción 2019" conference held last June in Donostia/San Sebastián, and another section of profiles of research groups related to the field of human-computer interaction (IPO), intended to contribute to mapping the IPO community and to publicize the present and future research activities of researchers from Spain and Latin America. Our thanks go to the guest editors, to all the authors for their contributions, and to all the reviewers involved in the reviewing work. [http://revista.aipo.es/index.php/INTERACCION/issue/view/1]
... In another study, Roth et al. (2002) compared three different methods for presenting scientific graphs to individuals with visual impairments. These methods were auditory, kinesthetic, or a combination of the two. ...
... In the current literature there also exist a number of related studies with a sonification aim similar to that described here, but in a variety of different contexts and with different goals. For example, some attempts have been made to sonify numerical data for people with visual impairment; in some cases this is combined and/or compared with kinaesthetic methods used to represent the same data [Kennel 1996; Stevens et al. 1997; Roth et al. 2002]. Perhaps one of the earliest studies was performed by Mansur et al. [1985], who used a sinusoidal wave whose frequency sonified the ordinate value of a numerical function while the abscissa was mapped to time. ...
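Mansur's mapping, as summarized in the context above (ordinate to frequency, abscissa to time), can be sketched in a few lines. This is an illustrative reconstruction, not the original implementation; the frequency range, sample rate and per-point duration are assumed values:

```python
import math

def sonify(ys, f_min=220.0, f_max=880.0, sr=8000, dur=0.25):
    """Mansur-style sonification sketch: each ordinate value is mapped to
    the frequency of a sine tone, and the abscissa unfolds as time.

    Returns raw audio samples in [-1, 1] at sample rate `sr`, with one
    tone of `dur` seconds per data point.
    """
    lo, hi = min(ys), max(ys)
    samples = []
    for y in ys:
        # Linear mapping of the data range onto the frequency range.
        t = 0.0 if hi == lo else (y - lo) / (hi - lo)
        freq = f_min + t * (f_max - f_min)
        for n in range(int(sr * dur)):
            samples.append(math.sin(2 * math.pi * freq * n / sr))
    return samples
```

Playing the returned samples back at the given sample rate produces a rising pitch for increasing data values, which is the basic effect the sonification studies cited here rely on.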
Article
In this article we present an approach that uses sound to communicate geometrical data related to a virtual object. This has been developed in the framework of a multimodal interface for product design. The interface allows a designer to evaluate the quality of a 3-D shape using touch, vision, and sound. Two important considerations addressed in this article are the nature of the data that is sonified and the haptic interaction between the user and the interface, which in fact triggers the sound and influences its characteristics. Based on these considerations, we present a number of sonification strategies that are designed to map the geometrical data of interest into sound. The fundamental frequency of various sounds was used to convey the curve shape or the curvature to the listeners. Two evaluation experiments are described: one involved participants with varied backgrounds, the other involved the intended users, i.e., participants with a background in industrial design. The results show that, independent of the sonification method used and independent of whether the curve shape or the curvature was sonified, the sonification was quite successful. In the first experiment participants had a success rate of about 80% in a multiple choice task; in the second experiment it took the participants on average less than 20 seconds to find the maximum, minimum or inflection points of the curvature of a test curve.
... The resulting force fields are processed using either the Phantom Desktop or the CyberGrasp haptic device. An auditory-haptic system that uses force-feedback devices complemented by auditory information has been designed by [22, 98, 99]. In a first phase, a sighted person has to prepare an image to be rendered by sketching it and associating auditory information to key elements of the drawing. ...
Article
Full-text available
This paper reviews the state of the art in the field of assistive devices for sight-handicapped people. It concentrates in particular on systems that use image and video processing for converting visual data into an alternate rendering modality appropriate for a blind user. Such alternate modalities can be auditory, haptic, or a combination of both. There is thus a need for modality conversion, from the visual modality to another one; this is where image and video processing plays a crucial role. The possible alternate sensory channels are examined with the purpose of using them to present visual information to totally blind persons. Aids that either already exist or are still under development are then presented, with a distinction made according to the final output channel. Haptic encoding is the most often used, by means of either tactile or combined tactile/kinesthetic encoding of the visual data. Auditory encoding may lead to low-cost devices, but there is a need to handle the high information loss incurred when transforming visual data into auditory data. Despite a higher technical complexity, audio/haptic encoding has the advantage of making use of all of the user's available sensory channels.
Conference Paper
Full-text available
This paper presents some of our research on the use of sound in a multimodal interface. The aim of this interface is to support product design, where the designer is able to physically interact with a virtual object. The requirements of the system include the interactive sonification of geometrical data relating to the virtual object. In this paper we present three alternative sonification approaches designed to satisfy this condition. We also outline a user evaluation strategy aimed at measuring the performance and added value of the different sonification approaches.
... The lack of statistically significant differences between auditory and haptic feedback indicates the need for further studies evaluating the relative value of each, and the value of combinations of the two. Mixed results in terms of a preferred mode of feedback were also found in [19]. Yu et al., however, did find that multimodal (haptic and auditory) representation can enhance a user's ability to interpret graphs using a force feedback device in some cases [30]. ...
Conference Paper
Full-text available
We propose the use of a haptic touchscreen to convey graphical and mathematical concepts through aural and/or vibratory tactile feedback. We hypothesize that an important application of such a display will be in teaching visually impaired students concepts that are traditionally learned almost entirely visually. This paper describes initial feasibility studies using a commercially available haptic touchscreen to display grids, points, lines, and shapes - some of the first visual graphical entities students encounter in K-12 mathematics education, and from which more complex lessons can be constructed. We conducted user studies designed to evaluate perception of these objects through haptic feedback alone, auditory feedback alone, and combinations of the two. Our results indicate that both sensory channels can be valuable in user perception.
... In the current literature there exist a number of studies with a similar aim, but in a variety of different contexts. For example, some attempts have been made to sonify numerical data for people with visual impairment; in some cases this is combined and/or compared with kinaesthetic methods used to represent the same data [8,9,10,11]. Perhaps one of the earliest studies was performed by Mansur, who used a sinusoidal wave whose frequency sonified the ordinate value of a numerical function while the abscissa was mapped to time. ...
Conference Paper
Full-text available
This paper presents a number of different sonification approaches that aim to communicate geometrical data, specifically curve shape and curvature information, of virtual 3-D objects. The system described here is part of a multi-modal augmented reality environment in which users interact with virtual models through the modalities vision, hearing and touch. An experiment designed to assess the performance of the sonification strategies is described and the key findings are presented and discussed.
... Of note, these studies examined auditory line graphs, where no more than one y-axis value was displayed for a given x-axis value. Although auditory graph researchers have investigated auditory scatterplots [Bonebright et al. 2001; Flowers et al. 1997], box-whisker plots [Flowers and Hauer 1992; Lane 2003, 2005], histograms [Flowers and Hauer 1993], and tabular data [Stockman et al. 2005], the majority of auditory graph research has examined auditory line graphs [Bonebright et al. 2001; Brewster and Murray 2000; L. M. Brown et al. 2002; L. M. Flowers and Hauer 1995; Mansur et al. 1985; Roth et al. 2002; Walker 2002, 2005; Turnage et al. 1996; Walker and Nees 2005b]. This is not surprising, considering that line graphs accounted for 72.5% of graphs appearing in academic journals and 50.1% of all graphs appearing in newspapers in the sample obtained by Zacks et al. [2002]. ...
Article
Full-text available
Auditory graphs—displays that represent quantitative information with sound—have the potential to make data (and therefore science) more accessible for diverse user populations. No research to date, however, has systematically addressed the attributes of data that contribute to the complexity (the ease or difficulty of comprehension) of auditory graphs. A pair of studies examined the role of data density (i.e., the number of discrete data points presented per second) and the number of trend reversals for both point-estimation and trend-identification tasks with auditory graphs. For the point-estimation task, more trend reversals led to performance decrements. For the trend-identification task, a large main effect was again observed for trend reversals, but an interaction suggested that the effect of the number of trend reversals was different across lower data densities (i.e., as density increased from 1 to 2 data points per second). Results are discussed in terms of data sonification applications and rhythmic theories of auditory pattern perception.
... Walker [12, 57] used magnitude estimation to determine the preferred scaling slopes for mapping frequency to a number of conceptual data dimensions; interestingly, the scaling slope for the same quantitative changes in data may be different depending upon the conceptual data dimension (e.g., temperature, size, pressure, etc.) being represented. (Footnote 1: Some redundant mappings may be detrimental to performance with auditory graphs. One study [58] attempted to use panning and spatial elevation as well as frequency to represent data in auditory graphs, which resulted in exceptionally poor performance in the reproduction of simple linear increasing functions.) ...
Conference Paper
Full-text available
Auditory graph design and implementation often has been subject to criticisms of arbitrary or atheoretical decision-making processes in both research and application. Despite increasing interest in auditory displays coupled with more than two decades of auditory graph research, no theoretical models of how a listener processes an auditory graph have been proposed. The current paper seeks to present a conceptual-level account of the factors relevant to the comprehension of auditory graphs by human listeners. We attempt to make links to the relevant literature on basic auditory perception, and we offer explicit justification for, or discussion of, a number of common design practices that are often justified only implicitly or by intuition in the auditory graph literature. Finally, we take initial steps toward a qualitative, conceptual-level model of auditory graph comprehension that will help to organize the available data on auditory graph comprehension and make predictions for future research and applications with auditory graphs.
Article
During the Covid-19 pandemic, people rely on the Internet in order to obtain information that can help them understand the coronavirus crisis. This situation has exposed the need to ensure that everyone has access to essential information on equal terms. During this situation, statistical charts have been used to display data related to the pandemic, and have had an important role in conveying, clarifying and simplifying information provided by governments and health organisations. Scientific literature and the guidelines published by organizations have focused on proposing solutions to make charts accessible for blind people or people with very little residual vision. However, the same efforts are not made towards people with low vision, despite their higher prevalence in the population of users with visual impairment. This paper reviews the accessibility of the statistical charts about the Covid-19 crisis for people with low vision that were published by the Brazilian, British, Russian, Spanish, European Union, and the United States' governments and also by the World Health Organization and Johns Hopkins University, relating to the countries most severely affected by the pandemic. The review is based on specific heuristic indicators, with a mixed quantitative and qualitative approach. Overall, the reviewed charts offer a reasonable level of accessibility, although some relevant problems affecting many of the low-vision profiles remain to be solved. The main problems identified are: poor text alternatives in both raster images and SVG charts; incompatibility with a keyboard interface; insufficient non-text contrast against adjacent colours (in chart elements such as bars, lines or areas); no customization options; and the lack of a print version optimized for users for whom reading on screen is challenging.
Conference Paper
Statistical charts play a primordial role in different areas of our life, such as information, education, communication or research. However, authors and content publishers do not always follow accessibility criteria in the design and creation of this type of content. Considering these premises, this work reviews the main approaches on which the scientific literature has focused so far to improve the accessibility of statistical charts: text alternatives, sonification, tactile alternatives and multimodal alternatives, with the purpose of evaluating their suitability for people with low vision and color blindness. Finally, some solutions are suggested that seem technologically viable, based on the use of JavaScript libraries for the creation of interactive charts, in combination with other standards such as WAI-ARIA and the use of patterns to fill areas as a strategy to differentiate visual variables.
Article
Protecting the lives and rights of impaired people and promoting their social participation is a paramount principle today. For visually impaired people in particular, map usage and route recognition are important functions for promoting social participation. We have therefore been developing a new assistive interface with which they can intuitively recognize maps using audio and touch panels. The assistive interface is universally designed so that not only visually impaired people but also non-impaired people can enjoy using interactive digital map contents together. This paper introduces our recent progress on the assistive interface, called the One Octave Scale Interface. The effectiveness of the interface was confirmed through experiments on graph and map recognition and a walking experiment after presenting a route guide map.
Article
Full-text available
Current methods available to represent graphical information to individuals who are blind or visually impaired are too expensive and/or cumbersome to be of practical use. Therefore, there is a need for an affordable display device capable of rendering graphical information through stimulation of working sensory systems. To further facilitate individuals, the device must be portable, to enable use in many different settings, and highly affordable, as most individuals who are blind are also unemployed. In this paper a dynamic-display haptic device is described that is both affordable (<$25 US) and portable (<1 kg). The device uses a photo-interrupter to detect contrasts in light reflectivity for an image and vibrating solenoid motors to provide tactile feedback. The device is worn like a glove, so the tactile feedback combines with the body's kinesthetic sense of the position of the hand to convey a haptic image. Preliminary tests show that a single-finger model of the device has on average a 50% object identification accuracy, which is higher than the accuracy for raised-line drawings. The device can be expanded for use with multiple fingers, while still remaining affordable (<$50 US).
Article
Although intelligent interactive systems have been the focus of many research efforts, very few have addressed systems for individuals with disabilities. This article presents our methodology for an intelligent interactive system that provides individuals with sight impairments with access to the content of information graphics (such as bar charts and line graphs) in popular media. The article describes the methodology underlying the system's intelligent behavior, its interface for interacting with users, examples processed by the implemented system, and evaluation studies both of the methodology and of the effectiveness of the overall system. This research advances universal access to electronic documents.
Conference Paper
Full-text available
In cluster-based sensor networks, part of the sensor nodes can be switched into a sleep state in order to conserve energy if their neighbors can provide the same or almost the same sensing coverage. However, as the number of nodes in the sleep state increases, coverage for the cluster is degraded. It is crucial to maintain high coverage of clusters in order to preserve performance. In this paper, we propose a coverage- aware sleep scheduling (CS) algorithm to improve the coverage of each cluster. Compared with two previous schemes: the randomized scheduling (RS) scheme and the distance-based scheduling (DS) scheme, the CS algorithm maintains higher coverage, while guaranteeing the same lifetime for the cluster. The CS algorithm thus improves the overall performance of the cluster-based sensor networks.
Conference Paper
Protecting the lives and rights of people with impairments and enabling them to participate more fully in society is an important subject we have to work on. For visually impaired people in particular, we need to support mobility in order to give them more opportunity for social participation. To support their mobility, map usage and route recognition are indispensable. We have therefore been developing a method by which visually impaired people can intuitively recognize a universally designed interactive map, called the "One Octave Scale Interface" (abbr. OOSI). The method is presented using sounds and touch panels of the kind now common in PCs and smartphones. We confirmed its effectiveness through experiments on figure recognition and on walking a route from a guide map; however, OOSI still lacked design criteria for figures and maps. To improve OOSI, design criteria for interactive maps and graphs are needed. In this paper, we propose three design criteria for OOSI to improve figure recognition by visually impaired people. Eleven visually impaired people who use braille participated in figure-recognition experiments, and the evaluation results show a relation between braille proficiency and spatial recognition.
Article
Visually impaired people lack proper user interfaces that allow them to easily make use of modern technology. This problem may be solved with multimodal user interfaces designed to take into account the type and degree of disability. The purpose of the study presented in this article was to create usable games for visually impaired children making use of low-cost vibro-tactile devices in multimodal applications. A tactile memory game using multimodal navigation support with high-contrast visual feedback and audio cues was implemented. The game was designed to be played with a tactile gamepad. Different vibrations were to be remembered instead of the sounds or embossed pictures that are common in memory games for blind children. The usability and playability of the game were tested with a group of seven 12–13-year-old visually impaired children. The results showed that the game design was successful and the tactile gamepad was usable. The game got a positive response from the focus group.
Conference Paper
This paper reports on the design and the evaluation of an audio-haptic tool that enables blind computer users to explore digital pictures using the hearing and feeling modalities. The tool is divided into two entities: a description tool and an exploration tool. The description tool allows moderators (sighted persons) to describe a scene. First, the scene is manually segmented into a set of objects (car, tree, house, etc.). For each object, the moderator can define a behavior, which corresponds either to an auditory rendering (i.e., using speech or non-speech sounds) and/or to a kinesthetic one. The blind person uses the exploration tool in order to obtain an audio-haptic rendering of the segmented image, as previously defined by the moderator. This interactive exploration is obtained by means of a planar force feedback device. The system was evaluated by a group of ten blind participants. Results revealed that the audio encoding of visual objects combined with an active kinesthetic exploration enables blind people to obtain a fairly good mental image of the explored scene.
Conference Paper
Full-text available
This paper presents one option for a research agenda for future work in auditory graphs. The main agenda items suggested are effectiveness of auditory graphs; sonification tools; role of memory and attention; real-world applications; longitudinal studies of learning; and neurophysiological research. A brief review of past research in each area is given to provide general information about relevant studies and is meant to serve as a starting point rather than as a comprehensive overview of the literature on auditory graph studies.
Article
Full-text available
In studying grasping and manipulation we find two very different approaches to the subject: knowledge-based approaches based primarily on empirical studies of human grasping and manipulation, and analytical approaches based primarily on physical models of the manipulation process. This chapter begins with a review of studies of human grasping, in particular our development of a grasp taxonomy and an expert system for predicting human grasp choice. These studies show how object geometry and task requirements (as well as hand capabilities and tactile sensing) combine to dictate grasp choice. We then consider analytic models of grasping and manipulation with robotic hands. To keep the mathematics tractable, these models require numerous simplifications which restrict their generality. Despite their differences, the two approaches can be correlated. This provides insight into why people grasp and manipulate objects as they do, and suggests different approaches for robotic grasp and manipulation planning. The results also bear upon such issues such as object representation and hand design.
Conference Paper
Full-text available
Existing drawing tools for blind users give inadequate contextual feedback on the state of the drawing, leaving blind users unable to comprehend and successfully produce graphical information. We have investigated a tactile method of drawing used by blind users that mimics drawing with a pencil and paper. Our study revealed a set of properties that must be incorporated into drawing tools for blind users, including giving feedback for relocating important points, determining angles, and communicating the overall structure of the drawing. We describe a grid-based model that provides these properties in a primitive-based 2D graphics environment, and we introduce its use in drawing and other graphical interactions. KEYWORDS: Non-visual drawing tools, GUIs for blind users, contextual inquiry, feedback, grid
Article
Full-text available
Data visualization is a technique used to explore real or simulated data by representing it in a form more suitable for comprehension. This form is usually visual since vision provides a means to perceive large quantities of spatial information quickly. However, people who are blind or visually impaired must rely on other senses to accomplish this perception. Haptic interface technology makes digital information tangible, which can provide an additional medium for data exploration and analysis. Unfortunately, the amount of information that can be perceived through a haptic interface is considerably less than that which can be perceived through vision, so a haptic environment must be enhanced to aid the comprehension of the display. This enhancement includes speech output and the addition of object properties such as friction and texture. Textures are generated which can be modified according to a characteristic or property of the object to which it is applied. For example, textures can be used as an analog to color in graphical displays to highlight variations in data. Taking all of these factors into account, methods for representing various forms of data are presented here with the goal of providing a haptic visualization system without the need for a visual component. The data forms considered include one-, two-, and three-dimensional (1-D, 2-D, and 3-D) data which can be rendered using points, lines, surfaces, or vector fields similar to traditional graphical displays. The end result is a system for the haptic display of these common data sets which is accessible for people with visual impairments
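One way to realize the "texture as an analog to color" idea mentioned above is to let a normalized data value control the spatial frequency of a sinusoidal haptic texture, so denser ridges signal higher values under the fingertip. The mapping constants below are assumptions for illustration, not parameters from the paper.

```python
import math

# Hypothetical sketch: map a data value in [0, 1] to the surface height
# of a sinusoidal texture whose spatial frequency grows with the value.
# base_freq/max_freq (cycles per metre) and amplitude are assumptions.

def texture_height(value, position, base_freq=10.0, max_freq=80.0,
                   amplitude=0.001):
    """Surface height (metres) at `position` (metres) along the texture."""
    freq = base_freq + value * (max_freq - base_freq)  # cycles per metre
    return amplitude * math.sin(2 * math.pi * freq * position)

# The same probe position feels different ridge densities for different
# data values, which is what lets texture play the role of color.
print(texture_height(0.0, 0.025))
# → 0.001 (peak of the low-frequency texture)
```

A haptic renderer would evaluate such a height field each servo tick to compute the contact force; the sketch only shows the value-to-texture mapping.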
Article
Full-text available
Line graphs stand as an established information visualisation and analysis technique taught at various levels of difficulty according to standard Mathematics curricula. Blind individuals cannot use line graphs as a visualisation and analytic tool because they currently exist primarily in the visual medium. The research described in this paper aims at making line graphs accessible to blind students through auditory and haptic media. We describe (1) our design space for representing line graphs, (2) the technology we use to develop our prototypes and (3) the insights from our preliminary work. KEYWORDS: force feedback, haptic display, line graphs, spatial sound, and visual impairment. INTRODUCTION: This work is being carried out as part of a 3-year EPSRC-funded project (MultiVis). Its aim is to investigate different human sensory modalities to create systems that will make statistical information representations accessible to blind people. In this paper, we focus on line graphs.
Article
A system for the creation of computer-generated sound patterns of two-dimensional line graphs is described. The objectives of the system are to provide the blind with a means of understanding line graphs in the holistic manner used by those with sight. A continuously varying pitch is used to represent motion in the x direction. To test the feasibility of using sound to represent graphs, a prototype system was developed and human factors experiments were performed. Fourteen subjects were used to compare the tactile-graph methods normally used by the blind with these new sound graphs. It was discovered that mathematical concepts such as symmetry, monotonicity, and the slopes of lines could be determined quickly using sound. Even better performance may be expected with additional training. The flexibility, speed, cost-effectiveness, and greater measure of independence provided to blind or sight-impaired users by these methods were demonstrated.
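The core of such a sound-graph system is a mapping from data values onto a pitch range, so the graph can be heard as a rising and falling tone. A minimal sketch of that idea follows; the frequency range and linear mapping are assumptions, not the paper's exact design.

```python
# Hypothetical sketch of pitch-mapped sonification: each y value of a
# line graph is mapped linearly onto [f_low, f_high] Hz, preserving the
# graph's shape in the pitch contour. The range 220–880 Hz (two octaves)
# is an illustrative assumption.

def y_to_pitch(ys, f_low=220.0, f_high=880.0):
    """Map data values onto a frequency range, preserving shape."""
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0  # avoid division by zero for flat graphs
    return [f_low + (y - lo) / span * (f_high - f_low) for y in ys]

# A symmetric graph produces a symmetric pitch contour, which is why
# properties like symmetry are quickly audible.
print(y_to_pitch([0, 1, 2, 1, 0]))
# → [220.0, 550.0, 880.0, 550.0, 220.0]
```

Playing these frequencies in sequence at a fixed rate makes elapsed time stand in for the x axis, which is how listeners recover slope and monotonicity by ear.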
Article
By applying multidimensional scaling procedures and other quantitative analyses to perceptual dissimilarity judgments, we compared the perceptual structure of visual line graphs depicting simulated time series data with that of auditory displays (musical graphs) presenting the same data. Highly similar and meaningful perceptual structures were demonstrated for both auditory and visual modalities, showing that important data characteristics (function slope, shape, and level) were perceptually salient in either presentation mode. Auditory graphics may be a highly useful alternative to traditional visual graphics for a variety of data presentation applications.
Conference Paper
In order to enhance operator performance and understanding within remote environments, most research and development of telepresence systems has been directed towards improving the fidelity of the link between operator and environment. Although higher fidelity interfaces are important to the advancement of a telepresence system, the beneficial effects of corrupting the link between operator and remote environment by introducing abstract perceptual information into the interface called virtual fixtures are described
Article
This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there are no differences in the rates of recognition between the two groups. Non-musicians are also shown to be equal in performance to musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates. (C) 1995 Academic Press Limited
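The time saving behind parallel earcons comes from summing the two component signals sample-by-sample, so the compound message lasts only as long as a single part. The sketch below is an assumption-laden illustration of that mixing step (sample rate, frequencies, and averaging to avoid clipping are all choices of this sketch, not of the paper).

```python
import math

# Hypothetical sketch of parallel-earcon mixing: two component tones are
# summed sample-by-sample, so the compound earcon takes the duration of
# one part rather than two played back-to-back.

def tone(freq, duration, rate=8000):
    """A plain sine tone as a list of samples in [-1, 1]."""
    n = int(duration * rate)
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def parallel_earcon(freq_a, freq_b, duration, rate=8000):
    """Mix two tones into one signal of the same duration."""
    a, b = tone(freq_a, duration, rate), tone(freq_b, duration, rate)
    return [(x + y) / 2 for x, y in zip(a, b)]  # average to avoid clipping

mixed = parallel_earcon(440.0, 660.0, 0.5)
print(len(mixed))
# → 4000 samples: the same length as a single 0.5 s component
```

Real earcons are structured rhythmic-melodic motives rather than plain tones, but the same additive mixing applies; auditory stream segregation is what lets listeners still recognize each part.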
Article
This paper describes "From Dots to Shapes" (FDTS), an auditory platform composed of three classic games ("Simon", "Point Connecting" and "Concentration") for blind and visually impaired pupils. Each game was adapted to work on a concept of Euclidean geometry. The tool is based on sonic and haptic interaction, and could therefore be used by special educators as an aid for teaching basic planar geometry.
Human grasp choice and robotic grasp analysis
  • M R Cutkosky
  • R D Howe
The IT potential of haptics-Touch access for people with disabilities
  • C Sjöström
Testing the equivalence of visual and auditory graphs
  • S C Sahyun
  • J A Gardner