RESEARCH PAPER
Non-visual exploration of geographic maps: Does sonification help?
FRANCO DELOGU 1,2,3, MASSIMILIANO PALMIERO 1,2, STEFANO FEDERICI 3,4, CATHERINE PLAISANT 5, HAIXIA ZHAO 5 & OLIVETTI BELARDINELLI 1,3

1 Department of Psychology, 'Sapienza' University of Rome, Rome, Italy; 2 RIKEN, Brain Science Institute, Wako, Japan; 3 ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy; 4 Facoltà di Scienze della Formazione, Università degli Studi di Perugia, Perugia, Italy; 5 Institute for Advanced Computer Studies, University of Maryland, Maryland, USA

Accepted June 2009
Abstract
Purpose. This study aims at evaluating the effectiveness of sonification as a means of providing access to geo-referenced information for users with visual impairments.
Method. Thirty-five participants (10 congenitally blind, 10 with acquired blindness, and 15 blindfolded sighted) completed four tasks of progressive difficulty. During each task, participants first explored a sonified map, using either a tablet or a keyboard to move across regions and listening to sounds giving information about the current location. The participants were then asked to identify, among four tactile maps, the one that crossmodally corresponded to the sonified map they had just explored. Finally, participants answered a self-report questionnaire on understanding and satisfaction.
Results. Participants achieved high accuracy in all four tactile map discrimination tasks. No significant performance difference was found either between subjects who used the keyboard and those who used the tablet, or among the three groups of blind and sighted participants. Differences between groups and interfaces were found in the usage strategies. High levels of satisfaction and understanding of the tools and tasks emerged from users' reports.
Keywords: Sonification, blindness, mental mapping, non-visual exploration, haptics
Introduction
There is a considerable amount of information that is not available to blind individuals, in particular information encoded in spatial and graphical forms (e.g., diagrams, graphs, geo-political maps). An open question for experimental research is whether the issue is limited to a problem of access to spatial information by users who are blind, or whether there are also cognitive constraints on the construction of internal spatial representations for people who have never had visual experience or who lost their vision in early infancy.
Experimental studies provide extensive evidence
that blind people experience difficulties in forming
mental representations of spatial structures [1–4] and
that they face difficulties in spatial imagery as well
as in navigating efficiently through spaces [5].
These results support the inference that visual
experience is necessary to the acquisition of spatial
concepts. However, there is no general agreement
about the nature of spatial processing difficulties
experienced by blind people. In particular, it is
unclear whether these spatial difficulties are due to
performance, competence, or ability deficits [1].
Several studies question this sight-centered model
of spatial processing, arguing that spatial representa-
tions generated by different sensory modalities
(haptic and auditory, for instance) or by verbal input
are functionally equivalent to visual representations
of space [6,7]. This view supports the existence of an
amodal spatial representation system that receives
inputs from different sensory channels [8]. Accord-
ing to this orientation, blind people should be able
to organize functionally equivalent spatial maps
using tactile, auditory, and kinaesthetic information.
Correspondence: Franco Delogu, Laboratory for Perceptual Dynamics, RIKEN BSI, 2–1 Hirosawa, Wako-shi, Saitama 351-0198, Japan.
E-mail: francodelogu@brain.riken.jp
Disability and Rehabilitation: Assistive Technology, May 2010; 5(3): 164–174
ISSN 1748-3107 print/ISSN 1748-3115 online ª2010 Informa UK Ltd.
DOI: 10.3109/17483100903100277
Disabil Rehabil Assist Technol Downloaded from informahealthcare.com by Prof. Stefano Federici on 04/24/10
For personal use only.
The theoretical debate that pits a crucial role for vision against a multisensory, or amodal, spatial representation remains open [9] and has important empirical implications for the design of effective assistive tools.
In recent decades, the attempts to provide easier
and full access to spatial information for visually-
impaired people have considerably increased. Owing
to more sophisticated integrations of non-visual
perceptual stimuli and conceptual information, new
possibilities of information access have been devel-
oped [10]. One of the first studies addressing whether other sensory information can compensate for the lack of visual experience in forming mental spatial imagery was carried out by Kerr, who used a mental scanning task after haptic learning. The
results showed a strong relation between scanning
time and Euclidean distances both for sighted and
for congenitally blind people, although the response
times were significantly longer for blind than for
sighted subjects [11]. The same chronometric
performance for blind, blindfolded, and sighted
participants was found by Röder and Rösler using a
similar paradigm [12]. Aleman et al., by asking participants to memorize the position of a target cube in two-dimensional (2D) and three-dimensional (3D) matrices after haptic exploration, confirmed that blind participants were capable of good performance, although they made significantly more errors than sighted participants [13]. Although there
is encouraging experimental evidence for effective spatial mapping in the absence of vision, other evidence shows that blind subjects have more difficulty than sighted ones in memorizing the spatial positions of target objects [14] and in maintaining different items of spatial information simultaneously [15].
Currently, the most common solution for display-
ing spatial information in the absence of vision is by
tactile mapping. For a long time, tactile maps were
the only means of communicating spatial, and particularly geographical, information to visually impaired people. The design and production of tactile
maps have involved researchers from several fields:
applied geographers, computer scientists and psy-
chologists, amongst others. Since the early work [16–
19], cartographers have adopted different solutions
both for designing and producing tactile maps: raised
inks, thermoforming, vacuum forming, etching, and
accretion for tactile maps [20]. In parallel, psycho-
logical research investigated the impact of tactile
mapping on the cognitive system, quantifying acces-
sibility, identifying problems, possibilities and limita-
tions of tactile maps also in comparison to visually
presented maps. Implementations of haptic tools
within virtual navigation environments provided
encouraging results, as blind participants were able
to generate a verbal description and a physical model
of the explored virtual environment, with a general
improvement of navigation performance in real
space [5].
Despite their effectiveness, tactile maps showed
several limitations and problems both logistically
and cognitively. Firstly, tactile maps are difficult and
expensive to produce, are of low resolution, and
rarely reach a high level of quality [21]. In com-
parison with the great abundance of visual maps, the
variety of raised-line maps is rather scarce. Braille
labeling is also a problem in small maps, where there is not enough room to label all the countries,
regions, or features within the map. Furthermore,
tactile spatial exploration is necessarily a slow serial
process compared with rapid visual exploration.
Another limitation concerns the possibility of representing several features together (as in geopolitical
maps). Multi-feature representation in tactile maps
must be addressed with caution. When maps contain
too much information, the discriminability of single features can be impaired, and even localization and
shape acquisition can be undermined. Clark and
Clark demonstrated that simplified maps, rather than
exact representations, lead to a better shape recogni-
tion [22].
In order to overcome these limitations, multidisciplinary research has been developing new ways of accessing map information that should be easier for blind people to learn and use.
The main alternatives to haptic technologies are
audio-based systems. Recent studies have demon-
strated that the auditory channel, with or without
tactile stimulation, can be a useful alternative for the
transmission of spatial information [23,24]. Specifi-
cally, during the last few decades sonification has
been used to develop new tools for transmitting
spatial information. Sonification, namely the use of
non-speech audio to convey information, allows the
transformation of data relations into acoustical
ones in order to facilitate communication or inter-
pretation [25]. Amongst others, sonification has
successfully been used to present geographical,
environmental, or census data [26]. One of the first
sonification systems was Soundgraphs, developed by Mansur et al. [27]. Soundgraphs allows the
presentation of line graphs in sound. Time is
mapped to the x-axis and pitch to the y-axis. The
shape of the graph can then be heard as a rising or
falling note playing over time. Mansur et al.’s initial
results were very promising as users were able to
identify the types of curves very easily along
with maximum/minimum points. Moreover, the
system allows listeners to get an overview by
listening to all data very quickly, which is not easily
provided using speech. Another sonification tool, the KnowWhere™ system, was developed by
Krueger and Gilden to present geographic informa-
tion to users with visual impairments [28]. Users’
hands rested upon an illuminated surface covered
with a tactile grid and monitored by a ceiling-
mounted video camera. The video image was
analyzed by specialized processors and the location
of the user’s fingertip on the light table was
determined. An invisible virtual map was defined on the desk surface, and the feature the user was currently pointing to was signaled by a sound identifying the feature, or kind of feature, that had been 'touched'. Ramloll et al. found that using
non-speech sound in 2D numerical tables signifi-
cantly improved the ability of vision impaired users
to locate and acquire data [29]. Afonso et al.
implemented a virtual environment that incorporates
a high quality virtual 3D audio interface [30]. The
aim was to determine how a verbal description and
the active exploration of an environment affected the
building up of a mental spatial representation.
The authors found that active exploration was
better than verbal learning in generating spatial
imagery. Blind participants were able to generate
correct spatial representations of an environment,
although they needed more time than sighted
participants.
Evidence from both theoretical and empirical research seems to converge on the idea that a combination of sound and touch works better than a single modality for the non-visual display of spatial information [29,31]. Recently, the Human
Computer Interaction Laboratory of the University
of Maryland developed a new sonification tool,
iSonic, to facilitate the exploration of geo-referenced
information by users with visual impairments [32]
(Figure 1). In a pilot study, the authors showed that
blind users were able to recognize geographic maps
by using an interactive sonification system [33]. A
later case study demonstrated that blind participants
could perform complex tasks using combinations of
sonified tables and maps [34].
Considering multisensory integration as a promis-
ing way to improve access to spatial information for blind people, we aimed at verifying whether or not
blind users are able to generate effective mental
representations of geographical information when
using the iSonic sonification tool.
In the present study, we intended to find out, both
in quantitative and qualitative terms, (1) whether sonification is a useful means of providing accurate geo-political representations (specifically choropleth
maps), and (2) whether sonified map exploration is
affected by different input modalities of exploration.
At a theoretical level, starting from the claim that
spatial representation is not mandatorily linked to
visual modality, we aimed at verifying that (1) blind
subjects (congenitally or acquired) do not differ from
blindfolded sighted subjects in the identification of
sonified maps, and (2) blind people use different
perceptual and cognitive strategies, e.g., focusing
more specifically on sound information than sighted
subjects.
Method
Participants
Thirty-five subjects, 16 females and 19 males,
participated in the study (mean age = 32.46; SD = 6.73). The sample comprised three groups: 10 'early blind' participants, 10 'late blind' participants, and 15 'sighted blindfolded'. Subjects were considered early blind when visual acuity in their better eye fell below 1/300 within the third year after birth; we considered individuals who had sight at birth and lost it after age 3 as late blind.
Motor performance and hearing abilities were
normal in all subjects.
All participants were right-handed. Subjects were
all Italians and therefore likely to be poorly acquainted with the geographical representations of the USA used as stimulus material.
Materials and tasks
Four sonified auditory maps and 16 tactile plastic
maps were used as experimental materials. Auditory
maps represented patterns of unemployment rates in the USA. The rate for each state was sonified by a musical pitch. Three levels of pitch were used: a high pitch (G4 at 783.9 Hz) for high unemployment, a medium pitch (E4 at 659.2 Hz) for medium levels, and a low pitch (C4 at 523.2 Hz) for low unemployment rates. The patterns of the first three maps were of progressive complexity, from the first, clearly divided into three areas, to the third, more varied and realistic (see the leftmost maps in Figure 2).

Figure 1. The iSonic software being used by a user with visual impairments. As the user moves her finger on the touch-tablet, a stereo sound is produced representing the value of a selected variable for the state at the finger location. Commands can be issued using the keyboard, such as an automatic sweep of all the states in a region.
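To make the mapping concrete, the value-to-pitch scheme described above can be sketched in a few lines of Python. The frequencies are those reported in the text; the function name and the category cut-offs are illustrative assumptions, since the paper does not report how the rates were binned into the three levels.

PITCH_FOR_LEVEL = {
    "low": 523.2,     # C4 in the paper's labeling
    "medium": 659.2,  # E4
    "high": 783.9,    # G4
}

def pitch_for_rate(rate, low_cut=4.0, high_cut=7.0):
    """Return the frequency (Hz) sonifying a state's unemployment rate.
    The cut-offs are placeholders, not the study's actual thresholds."""
    if rate < low_cut:
        return PITCH_FOR_LEVEL["low"]
    if rate < high_cut:
        return PITCH_FOR_LEVEL["medium"]
    return PITCH_FOR_LEVEL["high"]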
The fourth auditory map represented the state of Idaho and its counties (Figure 3). For this map, the three levels of unemployment were not provided, as the map was used to assess how subjects explore and
represent shape, size, and external boundaries of a
map, not the pattern of values.
A total of 16 tactile plastic maps (42 cm × 30 cm)
were used in the four tasks of the experiment. For
each task, four tactile maps were used, with one
corresponding to the sonified map (target), and the
remaining three being the distractors. The distractor
maps, identical in shape to the target, varied only in their pattern of unemployment (see the smaller maps in Figure 2). In order to test whether subjects were able to detect subtle variations, we also included distractors with patterns very similar to the target's.
State and county borders consisted of embossed
lines. For the first three tasks, three different textures
represented the three categories of the value (rate of unemployment) on the tactile maps: a dotted texture denoted a high value, herringbone a medium value, and stripes a low value (see Figure 4 for an example).
For the fourth task, four tactile maps of four
different states were used: one target map corresponding to the sonified map (Idaho; see Figure 3) and three distractors (New Hampshire,
Utah, and Washington). The tactile maps repre-
sented the general shapes of the states. There were
no textures inside the counties of the four tactile
maps.
A paper questionnaire was given at the end about
user satisfaction and general understanding of the
tools and tasks.
Figure 2. Maps of unemployment patterns across the USA. The
unemployment value is represented by different shades of grey (the
darker, the higher). In the sonified maps, users would hear a high-pitch tone for high values and lower-pitch tones for lower values.
Figure 3. Auditory map of Idaho shape. The map is blank.
Figure 4. An example of tactile maps with different textures
representing the three values of unemployment rate.
Apparatus
Map exploration was conducted using iSonic, an interactive software environment that assists users with visual impairments in exploring
geo-referenced data using coordinated maps and
tables, augmented with nontextual sounds and
speech output [34]. Only a fraction of the features
of iSonic were used to conduct this study. In
iSonic, different pitches of a violin timbre indicate
different levels of the chosen variable (such as schooling, unemployment, or crime rate statistics). Stereo sounds provide information about
the left–right location of the geographical area.
Table I lists the sonification features of iSonic that
we used.
iSonic can be operated using two input devices: a
computer keyboard and a touch-tablet. The key-
board interface allows a combination of techniques
to navigate the map. Users can perform the
following actions: start an automatic sweep to
obtain a quick overview of the data patterns;
perform a relative navigation using arrow keys to
move from one region to the neighboring ones; use the numerical keypad to explore the 3 × 3 grid of zones into which the map is divided (see Figure 5), with 1 playing the bottom-left zone, 9 the top-right zone, and so on; and request details of a data item (e.g., the name of the current state).
The modality of keyboard exploration was struc-
tured as shown in Table II.
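As an illustration of the numeric-keypad navigation just described, the following Python sketch maps a keypad digit to a zone of the 3 × 3 grid; the helper name is hypothetical and not part of iSonic's actual code.

def keypad_to_zone(key):
    """Map a keypad digit (1-9) to (column, row) in the 3 x 3 grid,
    with (0, 0) the bottom-left zone and (2, 2) the top-right,
    mirroring the physical layout of a numeric keypad."""
    if not 1 <= key <= 9:
        raise ValueError("expected a keypad digit from 1 to 9")
    return (key - 1) % 3, (key - 1) // 3

# keypad_to_zone(1) -> (0, 0): plays the bottom-left zone
# keypad_to_zone(9) -> (2, 2): plays the top-right zone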
The touch-tablet interface allows users to point to
and explore the display in a more direct way: when
users touch the smooth surface they hear the sound
whose pitch indicates the value for the region at the
finger position. They can also drag their finger on the surface and hear a continuous sound play while their finger moves within a region. The sound
stops when the user lifts her/his finger. When the
finger crosses a region border, a border sound is
played. Another sound is played when the user
moves outside the map. In both interfaces, by
pressing ‘0’ on the number pad (as shown in the
Table II) participants could hear a sweep of the
entire map.
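The touch-tablet feedback logic described above can be sketched as follows in Python. The rectangular regions and the event labels are toy stand-ins: the paper does not describe iSonic's actual geometry lookup or audio engine at this level of detail.

def region_at(x, y, regions):
    """Return the id of the region containing (x, y), or None when the
    finger is outside the map. `regions` maps ids to bounding boxes
    (xmin, ymin, xmax, ymax), a toy stand-in for real map geometry."""
    for rid, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return rid
    return None

def feedback_for_sample(x, y, last_region, regions, pitch_for):
    """Return the audio events for one finger sample, plus the new
    'last region' so that border crossings can be detected."""
    region = region_at(x, y, regions)
    events = []
    if region is None:
        events.append("outside-map sound")   # finger has left the map
    else:
        if last_region is not None and region != last_region:
            events.append("border click")    # a border was crossed
        events.append(f"tone at {pitch_for[region]} Hz")  # continuous tone
    return events, region

# Example: dragging rightwards across the border between two regions.
regions = {"A": (0, 0, 5, 10), "B": (5, 0, 10, 10)}
pitch_for = {"A": 523.2, "B": 783.9}
events, last = feedback_for_sample(2, 5, None, regions, pitch_for)  # tone only
events, last = feedback_for_sample(6, 5, last, regions, pitch_for)  # click + tone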
The two interfaces show important differences:
keyboard exploration could be defined as qualita-
tive, discrete, and symbolic whereas touch-tablet
exploration as quantitative, continuous, and analo-
gical. More precisely, keyboard exploration is:
qualitative, because it informs about the state
values; discrete, because it progresses by means of
all-or-nothing steps; symbolic, because arrow-keys
typically move in only four directions. In contrast,
the touch-tablet interaction is: quantitative, because
each movement gives information about shape and
size of each state; continuous, because it keeps
informing users also when they stay still on a spot;
analogical-proprioceptive, because the movement of
a part of the body mirrors an analogous movement
on the map.
Figure 5. Participants were asked to indicate which level of unemployment (low, medium, or high) was the most represented within each of the nine map zones depicted above.
Table I. Sounds and their corresponding map features.

Violin pitch, high: state with a high level of unemployment
Violin pitch, medium: state with a medium level of unemployment
Violin pitch, low: state with a low level of unemployment
Single click: region border
Fret guitar: background (area outside the map borders)
Stereo panning (relative loudness of the left and right channels): horizontal eccentricity (position on the left-right axis)
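As a worked example of the stereo panning entry in Table I, the Python sketch below derives left and right channel gains from horizontal position. The paper states only that relative loudness encodes left-right position; the equal-power law used here is an assumption for illustration.

import math

def pan_gains(x_norm):
    """Equal-power stereo gains for a horizontal position x_norm in
    [0, 1], where 0 is the left edge of the map and 1 the right edge."""
    angle = x_norm * math.pi / 2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

# pan_gains(0.0) -> (1.0, 0.0): hard left
# pan_gains(0.5) -> (0.707..., 0.707...): centre
# pan_gains(1.0) -> (~0.0, 1.0): hard right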
Table II. Keyboard exploration.
Procedure
Each experimental session was conducted in a
soundproof and dimly lit room and lasted about 1 h and 30 min. The whole session consisted of
three phases: a pre-experimental phase, the experi-
ment, and a post-experimental questionnaire. Sub-
jects were randomly assigned to either the keyboard
or the touch-tablet interface condition.
In the pre-experimental phase, participants were
tested for their basic auditory discrimination abilities
with an auditory localization task and a pitch
discrimination task. Then, in a training session of
about 10 min, they could explore sonified training
maps (different from the ones used in the experi-
mental tasks) to practice with the interface and the
logic of the experimental tasks.
In the experimental phase, subjects performed four
tasks. Each task consisted of auditory exploration,
tactile recognition, and questionnaire filling. The
interface condition (keyboard or touch-tablet) for
the exploration depended on the experimental group
each subject was assigned to. Subjects were in-
structed to explore the sonified map in order to learn
the overall data pattern. The first three tasks were to
explore the patterns of unemployment rate across
states. The fourth task was to explore the general
shape of the map. The purpose of the first three tasks
was to investigate the exploration and representation
of geo-referenced data patterns on maps, whereas the
fourth task was designed for assessing how subjects
explore and represent shape, size, and external
boundaries of the map. For each task, immediately
after exploring the auditory map, subjects were
presented with four tactile maps, one target and three
distractors. Subjects were invited to rate each of the
four tactile maps for its correspondence with the
sonified map, along a scale ranging from 0 to 10 (0
standing for ‘no correspondence at all’ and 10
standing for ‘perfect match’). Finally, participants
answered questions about the distribution of values across the states (for tasks 1, 2, and 3), or about the approximate shape of the map (rectangular, L-shaped, triangular, square, or circular) and the number of regions (for task 4). There was
a 5-min break between the tasks.
In the post-experimental phase, participants answered
a questionnaire about their satisfaction and under-
standing of the tool and tasks, based on a Likert scale from 0 ('completely disagree') to 6 ('completely agree'). The questionnaire covered
the following aspects: pitch sound detection, useful-
ness of stereo-panning, map shape discrimination,
discrimination of the shape of the regions inside the
Idaho map and their numerosity, understanding of
unemployment level distribution, software usability, ease of use, and pleasantness of the software.
Results
Accuracy
During each task, we asked participants to rate (from
0 to 10) the level of matching of each one of the four
tactile maps with the sonified map they explored. Only
one of the tactile maps (target) matched the sonified
map, whereas the other three were distractors.
An analysis of variance was performed for each of the four tasks, using Map (target and three distractors) as the independent within-subjects factor. High matching rates between the target tactile map and the sonified map (mean = 7.30, SD = 0.14) and low matching rates between the distractors and the sonified map (mean = 3.75, SD = 0.49) indicated high accuracy.
The effect was significant in all four tasks, with the following values: F(3, 102) = 19.33, p < 0.01 in task 1; F(3, 102) = 14.08, p < 0.01 in task 2; F(3, 102) = 12.40, p < 0.01 in task 3; and F(3, 102) = 25.03, p < 0.01 in task 4. The post hoc analyses (Fisher's
LSD) showed that in all tasks the target tactile map
was rated higher than all distractors. Considering
that the differences between the patterns of target
and distractors are in some cases very subtle (see for
example the second distractor map of task 1 and all
the distractors of task 3 in Figure 2), subjects demonstrated a clear representation of the unemployment patterns of the target maps.
A separate analysis was conducted considering
Interface (keyboard or touch-tablet) and Group (sighted vs. late blind vs. early blind subjects) as between-subjects factors and Task as a within-subjects factor. The algebraic difference between the matching rates for the target tactile map and those for the distractor maps was used as a measure of subjects' accuracy. This measure was the dependent variable of the analysis.
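For concreteness, this accuracy measure can be computed as in the Python sketch below. Taking the mean of the three distractor ratings is our reading of the 'algebraic difference'; the paper does not spell out the exact aggregation.

def accuracy(target_rating, distractor_ratings):
    """Target matching rate minus the mean distractor matching rate
    (ratings on the 0-10 scale used in the tasks); higher values mean
    sharper discrimination of the target."""
    return target_rating - sum(distractor_ratings) / len(distractor_ratings)

# With the reported group means: accuracy(7.30, [3.75, 3.75, 3.75]) -> 3.55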
The results showed that congenitally blind,
acquired blind, and sighted blindfolded subjects
did not differ in discriminating targets across the four tasks: F(2, 29) = 2.25, p = 0.12.
Regarding the influence of the interface, the use of
keyboard or touch-tablet during the exploration did
not lead to significant differences in discriminating
targets from distractors: F(1, 29) = 0.075, p = 0.78. The influence of Task was not significant, F(3, 87) = 1.39, p = 0.25, indicating that the discrimination of target maps from distractors was of comparable difficulty in the four tasks.
Questionnaire
After each one of the first three tasks, subjects were
asked to report the level of value (low, medium, high
unemployment, or water/background) that was the
most represented in each of the nine zones of the map shown in Figure 5. For task 4, subjects were asked to estimate the number of regions included in the map. Subjects were
particularly accurate in the first task (about 80% of
correct identification), in which the levels of unem-
ployment were clearly organized in three big patterns,
while in tasks 2 and 3, where the spatial organization
was more complex, they performed progressively
more poorly (62% and 53%, respectively). A one-
way within-subjects ANOVA confirmed that this difference was statistically significant, F(2, 16) = 8.34, p < 0.01. Task 1 differed from tasks 2 and 3, which in turn did not differ from each other (post hoc Fisher's Least Significant Difference (LSD): p < 0.05).
We did not find any difference, F(2, 24) = 1.7, p > 0.05, in the percentage of correct identification
of values into quadrants among sighted, congenitally
blind, and acquired blind subjects.
Interestingly, in task 4, when subjects had to estimate the number of regions inside the map, there was, in all conditions, a general tendency to underestimate the number of regions (mean = 25.02; SD = 14.02), reporting fewer than 44, the actual number of Idaho counties.
Final questionnaire
After all tasks were completed, participants answered
a set of questions (Likert-scale from 0 to 6) regarding
the general features of the task and the sonification
tool. A two-factor between-subject ANOVA was
performed, considering Interface and Group as
independent variables. Table III shows the complete
list of the questions and the level of significance both
for group and interface factors.
Subjects reported high levels of understanding and
satisfaction with the tool and the tasks, though less so for the more specific aspects and features of the maps (particularly the number and shape of regions and the
general shape of the maps). We found no statistically
significant difference between sighted and blind participants or between the two interface groups, with a few exceptions. Regarding the stereo panning feature (which sets the proportion of left and right signal amplitude as a function of the position of the source), blind subjects paid more attention to stereo panning than sighted participants: both groups of blind subjects reported being helped by stereo panning more than sighted subjects did (post hoc LSD: p < 0.05).
The groups also differed in their perception of the ease of use of the interface: blind subjects (both early
and late blind) rated the interface higher for ease of
use than the sighted participants, probably due to
their experience with non-visual interfaces.
Finally, no significant difference was found in
blind and sighted subjects’ answers about the
effectiveness of the interface.
Exploration modalities and strategies
From the log files of tasks 1 and 3, the easiest and the
hardest tasks respectively, we extracted several
quantitative parameters to analyze modalities and
strategies of exploration.
Regarding the number of steps (where a step is defined as a move from one state to another), the data showed that touch-tablet users performed more steps than keyboard users (on average 295 vs. 118). This difference is significant, F(1, 43) = 17.66, p < 0.001. The higher number of steps performed by touch-tablet users is reflected in higher exhaustivity, measured as the percentage of states explored. In fact, we found that touch-tablet users explored more states (83.20%) than keyboard users (62.98%), and this difference is significant, F(1, 39) = 27.93, p < 0.001. There was no significant influence of group either on the number of steps or on the exhaustivity of exploration. However, late blind and sighted participants grouped together showed greater exhaustivity than early blind participants, F(1, 41) = 4.49, p < 0.05.
Table III. Questions and significance levels of the final questionnaire (mean rating on the 0-6 scale; F-tests for the Group and Interface factors).

It is easy to distinguish the different sounds: rating 5.1; Group F(2,29) = 0.26, p = 0.76; Interface F(1,29) = 0.00, p = 0.99
Stereo panning is useful for orientation: rating 4.6; Group F(2,29) = 6.22, p = 0.00**; Interface F(1,29) = 1.10, p = 0.30
The shape of the maps is clear to me: rating 3.7; Group F(2,29) = 0.18, p = 0.83; Interface F(1,29) = 0.63, p = 0.43
The shape of the single regions was clear to me: rating 3.7; Group F(2,29) = 0.18, p = 0.83; Interface F(1,29) = 0.63, p = 0.43
The number of regions in the maps was clear to me: rating 2.0; Group F(2,29) = 0.02, p = 0.97; Interface F(1,29) = 0.47, p = 0.49
I could understand the distribution of the level of unemployment during the exploration: rating 4.2; Group F(2,29) = 1.96, p = 0.15; Interface F(1,29) = 0.04, p = 0.83
The tool is easy to use: rating 5.4; Group F(2,29) = 4.05, p = 0.02*; Interface F(1,29) = 1.54, p = 0.22
The tool is fun to use: rating 4.4; Group F(2,29) = 0.98, p = 0.38; Interface F(1,29) = 8.8, p = 0.00**
The tool is easy to learn: rating 5.1; Group F(2,29) = 1.53, p = 0.23; Interface F(1,29) = 1.61, p = 0.21

*Significant at the 0.05 level. **Significant at the 0.01 level.
We analyzed whether participants tended to exhibit linear movements by keeping a clear direction for more than two steps, and whether they maintained specific directions while moving. The percentage of 'keep-a-direction' steps over the total number of movements was high (54.69%). Through a one-way ANOVA, considering keep (the percentage of steps that do not change direction for at least two movements) as the dependent variable and interface as the between-subjects factor, we observed that keyboard users tended to keep a direction more than touch-tablet explorers, F(1, 39) = 17.64, p < 0.001. There was no significant influence of group on keep, F(2, 39) = 2.01, p > 0.05.
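A Python sketch of how the keep measure could be computed from an exploration log, assuming the log reduces to one direction label per step; the actual iSonic log format is not reported in the paper.

def keep_percentage(directions):
    """Percentage of steps that continue the direction of the previous
    step, i.e., belong to a run of at least two moves the same way."""
    if len(directions) < 2:
        return 0.0
    kept = sum(prev == cur for prev, cur in zip(directions, directions[1:]))
    return 100.0 * kept / (len(directions) - 1)

# keep_percentage(["right", "right", "right", "down", "down", "up"])
# -> 60.0 (three of the five transitions keep the direction)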
There was a significant preference for specific directions, F(3, 117) = 8.43, p < 0.001. The post hoc analysis showed that rightward movements (39.31%) were the most frequent, followed by leftward movements (30.60%), whereas downward (18.08%) and upward (11.99%) movements were performed less frequently. There was no influence of group on direction preference, F(6, 117) = 1.96, p > 0.05, but there was an influence of interface, F(3, 117) = 8.43, p < 0.001. As shown in
Figure 6, while touch-tablet users showed clear
preference for rightward movements (which in the
post hoc is significantly higher than all the other
directions), keyboard users preferred
horizontal movements (both rightward and leftward)
to vertical ones (both upward and downward).
Discussion
In all four tasks, participants showed high accuracy
in recognizing the tactile map corresponding to the
sonified map explored. This result indicates that, at
least to some extent, auditory stimulation can be
effectively used to present geo-referenced informa-
tion. Considering the abovementioned limitations of
traditional tactile maps, our findings have relevant empirical implications for blind individuals' access to geopolitical data and, more generally, to spatial information.
Considering the differences between blind and
sighted subjects, according to the hypothesis that
claims that vision plays a crucial role in setting up
spatial mapping during a critical or sensitive period
of development [35], we would expect early blind subjects to perform worse than late blind and blindfolded ones. However, our results go against
this theoretical expectation, suggesting the possibility
of equivalent performance of blind and sighted
subjects in spatial representation and orientation
obtained through modalities other than vision. In
fact, the three groups did not differ in performing the
map recognition task. Our results are consistent with
the hypothesis of an equivalence between sensory
modalities in transmitting spatial information [6,7]
and with the amodal theory of spatial representation
[8,36].
Nevertheless, we should consider that subjects,
both sighted and blind, had difficulties in recognizing
specific details of the map. In particular, all
participants had difficulty remembering the distribu-
tion of the values in complex patterns (task 2 and
task 3) and the number of regions inside a map (task
4). In accordance with experimental evidence, it
seems that sonification, like other non-visual display tools [37], at least in its present form, is better suited to navigating and learning the macrostructure of a spatial configuration than to transmitting spatial details. The latter would require a higher
engagement of working memory. Moreover, in the specific case of our experiment, the necessarily sequential acquisition of information may have progressively overshadowed previously acquired details. Future research should take more direct account of the relationships between display characteristics and the cognitive processes involved in exploration.
Regarding the relative effectiveness of the two
interfaces, we found that touch-tablet and keyboard
users showed no significant difference in the
performance. Both keyboard (exploring the map by
moving step-by-step with the arrow keys) and
touch-tablet (exploring the map by touching the
screen with a finger) exploration seem to lead to an
effective overall spatial representation. Consistent
with the amodal theory of spatial representations
[8,36], this result supports the idea that different perceptual experiences may result in comparable representations of the same spatial configuration. However, the
presence of representational differences, not detect-
able at a performance level, cannot be excluded.
Some differences in strategies and modalities
emerged from the log file analysis. Touch-tablet
users performed more steps than keyboard ones, moving faster along the map. These data could be due to the positional spatial feedback of proprioceptive exploration (by touch-tablet), which is lacking in discrete exploration (by keyboard).

Figure 6. Percentage of movements towards the four directions as a function of the interface.

It seems that a
more direct, analogical, on-line spatial processing is
guaranteed by touch-screen and that keyboard
exploration, being strictly symbolic and discrete,
requires greater working memory involvement to reconstruct the position after each step. Exploring by arrow keys requires spatial
reasoning, thus it is slower. It is possible that the
greater cognitive effort with the keyboard interface
may have an effect on the duration of learning. This
aspect needs further investigation. Keyboard users
also tended to be less exhaustive and may have
missed some parts of the map when moving by
discrete steps through states with irregular shapes.
The fact that arrow exploration is discrete, symbolic,
and less sensitive to irregularities leads keyboard
users to keep a specific direction more than touch-
tablet users. On the contrary, touch-tablet explora-
tion, which provides information about the shape of
states, can promote a more dynamic exploration
behavior, and consequently, more frequent changes
of direction.
Data show that there was a general preference for
rightward movements. In particular, touch-tablet
users preferred to move to the right more than the
other directions; this may be tied to hand lateralization and to reading direction, a well-acquired way of processing and acting in space for both blind and sighted individuals. Keyboard users,
instead, showed a general preference for horizontal
movements (both rightward and leftward). It must be
noted that the two keys (rightward and leftward) are
on the same row and may allow rapid backward/
forward control along the writing direction while the
hand maintains the same posture.
To summarize the analysis of exploration mod-
alities and strategies, the touch-tablet interface seems
to foster better exploration compared with the
keyboard interface. With proprioceptive cues, the touch-screen interface gives subjects constant awareness of the current exploration position. By contrast, with the keyboard interface,
subjects were forced to construct a mental mapping
by maintaining in memory and interpolating a large
amount of data that are only partially present and mostly extracted from previous positions. Moreover,
the lack of information could cause shape and
location distortions in subjects’ mental maps. In
fact, as subjects had no information about state size,
they could tend to normalize the dimensions of each
state, and as a consequence, to misplace the centre of
the map. This problem is particularly relevant in
cases like the USA map, considering the concentra-
tion of small states in the North-East and the
presence of the biggest states in the South-West, a distribution unfamiliar to Italian subjects. Another
limitation of keyboard exploration is that the absolute
position is only obtainable by the rehearsal of the
previous steps while with the touch-tablet it is
immediately inferable following the proprioceptive
feedback. However, a possible advantage of keyboard
exploration is that it allows a simpler categorization
of the map division into states. For example, it is
easier to categorize the number of states along the
left–right and bottom–up axes. Obviously this is an
advantage only for regular maps. By contrast, the touch-tablet seems better suited to exploring more complex maps and to detecting finer-scale details.
Regarding the influence of vision in exploration
modalities and strategies, we did not find differences
between early blind, late blind, and blindfolded
subjects: the equivalent performance of our three groups is further evidence for the amodal theory of spatial representations [8,36].
When rating the efficiency and the manageability
of the sonification tool and the tasks, our subjects
reported high levels of understanding and satisfac-
tion, even when they had difficulties in comprehend-
ing some specific features of the maps. We noticed that blind subjects reported being helped by the stereo panning more than sighted subjects did. This result is understandable, as, thanks to their experience, blind people pay more attention to non-visual environmental information and perform better than sighted people in sound localization [38,39]. Groups
also differed in their perception of the ease of use of the interface: blind subjects (both early and late) rated the tool easier to use than sighted ones did. This could again be due to their greater previous experience with, and attention to, non-visual events and environments.
Conclusion
To conclude, the sonification tool iSonic is useful for transmitting geo-referenced information in the absence of visual information, confirming the hypothesis that visual experience is not necessary for efficient spatial cognition [40]. Consistent with Millar [1], our results show that different senses are able to transmit spatial information, as spatial representations are not necessarily linked to any specific sensory modality. However, all participants, both blind and sighted, had difficulty acquiring fine details of the sonified maps. Such difficulties seem to be linked to the greater amount of time necessary to acquire data by touch and hearing compared with sight, and to a heavy working memory load [41,42].
Touch-screen exploration seems preferable to keyboard exploration for several reasons: among others, it allows direct integration between haptic-spatial cues and sonification, promotes more exhaustive exploration, causes fewer representational distortions than the keyboard, and is considered by users to be easier and more fun than the keyboard.
Future research
The results, limited by the specificity of the empirical
applied research and by the use of a restricted
ecological cartographic pattern, should be general-
ized by further studies. The design of more
articulated experimental paradigms implying a con-
trolled manipulation of number, size, and shape of
states inside the map could improve the external
validity. Moreover, finer-grained dependent measures, allowing a more detailed description of how the spatial patterns are mentally represented, are needed.
Despite the limited generalizability of our data,
this research confirms the importance of multi-
modal integration in the transmission of spatial
information. In fact, jointly with evidence from both
theoretical and empirical research, our research
supports the idea that a combination of sound and
touch will work better than a single modality for non-
visual display of spatial information. In particular,
the dynamic integration of auditory and haptic
capabilities could represent a new frontier for the
study of new effective non-visual displays allowing
users to access spatial information in more flexible
ways.
Declaration of interest: The authors report no
conflicts of interest. The authors alone are respon-
sible for the content and writing of the article.
References
1. Millar S. Understanding and representing space: theory and
evidence from studies with blind and sighted children.
Oxford: Oxford University Press; 1994.
2. Millar S. Understanding and representing spatial information.
Br J Vis Impair 1995;13:8–11.
3. Millar S. Theory, experiment and practical application in
research on visual impairment. Eur J Psychol Edu 1997;
12:415–430.
4. Ungar S, Blades M, Spencer C. The construction of cognitive
maps by children with visual impairments. In: Portugali J,
editor. The construction of cognitive maps. Dordrecht:
Kluwer Academic Publishers; 1996. pp 247–273.
5. Lahav O, Mioduser D. A blind person’s cognitive mapping of
new spaces using a haptic virtual environment. J Res Spec Edu
Needs 2003;3:172–177.
6. Avraamides M, Loomis J, Klatzky RL, Golledge RG. Func-
tional equivalence of spatial representations derived from
vision and language: evidence from allocentric judgments.
J Exp Psychol Learn Mem Cogn 2004;30:801–814.
7. de Vega M, Cocude M, Denis M, Rodrigo MJ, Zimmer H.
The interface between language and visuo-spatial representa-
tions. In: Denis M, Logie RH, Cornoldi C, de Vega M,
Engelkamp J, editors. Imagery, language and visuo-spatial
thinking. London: Psychology Press; 2001. pp 109–136.
8. Bryant DJ. A Spatial Representation System in Humans.
Psycoloquy. [Serial Online] 1992;3(16): Space (1). Electronic
Citation. http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?
3.16 VIA The INTERNET. Last accessed on March 11, 2008.
9. Thinus-Blanc C, Gaunet F. Representation of space in blind
persons: vision as a spatial sense? Psychol Bull 1997;
121:20–42.
10. Golledge R, Klatzky R, Loomis J. Cognitive mapping and
wayfinding by adults without vision. In: Portugali J, editors.
The construction of cognitive maps. The Netherlands:
Kluwer; 1996. pp 215–246.
11. Kerr NH. The role of vision in ‘visual imagery’ experiments:
evidence from the congenitally blind. J Exp Psychol Gen
1983;112:265–277.
12. Röder B, Rösler F. Visual input does not facilitate the scanning
of spatial images. J Ment Imagery 1998;22:165–181.
13. Aleman A, van Lee L, Mantione M, Verkoijnen I, de Haan E.
Visual imagery without visual experiences: evidence from
congenitally totally blind people. Neuroreport 2001;12:
2601–2604.
14. Vecchi T. Visuo-spatial imagery in congenitally totally blind
people. Memory 1998;6:91–102.
15. Vecchi T, Tinti C, Cornoldi C. Spatial memory and
integration processes in congenital blindness. Neuroreport
2004;15:2787–2790.
16. Sherman JA. Current map resources and existing map needs
for the blind. Twenty-third Annual Meeting of the American
Congress on Surveying and Mapping. Washington DC.
Gaithersburg, MD: American Congress on Surveying and
Mapping; 1963. pp 26–29.
17. Sherman JA. Needs and resources in maps for the blind. New
Outlook Blind 1965;59:130–134.
18. Wiedel JW, Groves PA. Tactual Mapping: design, reproduc-
tion, reading and interpretation. Washington, DC: Depart-
ment of Health, Education and Welfare; 1969. p 116.
(Reprinted as Occasional Paper in Geography No. 2,
Department of Geography, University of Maryland, 1972.).
19. Andrews SK. Spatial cognition through tactile maps. First
International Symposium on Maps and Graphics for the
Visually Handicapped. March 10–12, 1983. Washington, DC:
Association American Cartographer; 1983. pp 30–40.
20. Salisbury J, Sirinivasan M. Virtual environment Technology
for training (VETT). BBN Report No. 7661. Cambridge,
MA: VETREC, MIT; 1992.
21. Jacobson RD. Navigating maps with little or no sight: a novel
audio-tactile approach. Workshop on Content Visualization
and Intermedia Representations held at the University of
Montreal. 1998, August 15; Montreal, Quebec. New Bruns-
wick: Association for Computational Linguistics (ACL)
publishers; 1998. pp 95–102.
22. Clark J, Clark DD. Creating tactile maps for the blind using a
GIS. ASPRS/ACSM Annual Convention & Exposition, Bethesda, MD: J. Lyon, April 25, 1994, Reno, Nevada. pp 283–288.
23. Kawai Y, Kobayashi M, Minagawa H, Miyakawa M,
Tomita F. A support system for visually impaired persons
using three-dimensional virtual sound. Seventh International
Conference on Computer Helping People with Special Needs
(ICCHP). July 17–21, 2000, Karlsruhe. Wien: Österreichische Computer Gesellschaft Publishers; 2000. pp 327–334.
24. Parente P, Bishop G. BATS: the blind audio tactile mapping
system. Forty-first ACM Annual Southeast Regional Con-
ference held at Armstrong Atlantic State University, March
7–8, 2003. Savannah, Georgia.
25. Kramer G, Walker B, Bonebright TP, Cook J, Flowers N,
Miner J. Sonification report: status of the field and research
agenda. International Community for Auditory Display
(online). Internet. 1997. Electronic Citation. http://www.
icad.org/websiteV2.0/References/nsf.html. Last accessed on
January 23, 2008.
26. Kramer G, editor. Auditory display. Reading, MA: Addison-
Wesley; 1994.
27. Mansur DL, Blattner M, Joy K. Sound-Graphs: a numerical
data analysis method for the blind. J Med Syst 1985;
9:163–174.
28. Krueger MW, Gilden D. KnowWhere™: an audio/spatial interface for blind people. Fourth International Conference
for Auditory Display (ICAD) held at Xerox Palo Alto
Research Center (Parc). November 2–5, 1997, Palo Alto, CA.
29. Ramloll R, Yu W, Riedel B, Brewster SA. Using non-speech
sounds to improve access to 2D tabular numerical informa-
tion for visually impaired users. Fifteenth Annual conference
of British Computer Society (BCS) IHM-HCI, September
10–14, 2001, Lille. Berlin: Springer publishers; 2001.
pp 515–530.
30. Afonso A, Katz BFG, Blum A, Jacquemin C, Denis M. A
study of spatial cognition in an immersive virtual audio
environment: comparing blind and blindfolded individuals.
Eleventh Meeting of the International Conference on Audi-
tory Display (ICAD) held at the University of Limerick,
Ireland: Limerick, July 6–9 2005, Ireland. pp 228–235.
31. Wickens CD. Processing resources in attention. In:
Parasuraman R. Davies DR. editors. Varieties of attention.
New York: Academic Press Publishers; 1984. pp 63–101.
32. Zhao H, Shneiderman B, Plaisant C. Listening to choropleth
maps: interactive sonification of geo-referenced data for users
with vision impairment. In: Lazar J, editor. Universal usability.
New York: Hoboken John Wiley & Sons Ltd. Publishers;
2008. pp 141–174.
33. Zhao H, Smith BK, Norman K, Plaisant C, Shneiderman B.
Interactive sonification of choropleth maps: design and
evaluation. IEEE Multimed 2005;12:26–35.
34. Zhao H, Plaisant C, Shneiderman B, Lazar J. Data sonifica-
tion for users with visual impairment: a case study with geo-
referenced data. ACM T Hum Comput Interact 2008;15:
Article 4.
35. Fortin M, Voss P, Rainville C, Lassonde M, Lepore F. Impact
of vision on the development of topographical orientation
abilities. Neuroreport 2006;17:443–446.
36. Grush R. The emulation theory of representation: motor
control, imagery and perception. Behav Brain Sci 2004;
27:377–396.
37. Gaunet F, Martinez JL, Thinus-Blanc C. Early-blind subjects’
spatial representation of manipulatory space: exploratory
strategies and reaction to change. Perception 1997;
26:345–366.
38. Doucet ME, Gagné JP, Leclerc C, Lassonde M, Guillemot JP,
Lepore F. Blind subjects process auditory spectral cues more
efficiently than sighted individuals. Exp Brain Res 2005;
160:194–202.
39. Yabe T, Kaga K. Sound lateralization test in adolescent blind
individuals. Neuroreport 2005;16:939–942.
40. Tinti C, Adenzato M, Tamietto M, Cornoldi C. Visual
experience is not necessary for efficient survey spatial
cognition: evidence from blindness. Q J Exp Psychol 2006;
59:1306–1328.
41. Foulke E. Perception, cognition and the mobility of blind
pedestrians. In: Potegal M, editor. Spatial abilities: develop-
ment and physiological foundations. San Diego, CA: Aca-
demic Press; 1982. pp 55–76.
42. Millar S. Crossmodal and intersensory perception and the
blind. In: Walk RD, Pick HC, editors. Intersensory perception
and sensory integration. New York: Plenum; 1981.
pp 281–314.
... [187]. Delogu et al. [28] ont proposé une étu 59 struire la mise en page explorée [28] plus ludique que le clavier. ...
... [187]. Delogu et al. [28] ont proposé une étu 59 struire la mise en page explorée [28] plus ludique que le clavier. ...
... Delogu et al.[28], en accord avec Millar et al.[104], ont montré que tous les participants à leur étude, aveugles ou voyants, ont éprouvé des difficultés à obtenir des détails précis sur les cartes sonifiées.Les tactons sont des signaux vibro-tactiles abstraits structurés qui véhiculent des informations de différents paramètres (fréquence, amplitude, forme d'onde, durée, et le rythme)[15]. Leurs avantages résident dans la possibilité pour communiquer des informations, même lorsque le bruit ambiant est à un niveau trop élevé ou lorsque la vie privée doit être garantie. ...
Thesis
For people with visual impairments, raised-line documents are important for accessing knowledge. To accommodate the specificities of haptic perception, the amount of information available in a raised-line document must be reduced. This process, called content adaptation, relies on a specialist who knows how to design content suited to tactile exploration and to the users' skills. Because a raised-line document cannot be modified, it must be remade after every change. Being able to use and explore digital rather than physical content is therefore an attractive alternative: a large amount of freely accessible digital data exists, and data adaptation can be performed by purpose-built algorithms. However, digital documents are inherently visual, and are therefore inaccessible to visually impaired people as they stand. The state of the art on the haptic exploration of digital data consists mainly of solutions relying on an artefact (such as a force-feedback mouse). This approach has many limitations, notably that the document is explored through a single point of contact. A more functional approach lets users explore a digital graphic with their hands: each finger can then be treated as a cursor, with auditory and vibratory feedback triggered according to the finger's position on the digital document (see the sketch after this abstract). This, however, requires knowing the important elements of a graphic (i.e., the elements that will trigger feedback), as well as the role of each finger during exploration. This thesis addresses the exploration of digital spatial data through haptic interaction. Its goal is twofold: 1) to understand how visually impaired people explore raised-line graphics; and 2) to propose haptic interaction techniques, based on personal, portable devices, that allow visually impaired users to explore digital graphics in different contexts (at home or at school, for example). For the first point, we conducted two studies of exploration strategies for raised-line documents as a function of users' expertise, and showed that the strategies used depend on expertise but also on the type of document explored. For the second point, we proposed new interaction techniques based on a smartwatch providing localized feedback, and conducted two experimental studies on the design and evaluation of watch-based haptic interaction techniques. The first study compared the exploration of physical (raised-line) graphics with that of virtual digital graphics; the results show that two of our interaction techniques allow faster exploration of digital graphics than of raised-line graphics. The second study developed and evaluated interaction techniques allowing both hands to be used when exploring digital content; the results show that bimanual strategies relying on localized bilateral feedback improve exploration performance on digital graphics.
The results of these studies highlight the advantages of bimanual haptic interaction. Combined with a device for locating and tracking the hands, the interaction techniques developed on a smartwatch could allow users to interact with digital content in many everyday situations.
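The finger-as-cursor principle described in this abstract can be made concrete with a minimal Python sketch. It assumes the map is a plain 2D grid of named region labels, and uses print() calls as stand-ins for real text-to-speech and vibration output; it is an illustration of the idea, not the thesis's actual implementation.

from dataclasses import dataclass, field

@dataclass
class MapExplorer:
    grid: list                    # e.g. [["France", "Italy"], ["Spain", "Italy"]]
    cell_size: float = 50.0       # touch pixels per grid cell (assumed)
    last_region: dict = field(default_factory=dict)   # per-finger state

    def region_at(self, x, y):
        """Return the region name under (x, y), or None outside the map."""
        row, col = int(y // self.cell_size), int(x // self.cell_size)
        if 0 <= row < len(self.grid) and 0 <= col < len(self.grid[0]):
            return self.grid[row][col]
        return None

    def on_touch(self, finger_id, x, y):
        """Handle a touch-move event; each finger acts as a cursor."""
        region = self.region_at(x, y)
        if region != self.last_region.get(finger_id):
            self.last_region[finger_id] = region
            if region is None:
                self.vibrate(finger_id)    # the finger left the map
            else:
                self.speak(region)         # the finger entered a new region

    def speak(self, text):
        print(f"[speech] {text}")          # stand-in for a text-to-speech call

    def vibrate(self, finger_id):
        print(f"[vibration] finger {finger_id}")   # stand-in for haptic output

m = MapExplorer(grid=[["France", "Italy"], ["Spain", "Italy"]])
m.on_touch(0, x=60, y=10)    # prints "[speech] Italy"
m.on_touch(0, x=200, y=10)   # prints "[vibration] finger 0" (off the map)

Keeping per-finger state is what lets every finger act as an independent cursor: feedback fires only when a finger crosses a boundary, not continuously.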
... While these studies offer tacit design hypotheses about accessible visualization, empirical studies can provide direct insight into understanding accessible visualizations [23,31,59]. For example, Delogu et al. ...
... For example, Delogu et al. [23] found that integrating sonification into maps could negate performance differences between sighted and nonsighted users. Yang et al. ...
... Literature generally indicates that persons with blindness (PWBs) are capable of constructing cognitive maps from multimodal spatial information. Multimodal information may even be more useful than information from a single modality, since it has led to more effective cognitive map construction (Brayda et al., 2019; Delogu et al., 2010; Ducasse et al., 2018; Grussenmeyer et al., 2016; Papadopoulos and Barouti, 2015; Simonnet et al., 2012; Yatani et al., 2012). The combination of all senses would most likely lead to better spatial knowledge than information from fewer senses (Papadopoulos et al., 2017b). ...
Article
For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we sought to give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review is focused on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we addressed the implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs somewhat between PWBs and sighted persons. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, PWBs and sighted persons seem to be able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations for these inconsistencies.
... Some studies suggest that PVIs perform worse on spatial tasks than sighted persons 13,20,32 . There is also research, however, that shows PVIs performing similar to 56,58,59 or even better than 13,21,46,60 sighted persons in spatial cognition. Furthermore, some studies suggest that there are differences between PVIs who became blind very early and those who became blind later in life 21,30,31 . ...
Article
The human brain can form cognitive maps of a spatial environment, which can support wayfinding. In this study, we investigated cognitive map formation of an environment presented in the tactile modality, in visually impaired and sighted persons. In addition, we assessed the acquisition of route and survey knowledge. Ten persons with a visual impairment (PVIs) and ten sighted control participants learned a tactile map of a city-like environment. The map included five marked locations associated with different items. Participants subsequently estimated distances between item pairs, performed a direction pointing task, reproduced routes between items and recalled item locations. In addition, we conducted questionnaires to assess general navigational abilities and the use of route or survey strategies. Overall, participants in both groups performed well on the spatial tasks. Our results did not show differences in performance between PVIs and sighted persons, indicating that both groups formed an equally accurate cognitive map. Furthermore, we found that the groups generally used similar navigational strategies, which correlated with performance on some of the tasks, and acquired similar and accurate route and survey knowledge. We therefore suggest that PVIs are able to employ a route as well as survey strategy if they have the opportunity to access route-like as well as map-like information such as on a tactile map.
... Advances in the field of hardware and software can also be included. Some examples of AT are sensors for the early detection of obstacles (ONG; ZHANG; NEE, 2013), assistive listening devices (WITTICH; SOUTHALL; JOHNSON, 2015), and sonification to provide access to geographic maps (DELOGU et al., 2010), among others, which improve the quality of life of people with visual dysfunction by facilitating daily activities. ...
Article
Many people with total or partial visual impairment can use assistive technology (AT) to facilitate activities of daily living. Smartphones and, especially, their applications can serve as an easily accessible and applicable form of AT. The aim of this study was to develop an application for use on a smartphone or tablet to improve the visual ability of people with low visual acuity. The software, called Oftcam, was developed for the ANDROID operating system, written in Java for Android using JDK 1.7 and supporting a minimum version of Android 2.2. Its operating mechanism includes capturing and adjusting the image of interest according to the users' needs: magnification, background change, and decentration of the image of interest. This free, easy-to-handle application will enable integration between the user and the assisting professional, serving, in practice, as a mobile health resource. Considering that most people have increasing access to phones and tablets, we believe this application is a good alternative for integrating need and practicality into the daily lives of visually impaired people.
... However, most instances of sonification research for the visually impaired are for assistance with daily life and navigation (Velázquez, 2010;Mascetti et al., 2016), and studies of engagement with the sonification of geographic or scientific (e.g., gas particle models) data show promising results but are often in the exploratory or small-sample-size stages (Delogu et al., 2010;Levy and Lahav, 2012;Weir et al., 2012). While further research into data sonification can help to quantify the learning benefits for both sighted and visually impaired individuals, the modality certainly offers engaging ways for the visually impaired to interact with informal learning environments and scientific textbook studies that would otherwise be inaccessible to them. ...
Article
Sharing the complex narratives within scientific data in an intuitive fashion has proven difficult, especially for communicators endeavoring to reach a wide audience comprised of individuals with differing levels of scientific knowledge and mathematical ability. We discuss the application of data sonification—the process of translating data into sound, sometimes in a musical context—as a method of overcoming barriers to science communication. Data sonification can convey large datasets with many dimensions in an efficient and engaging way that reduces scientific literacy and numeracy barriers to understanding the underlying scientific data. This method is particularly beneficial for its ability to portray scientific data to those with visual impairments, who are often unable to engage with traditional data visualizations. We explore the applications of data sonification for science communicators and researchers alike, as well as considerations for making sonified data accessible and engaging to broad audiences with diverse levels of expertise.
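To make the idea of data sonification more concrete, here is a minimal parameter-mapping sketch in Python: each value in a data series is mapped linearly to the pitch of a short sine tone, and the result is written to a WAV file. The pitch range, note duration, and file name are illustrative choices, not taken from the article.

import math, struct, wave

def sonify(values, out_path="sonified.wav", low_hz=220.0, high_hz=880.0,
           note_s=0.25, rate=44100):
    """Map each value linearly to pitch and render one sine tone per value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                 # avoid division by zero
    frames = bytearray()
    for v in values:
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        for n in range(int(note_s * rate)):
            sample = math.sin(2 * math.pi * freq * n / rate)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))  # 16-bit mono
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

sonify([3, 5, 2, 8, 6])   # rising data values are heard as rising pitch

This pitch mapping is only one point in the design space; real sonification tools also map data to loudness, timbre, tempo, or spatial position.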
... In the touchscreen exploration of the sonified map (see Procedure), crossing from the United States border into "nothing" was indicated by a sound resembling a guitar switch. In the keyboard exploration of the sonified map (see Procedure), reaching the border was instead indicated by a voice announcing the name of the border (for example, "left border") (as in Delogu et al., 2010). ...
Thesis
My thesis work focuses on: a) the analysis of audio-tactile binding problems (experiments in the linguistic domain) (Ch. 1); b) the analysis and investigation of the audio-tactile temporal binding window (experiments with non-predictable unisensory and multisensory audio-tactile streams) (Ch. 2); and c) sensory substitution theory (specifically, the possibility of conveying auditory linguistic information through a tactile sensory substitution device) (Ch. 3). The aim of the work was to find a sensory substitution device (tactile stimulation) that can effectively help hearing-impaired people better understand speech. The results did not confirm an influence of tactile stimulation on auditory perception and did not show evidence of audio-tactile binding. In the discussion I analysed the methodological limits (stimuli, procedure) and the gaps in current knowledge that are crucial to the aims of this work (e.g., insufficient knowledge of the auditory perception of speech, temporal plasticity, and the duration and procedures of training).
Conference Paper
In this article we present Audiograph, a web-based auditory graph application that allows users to sonify numerical data directly from a spreadsheet, using Google Sheets. One or more data series can be converted into sound at the same time. Our tool is implemented as an add-on inside Google Sheets using the powerful sound synthesis engine FAUST. We chose FAUST as the sonification platform because of its highly efficient rendering of digital sound processing algorithms and its integration with WebAssembly, which makes it possible to use any modern web browser in a much more powerful way than with previous JavaScript-based technologies.
Chapter
Digital Interactive Maps on touch surfaces are a convenient alternative to physical raised-line maps for users with visual impairments. To compensate for the absence of passive tactile information, they provide vibrotactile and auditory feedback. However, this feedback is ambiguous when using multiple fingers, since users may not be able to identify which finger triggered it. To address this issue, we explored the use of bilateral feedback, i.e. feedback collocated with each hand, for two-handed map exploration. We first introduced a design space of feedback for two-handed interaction combining two dimensions: spatial location (unilateral vs. bilateral feedback) and similarity (same vs. different feedback). We implemented four techniques resulting from our design space, using one or two smartwatches worn on the wrist (unilateral vs. bilateral feedback, respectively). A first study with fifteen blindfolded participants showed that bilateral feedback outperformed unilateral feedback and that feedback similarity has little influence on exploration performance. We then conducted a second study with twelve users with visual impairments, which confirmed the advantage of two-handed over one-handed exploration, and of bilateral over unilateral feedback. The results also bring to light the impact of feedback on exploration strategies.
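The core routing idea behind bilateral feedback can be sketched in a few lines of Python. The Watch class and its vibrate() method are hypothetical stand-ins for a real smartwatch API, and the event-to-pattern mapping is invented for illustration; the point is simply that feedback is dispatched to the watch on the hand that caused the event.

class Watch:
    """Hypothetical stand-in for a smartwatch's haptic API."""
    def __init__(self, side):
        self.side = side
    def vibrate(self, pattern):
        print(f"[{self.side} watch] vibrate {pattern}")

class BilateralFeedback:
    def __init__(self):
        self.watches = {"left": Watch("left"), "right": Watch("right")}

    def on_map_event(self, hand, event):
        # Bilateral feedback: the vibration is collocated with the hand
        # that triggered it, so the user never has to guess which finger
        # caused the feedback. Patterns (in ms) are illustrative.
        pattern = {"border": [100, 50, 100], "region": [200]}[event]
        self.watches[hand].vibrate(pattern)

fb = BilateralFeedback()
fb.on_map_event("left", "border")   # left hand crossed a border
fb.on_map_event("right", "region")  # right hand entered a region

A unilateral variant would route every event to a single watch regardless of which hand moved, which is exactly the ambiguity the study found to hurt performance.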
Article
Knowledge of locations and relationships shapes our cognitive images of the environment and affects spatial decision making and behavior. Theoretical and empirical literature on cognition and spatial ability questions whether the visually impaired possess a workable spatial schema. Results obtained from visually impaired subjects in studies of tactual mobility, thematic and general reference maps show how tactually mapped information increases geographic knowledge, enhances environmental perspectives, facilitates spatial decision-making tasks and can be used to form complex spatial constructs.
Chapter
Performance by the blind has been of interest in understanding cross-modal recognition since Molyneux asked his celebrated question whether a blind man, made to see, would recognize by sight alone an object that he had hitherto perceived only through touch. Von Senden (1960) suggested that there is little transfer. But for complete restoration of sight some preoperative residual vision is necessary (Rapin, 1979; Riesen, 1975). Gregory and Wallace’s (1963) patient had light perception preoperatively. After the corneal graft that restored his sight, he recognized uppercase letters that he had previously learned only through touch. But, despite an interest in tools, he could not easily identify relatively unfamiliar tools until after he had explored them by touch. Gregory (1974, p. 106) suggests that although his patient “came to use vision his ideas of the world arose from touch.”
Article
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.
Article
Blind individuals must compensate for the lack of visual information with other sensory inputs; auditory inputs in particular are crucial to them. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former. NeuroReport 16:939-942 © 2005 Lippincott Williams & Wilkins.
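For readers unfamiliar with sound lateralization, the stimulus manipulation can be illustrated with a short Python sketch: the same tone is written to both channels of a stereo file, with one channel delayed by a configurable interaural time difference (ITD), so the sound is heard toward the leading ear. All parameter values here are illustrative, not those used in the study.

import math, struct, wave

def itd_tone(itd_us=500, freq=500.0, dur_s=0.5, rate=44100,
             out_path="itd_tone.wav"):
    """Write a stereo tone whose right channel lags the left by itd_us."""
    delay = int(rate * itd_us / 1_000_000)   # ITD converted to samples
    frames = bytearray()
    for i in range(int(dur_s * rate)):
        left = math.sin(2 * math.pi * freq * i / rate)
        j = i - delay                         # right channel starts later
        right = math.sin(2 * math.pi * freq * j / rate) if j >= 0 else 0.0
        frames += struct.pack("<hh", int(left * 16000), int(right * 16000))
    with wave.open(out_path, "wb") as w:
        w.setnchannels(2)        # interleaved stereo
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

itd_tone()   # the 500 µs lead in the left channel pulls the tone leftward

A discrimination threshold is then the smallest ITD at which a listener can reliably tell which ear is leading; the study above found this threshold to be smaller in blind adolescents than in controls.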
Article
Philosophers and psychologists have sometimes argued that traditional distinctions between the spatial senses and, for that matter, between afferent sensory perception and efferent motor control are arbitrary and unhelpful (e.g., Bornstein, 1936; Freedman, 1968; von Hornbostel, 1927). Conceptual and experimental isolation of visual, auditory, and somesthetic processes (ultimately based upon Müller's so-called "law of specific nervous energies") led, undoubtedly, to a tremendous increase in knowledge of peripheral sensory physiology and to more or less detailed descriptions of sensory pathways to the central nervous system. Yet such work seemed to imply that human beings and other creatures see, hear, feel, and so on as isolated independent acts, as though individuals could only be known to each other as distinct independent visual, auditory, and sentient persons. Moreover ordinary language does not make the distinctions between modality dimensions which any treatment of, say, sight, hearing, and touch as isolated and distinct ways of knowing the world would seem to require. (Ordinary language is in fact shot through with synesthetic comparisons. See, e.g., Marks, 1975, for a recent discussion of synesthesia.) Of course, knowledge is perceptually based, but it is not obvious that it is visually based or (pace Berkeley) tactually based.