
Enhancing Navigation Skills through Audio Gaming


Abstract

We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks.
Keywords
Orientation and Mobility, Virtual Environment, Visual Impairment, Audio Games, Videogames
ACM Classification Keywords
K.4.2 [Computing Milieux]: Computers and Society – Social Issues: Assistive technologies for persons with disabilities
General Terms
Design, Experimentation
Introduction
Several different approaches have been developed to assist the blind with orientation and mobility (O&M).
Copyright is held by the author/owner(s).
CHI 2010, April 10–15, 2010, Atlanta, Georgia, USA.
ACM 978-1-60558-930-5/10/04.
Jaime Sánchez
Department of Computer Science
Center for Advanced Research in Education (CARE)
University of Chile
Blanco Encalada 2120.
Santiago, Chile
Mauricio Sáenz
Department of Computer Science
Center for Advanced Research in Education (CARE)
University of Chile
Blanco Encalada 2120.
Santiago, Chile
Alvaro Pascual-Leone
Berenson-Allen Center for Noninvasive Brain Stimulation,
Department of Neurology, Beth Israel Deaconess Medical
Center, Harvard Medical School
Lotfi Merabet
Berenson-Allen Center for Noninvasive Brain Stimulation,
Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School
CHI 2010: Work-in-Progress (Spotlight on Posters Days 3 & 4)
April 14–15, 2010, Atlanta, GA, USA
One possibility to help blind individuals become more autonomous
is to provide them with virtual-based training which
could ultimately be transferred to real world settings.
Along these lines, a number of studies have used virtual environment simulators that allow a blind user to interact through both audio [1], [4] and tactile cues [6].
Another possibility would be through the use of audio-
based games. Some studies have pointed out the
importance of gaming for improving problem solving
skills [10]. Moreover, the possibility of using games for learning in pedagogical contexts opens up enormous opportunities to bring education closer to students' everyday life experiences, increasing motivation and commitment to learning while better accommodating students' current learning styles [2].
There have been numerous insights gained from the
design and use of videogames for visually impaired
people. When legally blind people interact with
videogames that include visual cues, they take
advantage of whatever residual vision they have in
order to achieve better results from the interaction
(e.g. through the use of high magnification). Certainly,
for a totally blind user this is not possible and it is thus
necessary to provide them with relevant information
regarding the environment through other sensory
channels such as touch and hearing [7]. There are also
studies implementing videogames for learning
mathematics in blind students [7]. Another study has
used audio-based gaming to reinforce science concepts
in a ludic environment for visually impaired children
[8]. As the child interacts with the game to fulfill the
underlying mission, he/she develops problem-solving
skills while learning science curriculum. Other
videogames assisted the development of spatial
knowledge in blind children [5]. Therefore, if
videogames can improve the development of different
types of skills, can they also improve the development
of navigation skills in blind children? The development
of orientation and mobility skills (O&M) is essential for
the autonomous navigation of a blind user.
The purpose of this research was to evaluate an audio-
based virtual environment simulator developed by our
group called Audio-based Environment Simulator
(AbES) designed to improve orientation and mobility
skills in blind users.
The simulator was developed to represent a real,
familiar or unfamiliar environment to be navigated by a
blind person. In the virtual environment, there are
different elements and objects (walls, stairwells, doors,
toilets or elevators) through which the user can
discover and come to know his/her location.
The simulator is capable of representing any real
environment by using a system of cells through which
the user moves. The user has audio feedback in the
left, center and right side channels, and all his/her
actions are carried out through the use of a traditional
keyboard, where a set of keys have an associated
action. All of the actions in the virtual environment
have a particular sound associated to them. In addition
to this audio feedback, there are also spoken audio
cues that provide information regarding the various
objects and the user’s orientation in the environment.
Orientation is provided by identifying the room in which
the user is located and the direction in which he/she is
facing, according to the cardinal compass points (east,
west, north and south). AbES includes three modes of interaction: Free Navigation, Path Navigation, and Game.
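The cell-and-cue mechanics described above can be illustrated with a brief sketch. This is our own Python approximation under stated assumptions: the `Player` class, the sound file names, and the grid layout are invented for illustration and do not come from the AbES implementation.

```python
# Illustrative sketch (not AbES code): a player moves cell by cell,
# every action returns an associated sound, and turning yields a
# spoken orientation cue based on the cardinal compass points.
from dataclasses import dataclass

HEADINGS = ["north", "east", "south", "west"]            # clockwise order
MOVES = {"north": (0, -1), "east": (1, 0), "south": (0, 1), "west": (-1, 0)}

@dataclass
class Player:
    x: int
    y: int
    heading: str = "north"

    def turn(self, direction: str) -> str:
        """Rotate left or right; return the spoken orientation cue."""
        step = 1 if direction == "right" else -1
        self.heading = HEADINGS[(HEADINGS.index(self.heading) + step) % 4]
        return f"Facing {self.heading}"

    def step_forward(self, walls: set) -> str:
        """Advance one cell, or report a blocked cell with a sound cue."""
        dx, dy = MOVES[self.heading]
        target = (self.x + dx, self.y + dy)
        if target in walls:
            return "bump.wav"        # every action has an associated sound
        self.x, self.y = target
        return "footstep.wav"
```

In the same spirit, spoken cues could identify the current room whenever the player crosses a doorway cell.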
The free navigation mode provides the blind user with
the possibility of exploring the building freely in order
to become familiar with it (Figure 1). For a beginning user, we found it useful to include an option that opens all the doors in the building, making navigation simpler. Beginners also need to hear each instruction the simulator provides in full; for this reason, the “Allow Text-To-Speech to end before any action” option is included.
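The two beginner options just described amount to simple configuration flags. A hypothetical sketch follows; the field names are ours, not the actual AbES option names.

```python
# Hypothetical settings object for the beginner options described in
# the text; field names are illustrative, not taken from AbES.
from dataclasses import dataclass

@dataclass
class FreeNavigationSettings:
    all_doors_open: bool = True            # simplifies early exploration
    finish_tts_before_action: bool = True  # "Allow Text-To-Speech to end
                                           # before any action"

beginner = FreeNavigationSettings()
advanced = FreeNavigationSettings(all_doors_open=False,
                                  finish_tts_before_action=False)
```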
Path navigation provides the blind user with the task of
finding a particular room. The facilitator chooses the departure and arrival rooms and selects as many routes as he/she deems necessary. When all
the routes have been selected, the user begins his/her
interaction with the simulator and has to navigate all
the chosen paths, thus training in, surveying and
mapping the building.
The game mode provides blind users with the task of
searching for “jewels” placed in the building. The
purpose of the game is to explore the rooms and find
all the jewels, bringing them outside one at a time and
then going back into the building to continue exploring.
Enemies are randomly placed in the building, and try to
steal the user’s jewels and hide them elsewhere. A spoken audio warning is given when the user is facing a jewel or an enemy up to two cells away. The enemies
always remain inside the building. In this game mode,
the facilitator can choose the number of jewels to find
(two, four or six) and the number of monsters (two,
four or six).
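The proximity warning described above, triggered when the user faces a jewel or enemy up to two cells away, can be sketched as follows. This is our own reading of the text; the function and the warning messages are invented for illustration.

```python
# Illustrative check for the game-mode warning: scan one and two cells
# ahead of the player along the facing direction.
MOVES = {"north": (0, -1), "east": (1, 0), "south": (0, 1), "west": (-1, 0)}

def proximity_warning(pos, heading, jewels, enemies):
    """Return a spoken warning when a jewel or enemy lies within two
    cells directly ahead; None otherwise."""
    dx, dy = MOVES[heading]
    for dist in (1, 2):
        cell = (pos[0] + dx * dist, pos[1] + dy * dist)
        if cell in jewels:
            return "There is a jewel ahead"
        if cell in enemies:
            return "Careful, an enemy is ahead"
    return None
```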
Preliminary Cognitive Evaluation
The first part of the study included the participation of
seven children aged ten to twelve years old who attend
the Santa Lucia School for Blind Children in Santiago,
Chile. None of the participants had any other
neurological deficits and their visual status was
confirmed by their medical records or an
ophthalmological evaluation.
Checklists were designed for each of the activities.
These checklists were based on standard orientation
and mobility instruments [3] and contain both common
and specific indicators that measure different aspects of
the students’ levels of progress (spatial orientation,
spatial knowledge and spatial representation) when
working on the various activities. A Likert-type scale
with scores ranging from 1 (never) to 4 points (always)
was used to quantify the results and calculate
percentages of achievement for each indicator.
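The scoring procedure, 1 (never) to 4 (always) per indicator collapsed into an achievement percentage, can be reproduced with a one-line normalization. The paper does not give the exact formula, so the mean-over-maximum version below is our assumption.

```python
# Assumed normalization: mean Likert score expressed as a percentage of
# the maximum attainable score (the study's exact formula is not given).
def achievement_percentage(scores, max_score=4):
    """Collapse per-indicator Likert scores (1-4) into a percentage."""
    return 100.0 * sum(scores) / (max_score * len(scores))

print(achievement_percentage([4, 3, 4, 3]))  # -> 87.5
```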
All the activities with the children were carried out in 6
sessions lasting three hours and fifteen minutes each.
During this time, five activities were performed in which
the children participated by interacting with AbES. Each
of these activities was evaluated by considering the
attainment of three navigation skills: Spatial Orientation, the ability to locate oneself within the simulated map and move efficiently from one point to another; Spatial Knowledge, the ability to recognize, identify and remember the location of the elements found in the environment; and Spatial Representation, the ability to create a mental image of the space that has been navigated.
Figure 1. Real and virtual environments in
AbES. (A) The first floor plan of the St. Paul
building. (B) Virtual representation of the
same floor in AbES showing various objects
the user interacts with. (C) Floor plan of
the Santa Lucia building. (D) Virtual
representation of the first floor of the Santa
Lucia building.
Figure 2. (A) Tactile model representing
the space that the students will travel
through virtually using the AbES simulator.
(B) Blind participant exploring the model of
the real space through touch.
1. Initial interaction with a Concrete Model.
Previous studies have shown that children are likely
to understand certain processes more fully when
modeling and solving tasks using concrete materials
that supplement interactions with a virtual
environment [7]. In this study, this cognitive task consisted of exploring a concrete tactile model of the environment that would be navigated using AbES for the remainder of the sessions. This model
contains the building’s main structural divisions, as
well as the names of the different spaces written in
Braille (Figure 2). Once the exploration of the model
had been completed, the students had to construct
a spatial representation of the space through the
use of concrete materials such as plasticine, LEGO
or by making a drawing. In order to not
contaminate the sample, the maps used in this
stage were different from those used in later stages.
2. Free Navigation. To explore the environment, the students interacted with AbES in free navigation mode, traveling at their own pace through all of the
spaces represented. Once they had finished
exploring the virtual model, the students had to
recreate the spaces they could remember using
concrete material of their choice.
3. Path Navigation Mission. In this activity the
students had to start from a predetermined start
point (the computer room) and travel to four
different destinations distributed throughout the first
floor of the building (the massage therapy room, the
fourth grade classroom, the front hall and the early
childhood intervention room). They were instructed
to take the shortest possible route (Figure 3). Once
they had finished exploring the virtual model, the
students had to represent the spaces they could
remember through the use of concrete material.
4. The Game. In this activity the students had to
interact with AbES in the game mode, seeking out
the hidden jewels and bringing them to the school’s
inner yard. This activity allowed the students to take
different routes from those used in the “Path
Navigation Mission”, as they now knew of other
rooms and were constructing a mental image of the
spaces traveled. In this activity, the students
interacted for half of the time with the simulator and
during the other half of the time they performed the
same task in the real environment (Figure 3). This
way of interacting increases the participation and
motivation of the children, thus facilitating their
learning [9].
5. Concrete Representation. In this activity the
students interacted freely with the environment
(just as they had done in activity 2). Once this free
navigation was completed, they were asked to
represent the spaces traveled with concrete material
(Lego® bricks and drawings) (Figure 4).
In general, the students obtained high scores on all the activities. For Spatial Orientation, the five activities with the AbES simulator show very high achievement percentages (Concrete Model: 74%, Free Navigation: 84%, Path Navigation Mission: 81%, The Game: 77%, Concrete Representation: 80%), with Free Navigation obtaining the best results (Figure 5) (Chi Square = 1.895; dof = 4; p > 0.05), showing no evidence of a significant difference.
By navigating freely, the students are focused only on
the task of moving through the environment. At the
same time, the students ask themselves more
questions about the virtual surroundings, which allow
them to become even more oriented. Spatial knowledge
also has high scores for all the activities held (Concrete
Model: 81%, Free Navigation: 87%, Path Navigation
Mission: 80%, The Game: 88%, Concrete
Representation: 79%), with The Game activity having the highest achievement percentage (Chi Square = 9.714; dof = 4; p < 0.05) (Figure 5), a significant difference. In playing, the
students had to travel through the space while
Figure 3. Students playing to get the
jewel in the real environment
concentrating and paying close attention to details, as
the activity implied locating the jewel and bringing it to
the schoolyard. To do this, they not only had to know
where they were but also remember well the paths to
be able to get out and leave the jewel in the right
place, without being caught by the monster. All this
information was successfully transferred when they
played in the real environment.
Finally, the activity with the highest spatial
representation scores was The Game (Concrete Model:
42%, Free Navigation: 52%, Path Navigation Mission:
53%, The Game: 86%, Concrete Representation:
59%), which resulted in much higher scores than those obtained for the other activities, although the difference was not significant (Chi Square = 5.837; dof = 4; p > 0.05) (Figure 5). In playing at finding the jewel,
the students travel throughout the entire environment,
picking up information on the spaces and the objects
within the environment, and thus being able to
successfully play the game in the real spaces. In this
way, they are able to obtain more information from the
space and to improve their mental representation,
which is then successfully transferred to the real world.
In summary, the global activity that generated the best results is The Game (84%) (Figure 6) (Chi Square = 4.000; dof = 4; p > 0.05), although there is no evidence of a significant difference. When the students played,
they obtained better scores than when they performed
other activities with AbES. When playing, they
remained more concentrated and focused on fulfilling
the goals of the game, being able to pick up on more
information provided by the simulator, and in a more
efficient manner.
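The significance decisions reported above (dof = 4, from five activities) can be checked against the conventional chi-square critical value. The sketch below only reproduces the reject/accept decision from the reported statistics; it does not recompute them from the original checklist data, which are not given.

```python
# Chi-square significance check at alpha = 0.05 with dof = 4; 9.488 is
# the standard critical value for this case.
CRITICAL_CHI2_DOF4 = 9.488

reported = {
    "Spatial Orientation": 1.895,
    "Spatial Knowledge": 9.714,
    "Spatial Representation": 5.837,
    "Global (Figure 6)": 4.000,
}

for aspect, chi2 in reported.items():
    verdict = "p < 0.05" if chi2 > CRITICAL_CHI2_DOF4 else "p > 0.05"
    print(f"{aspect}: chi2 = {chi2} -> {verdict}")
```

Only Spatial Knowledge exceeds the critical value, matching the pattern of significance reported in the text.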
This research evaluated AbES, an audio-based virtual environment simulator developed by our group and designed to improve orientation and mobility skills in blind users. O&M training remains a mainstay
in blind rehabilitation and with systematic and rigorous
training, individuals with visual impairment can gain
functional independence. Here, we show that the creative use of interactive virtual navigation environments such as AbES, combined with other strategies, may provide the flexibility to adjust to a person’s own needs, strengths and weaknesses and so supplement their O&M training curricula.
Of particular note was the robust nature of spatial
cognitive information that could be obtained by
interacting with AbES in the gaming mode. We intended for users to be able to play and enjoy the game and, in doing so, learn to navigate their surrounding environment and understand the spatial organization and layout of its spaces, its dimensions and the corresponding objects. Key to this approach is the fact that this information is learned implicitly through gaming interactions rather than through explicit route learning.
As users became more skilled at playing AbES through
navigating freely at their own pace, they were in fact
laying the foundations for transferring virtual learning
to real world navigation. This game mode has been the
activity that generated the best results as far as the
students’ spatial representations (although there was
no evidence of significant difference), showing that this
kind of interaction requires them to focus on the tasks
that they are carrying out. It makes them more attentive, careful, and resourceful through constant inquiry about the places they are traveling through.
Figure 4. The students’ representations
made with concrete material. (A)
Representation with Legos® bricks. (B)
Drawn representation
Figure 5. Graphic shows the results
obtained by the students for the three
aspects evaluated (Spatial Orientation,
Spatial Knowledge and Spatial Representation).
They thus obtain more robust information regarding their surroundings, which translates into better transfer of this knowledge to the real physical world.
Future Work
We continue to investigate the feasibility, effectiveness,
and potential benefits of training and learning to
navigate unfamiliar environments using virtual
auditory-based gaming systems. In parallel, we are
also developing methods for quantifying behavioral
gains as well as uncovering brain mechanisms
associated with navigational skills. A key direction for
future research will be to understand what aspects of
acquired spatial information are actually transferred
from virtual to real environments, and the conditions
that promote this transfer. This implies the use of
experimental designs in order to clearly determine the
impact that the use of this technology has on the
development of navigation skills. We further propose that understanding how the brain creates the spatial cognitive maps used for navigation, how these maps evolve over time, and how they depend on an individual’s own experience and motivation will have potentially important repercussions for how rehabilitation is carried out and, ultimately, for an individual’s overall rehabilitative success.
Acknowledgments
This report was funded by the Chilean National Fund of Science and Technology, Fondecyt #1090352, and Project CIE-05, Program Center Education PBCT-Conicyt.
References
[1] Amandine, A., Katz, B., Blum, A., Jacquemin, C., Denis, M. (2005). A Study of Spatial Cognition in an Immersive Virtual Audio Environment: Comparing Blind and Blindfolded Individuals. Proc. of the 11th ICAD.
[2] Cipolla-Ficarra, F. (2007). A Study of Acteme on
users Unexpert of VideoGames. J. Jacko (Ed.): Human-
Computer Interaction, Part IV, HCII 2007, LNCS 4553,
pp. 215-224.
[3] González, F., Millán, L., Rodríguez, C. (2003).
Orientación y Movilidad. Apuntes del curso
“Psicomotricidad, y Orientación y Movilidad para la
persona con discapacidad visual”, VII semestre
Trastornos de la visión, Universidad Metropolitana de
Ciencias de la Educación. Unpaginated.
[4] Kehoe, A., Neff, F., Pitt, I. (2007) Extending
traditional user assistance systems to support an
auditory interface. Proc. of the 25th IASTED
International Multi-Conference: artificial intelligence
and applications, pp. 637-642.
[5] Lumbreras, M. & Sánchez, J. (1999). Interactive 3D
sound hyperstories for blind children. Proceedings of
the ACM-CHI '99, Pittsburgh, USA, pp. 318-325
[6] Murai, Y., Tatsumi, H., Nagai, N., Miyakawa, M.
(2006). A Haptic Interface for an Indoor-Walk-Guide
Simulator. K. Miesenberger et al. (Eds.); ICCHP 2006,
LNCS 4061, pp. 1287 – 1293
[7] Sánchez, J. (2008). User-Centered Technologies for
Blind Children. Human Technology Journal, 45(2).
[8] Sánchez, J., Elías, M. (2007). Science Learning by
Blind Children through Audio-Based Interactive
Software. 12th Annual CyberTherapy 2007 Conference:
Transforming Healthcare Through Technology, pp. 40
[9] Squire, K. (2003). Video games in education.
International Journal of Intelligent Simulations and
Gaming, 2(1), pp. 1-16.
[10] Steinkuehler, C. (2008). Cognition and literacy in
massively multiplayer online games. In Leu, D., Coiro,
J., Lankshear, C. & Knobel, K. (Eds.), Handbook of
Research on New Literacies. Mahwah, NJ: Erlbaum.
Figure 6. Total results obtained by the
students for each of the activities.
... Multiple studies show that persons with blindness (PWBs) can build a cognitive map of a comprehensive virtual environment with mostly nonverbal auditory feedback (Connors et al., 2014b(Connors et al., , 2014aSánchez et al., 2010), and after exploring routes with verbal as well as non-verbal feedback (Aziz, Stockman, and Stewart, 2022). After exploring the environment, participants were able to retrieve route information from their cognitive map. ...
... Moreover, exploring an auditory version of an environment has been shown to support wayfinding in the real-world space in a subset of studies. PWBs showed good transfer of spatial knowledge from the virtual to the real-world environment (Connors et al., 2014b(Connors et al., , 2014aLoeliger and Stockman, 2014;Sánchez et al., 2010). For instance, they could use the map layout learned in the virtual environment to reach target locations in the real environment (Connors et al., 2014b(Connors et al., , 2014a. ...
... An important consideration in this line of research, is that there is a substantial difference between learning a small-scale map, and directly experiencing a real-world environment. A tactile map is a simplified model of a real-world environment, and omits irrelevant information (Ungar, 2000), even more so than most auditory virtual environments (Aziz et al., 2022;Connors et al., 2014aConnors et al., , 2014bHalko et al., 2014;Sánchez et al., 2010). In addition, a tactile map can be explored much faster, and is perceived more allocentrically (from above) than a large-scale environment (Ungar, 2000). ...
For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we sought to give an overview of literature about cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review is focused on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we addressed implications of route and survey representations. Taking together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although the accuracy sometime somewhat differs between PWBs and sighted persons. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, PWBs and sighted persons seem to be able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.
... Therefore, we decided to name archetypical application scenarios to underline which generic spatial information was mainly used for each cluster. Generally speaking, VR applications for blind and visually impaired users can consist basically of any generic spatial information; for example, 3D haptic mathematical graphs in an education context (e.g., [19,30,[46][47][48][49][50][51][52]) or virtual proxies of real spaces ( [7,16,[41][42][43][53][54][55][56][57][58][59][60][61][62][63]). Especially, the knowledge gain and transfer in the latter context when training orientation and mobility aspects in VEs have been extensively analyzed. ...
... Some researchers also began with analyzing the transfer of virtually trained knowledge to navigation in the corresponding real places [75] or used other devices like commercial off-the-shelf products [76,77]. Beginning with [78,79], the former was intensively reached by Orly Lahav and Jaime Sanchez throughout the mid-2010s (e.g., [34,58,62,68,80,81]). During this period, audio rendering also became much more powerful [82], making it possible to walk through and understand unknown in the absence of haptic feedback [16,55,63]. ...
... Instead of individual and rare special laboratory prototypes (as has often been the case to date), a common software and hardware platform should be targeted and used in the long term, which could be useful for sighted, blind, visually impaired and people with other impairments. [16,35,42,54,55,59,60,62,74,79,81,85,[132][133][134][135][136][137][138][139][140][141][142][143][144][145][146][147][148][149] VR is undoubtedly a very modern and high-tech way to convey spatial information, often it is simply more practical to use simpler and less complex approaches. The high-level taxonomy presented here cannot yet make a definite statement in this context, but is intended to support future work in this field by helping researchers to further develop their ideas as good as possible using existing knowledge. ...
Full-text available
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people like blind or visually impaired people. While overcoming physical objects' and spaces' limitations, virtual objects and environments that can be spatially explored have a particular benefit. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspective of technology, interaction and application. In this respect, we introduced a classification into small-, medium-and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that especially grounded force feedback devices for haptic feedback ('small scale') were strongly researched in different applications scenarios and mainly from an exocentric perspective, but there are also increasingly physically ('medium scale') or avatar-walkable ('large scale') egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones or nowadays consumer grade VR components represent a promising potential for further improvements. Our survey paper provides a database on related work to foster the creation process of new ideas and approaches for both technical and methodological aspects.
... Mobility and wayfinding areas are known as Orientation and Mobility (O&M) (WIENER et al., 2010;WELCH et al., 2016). In this area, the study Sánchez & Sáenz (2014) presents the application Audiopolis ( Figure 3), a video game for navigating throw a virtual city interacting with audio and haptic interfaces. The Audiopolis also has the goal of improving evolving users' cognitive skills. ...
... A user playing Audiopolis Source:Sánchez & Sáenz (2014). ...
Full-text available
Visual disability has a major impact on people’s quality of life. Although there are many technologies to assist people who are blind, most of them do not necessarily guarantee the effectiveness of the intended use. As part of research developed at the University of Chile since 1994, we investigate the interfaces for people who are blind regarding a gap in cognitive impact, which include a broad spectrum of the cognitive process. In this work, first, a systematic literature review concerning the cognitive impact evaluation of multimodal interface for people who are blind was conducted. The study selection criteria include the papers which present technology with a multimodal interfaces for people who are blind that use a method to evaluate the cognitive impact of interfaces. Then, the results of the systematic literature review were reported with the purpose of understanding how the cognitive impact is currently evaluated when using multimodal interfaces for people who are blind. Among forty-seven papers retrieved from the systematic review, a high diversity of experiments was found. Some of them do not present the data results clearly and do not apply a statistical method to guarantee the results. Besides this, other points related to the experiments were analyzed. The conclusion was there is a need to better plan and present data from experiments on technologies for cognition of people who are blind. Moreover, this work also presented a data qualitative analysis based on the Grounded Theory based method to complement and enrich the systematic review results. Finally, a set of guidelines to conduct experiments concerning the cognitive impact evaluation of multimodal interfaces for people who are blind are presented.
... Simulator [32] [33] ...
Conference Paper
Full-text available
This paper aims to review the issues related to designing rich interaction paradigms in the context of serious games and 3D virtual environments for educational purposes. Serious games are applications specifically designed for training users to acquire particular skill sets. They possess the ability to draw the player's attention and give him the feeling of uniqueness and individualization in the environment, building knowledge based on connecting a new experience with a previous one. The effectiveness of a virtual environment is influenced by various factors, such as sense of presence (the feeling of being immersed in an environment although not being actually there), mental and emotional involvement, motivation, sense and flow and achievement etc. We also discuss the design principles related to creating complex environments and setting the game's goals and objectives, in order to ensure the player's progress and a pleasant, enriching user experience. In the near future, we aim to design a game with hierarchical levels of difficulty that would provide auditory and haptic stimuli and follow general game design and learning principles, such as learning by doing or experimenting, reflection and meta reflection - that is the learning transfer from virtual to live contexts. Particular attention will be dedicated to level design, immersion (by emphasizing the role of the player as the protagonist - offering the feeling that the training is addressed to him, interaction design - sense of control through relevant provided by the game and motivation that is triggered by necessity, fun, curiosity, interest and sense of achievement. Moreover, the highest focus will be concentrated towards ensuring a sense of flow, in order to make the player feel at ease with the task and skills achievement. Our contribution will be two-fold and materializes in improving the blind subjects' sound localization and navigational skills and providing
... Some studies suggest that PVIs perform worse on spatial tasks than sighted persons 13,20,32 . There is also research, however, that shows that PVIs perform similar 56,58,59 or even better 13,21,46,60 than sighted persons considering spatial cognition. Furthermore, some studies suggest that there are differences between PVIs who became blind very early or later in life 21,30,31 . ...
The human brain can form cognitive maps of a spatial environment, which can support wayfinding. In this study, we investigated cognitive map formation of an environment presented in the tactile modality, in visually impaired and sighted persons. In addition, we assessed the acquisition of route and survey knowledge. Ten persons with a visual impairment (PVIs) and ten sighted control participants learned a tactile map of a city-like environment. The map included five marked locations associated with different items. Participants subsequently estimated distances between item pairs, performed a direction pointing task, reproduced routes between items and recalled item locations. In addition, we conducted questionnaires to assess general navigational abilities and the use of route or survey strategies. Overall, participants in both groups performed well on the spatial tasks. Our results did not show differences in performance between PVIs and sighted persons, indicating that both groups formed an equally accurate cognitive map. Furthermore, we found that the groups generally used similar navigational strategies, which correlated with performance on some of the tasks, and acquired similar and accurate route and survey knowledge. We therefore suggest that PVIs are able to employ a route as well as survey strategy if they have the opportunity to access route-like as well as map-like information such as on a tactile map.
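Direction pointing tasks of the kind described above are commonly scored as absolute angular error between the true and the indicated bearing. A minimal sketch of such scoring (the scheme is illustrative, not the study's exact analysis):

```python
import math  # imported for symmetry with typical geometry code; not strictly needed here

def angular_error(true_bearing_deg, pointed_bearing_deg):
    """Smallest absolute difference between two compass bearings (0-360 deg)."""
    diff = abs(true_bearing_deg - pointed_bearing_deg) % 360.0
    return min(diff, 360.0 - diff)

def mean_pointing_error(trials):
    """Average angular error over (true, pointed) bearing pairs."""
    return sum(angular_error(t, p) for t, p in trials) / len(trials)

# Three hypothetical trials; note the wrap-around case 350 deg vs 10 deg
trials = [(90.0, 100.0), (180.0, 170.0), (350.0, 10.0)]
print(mean_pointing_error(trials))  # mean of 10, 10 and 20 degrees
```

Taking the minimum over the wrap-around keeps an error of 340° from being counted instead of the true 20°.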
... Other examples can be found in the work of Professor Jaime Sánchez of the University of Chile, such as the development of software, educational games, and 3D virtual environments that use spatialized sound to support the learning and cognition of visually impaired users, most notably his work on spatial navigation [27][28][29]. ...
Conference Paper
The purpose of this paper is to report a predictive evaluation methodology for educational software, focusing on accessibility criteria for visually impaired people. The methodology embraces concepts of predictive evaluation, centers on the GOMS method, and is intended for teachers who need to select and evaluate educational software. Experimental results with a blind evaluator show that the methodology was well received and that the evaluation form presented in this paper was effective for checking how applicable a product is to the outlined goals. We hope this work will serve as an instrument of support for the decision-making process of choosing accessible software tailored to the needs of educational institutions. 1. INTRODUCTION Data from the Basic Education Census [16] reveal the presence of more than 620 thousand students with some type of disability in regular classrooms (included students) and 199 thousand students in specialized schools.
The statistics reflect changes brought about by the National Education Guidelines and Framework Law [5], which establishes that special education should preferably be offered within the regular school network rather than being confined to specialized schools. Since then, schools have sought to adapt these students to the regular classroom. Architectural changes, teacher training, and the acquisition of adapted work and study tools seek to guarantee accessibility in the school environment, a concept defined by Garcia [10] as all the possibilities enjoyed by a student with a disability that allow them to attend and relate to the academic community. Learning resources should not be restricted to toys, books, and tools for calculating and writing. Assistive technologies on computing devices such as tablets, smartphones, computers, and digital whiteboards, which attract students through digital content, simulations, and interactive activities, should also be taken into account. Another example of digital technologies commonly used in educational settings is educational software, which fosters the learner's cognitive development through interaction and play. According to Oliveira, Costa, and Moreira [21], what characterizes educational software is its didactic character. In this way it is distinguished from software that was not created to support teaching and learning processes but that can nevertheless be brought into a didactic context, such as text editors and spreadsheets. Examples of educational software include CAI (Computer-Assisted Instruction), intelligent software, tutorials, simulations, and educational games.
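The GOMS family mentioned above includes the Keystroke-Level Model (KLM), which predicts expert task time by summing per-operator times. A minimal sketch (operator values are the commonly cited Card, Moran & Newell estimates, not values taken from the paper):

```python
# Keystroke-Level Model (KLM) operator times in seconds, after
# Card, Moran & Newell; actual values vary with user expertise.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_task_time(operator_sequence):
    """Sum KLM operator times for a task, e.g. 'MPK' = think, point, click."""
    return sum(KLM_TIMES[op] for op in operator_sequence)

# Selecting a menu item: mental preparation, point, click
print(round(predict_task_time("MPK"), 2))  # 2.73
```

For a non-visual interface the same bookkeeping applies, but operator times would need to be re-estimated for screen-reader or keyboard-only interaction.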
... The manipulation of aural information has been reported as an effective way to interact with a complex auditory space [5]. This notion is supported by later research suggesting that spatialized sound facilitates the accurate reconstruction of space [6], whereas audio-based interfaces help users correctly navigate virtual worlds and transfer the acquired spatial information to real-life situations [7]. Other research focuses on the natural properties of sound, suggesting that multi-layered sonic interaction helps users gain and retain attention on the appropriate information and subsequently relate that information to a larger system of conceptual knowledge [8], as well as showing great potential for helping users become more proficient at fine movements and the complicated manipulation of tools [9]. ...
Augmented Reality Audio Games (ARAGs) enrich the physical world with virtual sounds to express their content and mechanics. Existing ARAG implementations have focused on exploring the surroundings and navigating to virtual sound sources as the main mode of interaction. This paper suggests that gestural activity with a handheld device can realize complex modes of sonic interaction in the augmented environment, resulting in an enhanced immersive game experience. The ARAG “Audio Legends” was designed and tested to evaluate the usability and immersion of a system featuring an exploration phase based on auditory navigation, as well as an action phase in which players aim at virtual sonic targets and wave the device to hit them or hold the device to block them. The results of the experiment provide evidence that players easily become accustomed to auditory navigation and that gestural sonic interaction is perceived as difficult, yet this does not negatively affect the system's usability or players' immersion. Findings also indicate that elements such as sound design, the synchronization of sound and gesture, the fidelity of audio augmentation, and environmental conditions significantly affect the game experience, whereas background factors such as age, sex, and game or music experience have no critical impact.
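Aiming at a virtual sonic target, as in the action phase described above, can be modeled as testing whether the device heading falls within a tolerance cone around the source bearing. A minimal sketch, assuming a hypothetical 15° half-angle that is not taken from the game:

```python
def is_on_target(device_heading_deg, target_bearing_deg, cone_half_angle_deg=15.0):
    """True if the device points within a tolerance cone of the sound source."""
    diff = abs(device_heading_deg - target_bearing_deg) % 360.0
    # Use the shorter way around the circle so 355 deg and 10 deg count as close
    return min(diff, 360.0 - diff) <= cone_half_angle_deg

print(is_on_target(10.0, 355.0))   # True: the headings are 15 deg apart
print(is_on_target(90.0, 180.0))   # False: 90 deg apart
```

A wider cone makes aiming easier; tightening it raises the difficulty of the gestural interaction the study found players struggled with.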
Background: Obesity is one of the greatest modern public health problems, due to the associated health and economic consequences. Decreased physical activity is one of the main societal changes driving the current obesity pandemic. Objective: Our goals are to fill a gap in the literature and study whether users organically utilize a social media platform, Twitter, for providing motivation. We examine the topics of messages and social network structures on Twitter. We discuss social media's potential for providing peer support and then draw insights to inform the development of interventions for long-term health-related behavior change. Methods: We examined motivational messages related to physical activity on Twitter. First, we collected tweets related to physical activity. Second, we analyzed them using (1) a lexicon-based approach to extract and characterize motivation-related tweets, (2) a thematic analysis to examine common themes in retweets, and (3) topic models to understand prevalent factors concerning motivation and physical activity on Twitter. Third, we created 2 social networks to investigate organically arising peer-support network structures for sustaining physical activity and to form a deeper understanding of the feasibility of these networks in a real-world context. Results: We collected over 1.5 million physical activity-related tweets posted from August 30 to November 6, 2018. A relatively small percentage of the tweets mentioned the term motivation; many of these were made on Mondays or during morning or late morning hours. The analysis of retweets showed that the following three themes were commonly conveyed on the platform: (1) using a number of different types of motivation (self, process, consolation, mental, or quotes), (2) promoting individuals or groups, and (3) sharing or requesting information. Topic models revealed that many of these users were weightlifters or people trying to lose weight. 
Twitter users also naturally forged relations, even though 98.12% (2824/2878) of these users were in different physical locations. Conclusions: This study fills a knowledge gap on how individuals organically use social media to encourage and sustain physical activity. Elements related to peer support are found in the organic use of social media. Our findings suggest that geographical location is less important for providing peer support as long as the support provides motivation, despite users having few factors in common (eg, the weather) affecting their physical activity. This presents a unique opportunity to identify successful motivation-providing peer support groups in a large user base. However, further research on the effects in a real-world context, as well as additional design and usability features for improving user engagement, are warranted to develop a successful intervention counteracting the current obesity pandemic. This is especially important for young adults, the main user group for social media, as they develop lasting health-related behaviors.
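The lexicon-based extraction step described above amounts to matching tweet text against a list of motivation-related terms. A minimal sketch, with an illustrative lexicon that is not the one used in the study:

```python
# Hypothetical motivation lexicon; the study's actual term list is not shown here.
MOTIVATION_LEXICON = {"motivation", "motivated", "inspire", "goal", "keep going"}

def is_motivational(tweet_text):
    """True if the tweet contains any lexicon term (case-insensitive substring match)."""
    text = tweet_text.lower()
    return any(term in text for term in MOTIVATION_LEXICON)

tweets = [
    "Monday morning run done, motivation at 100%!",
    "Just checked the weather forecast.",
]
print([t for t in tweets if is_motivational(t)])  # keeps only the first tweet
```

Substring matching is crude (it would also match "demotivated"); stemming or token-level matching would be the usual refinement before thematic analysis or topic modeling.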
As humans, we rely heavily on our vision to interact with the world. Therefore, it is not surprising that individuals who are profoundly blind must make remarkable adjustments in order to pursue education, secure employment, and remain socially integrated. According to the World Health Organization (WHO), approximately 285 million people are visually impaired worldwide; 39 million of whom are considered profoundly blind (WHO 2012) see also (Frick and Foster 2003). While significant progress has been made in eye care delivery and treatment, in developed countries (such as the United States), there are approximately 10 million visually impaired and 1.3 million legally blind individuals; a significant proportion of which are children (estimated at 55,200) (AFB 2013a, b).
This article explores the forms of information literacy that arise in commercial entertainment games like World of Warcraft. Using examples culled from eight months of online ethnographic data, the authors detail the forms of information literacy that arise as a regular part of in-game social interaction, emphasizing (ironically) the intellectual nature of such purportedly ‘barren’ forms of play and highlighting the ways in which such practices help redefine the current model of what constitutes information literacy by bringing the collective and collaborative nature of such practices to the fore. Implications for future research are also discussed.
The purpose of this paper is to review, summarize, and illustrate research work involving four audio-based games created within a user-centered design methodology through successive usability tasks and evaluations. These games were designed by considering the mental model of blind children and their styles of interaction to perceive and process data and information. The goal of these games was to enhance the cognitive development of spatial structures, memory, haptic perception, mathematical skills, navigation and orientation, and problem solving of blind children. Findings indicate significant improvements in learning and cognition from using audio-based tools specially tailored for the blind. That is, technologies for blind children, carefully tailored through user-centered design approaches, can make a significant contribution to cognitive development of these children. This paper contributes new insight into the design and implementation of audio-based virtual environments to facilitate learning and cognition in blind children.
This study presents the combined efforts of three research groups toward the investigation of a cognitive issue through the development and implementation of a general purpose VR environment that incorporates a high quality virtual 3D audio interface. The psychological aspects of the study concern mechanisms involved in spatial cognition, in particular determining how a verbal description of an environment or the active exploration of that environment affects the building of a mental spatial representation. Another point is to investigate the role of vision by observing whether or not participants without vision (blind from birth, late blind, or blindfolded sighted individuals) can benefit from these two learning modalities. This paper presents the preliminary results of this study. Also included is a description of the generic toolkit and companion architecture that has been developed and used for modeling the environment and interface in a cohesive manner. Details for generating an immersive multimodal experimental environment for this platform are also included.
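Virtual 3D audio interfaces like the one described typically rely on interaural cues to place a source in space. A minimal sketch of Woodworth's classic approximation of the interaural time difference (the head radius is an assumed population average, not a value from the study):

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius in metres
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation of ITD (seconds) for a distant source
    at the given azimuth, 0 deg = straight ahead, 90 deg = to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side yields the maximum ITD, roughly 0.66 ms
print(round(interaural_time_difference(90.0) * 1000, 2))
```

Applying this delay (together with a level difference) between the left and right channels is the simplest way to pan a virtual sound source; full 3D audio engines use measured head-related transfer functions instead.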
The literature indicates that there is a clear need for science-oriented software for blind children. In this research we present interactive audio-based multimedia software for children with visual disabilities that can be used as a supporting tool for learning science. We studied how to learn science using audio as the principal medium and how to develop challenging and encouraging software that can at the same time assist blind children in learning science. We designed a customized navigation model for end users and built a generic software model for role-playing games. We evaluated the usability of the software and its cognitive impact, verifying that the software encourages free and independent use at the users' own pace. The application was considered appealing, challenging, and encouraging for learning science by end users.
Conference Paper
Interactive software is currently used for learning and entertainment purposes. This type of software is not very common among blind children because most computer games and electronic toys do not have appropriate interfaces to be accessible without visual cues. This study introduces the idea of interactive hyperstories carried out in a 3D acoustic virtual world for blind children. We have conceptualized a model to design hyperstories. Through AudioDoom we have an application that enables testing cognitive tasks with blind children. The main research question underlying this work explores how audio-based entertainment and spatial sound navigable experiences can create cognitive spatial structures in the minds of blind children. AudioDoom presents first person experiences through exploration of interactive virtual worlds by using only 3D aural representations of the space.
Computer and video games are a maturing medium and industry and have caught the attention of scholars across a variety of disciplines. By and large, computer and video games have been ignored by educators. When educators have discussed games, they have focused on the social consequences of game play, ignoring important educational potentials of gaming. This paper examines the history of games in educational research, and argues that the cognitive potential of games have been largely ignored by educators. Contemporary developments in gaming, particularly interactive stories, digital authoring tools, and collaborative worlds, suggest powerful new opportunities for educational media.
Conference Paper
The implementations of user assistance in most commercial computing applications deployed today have a number of well known difficulties and limitations. Speech technology can be used to complement traditional user assistance techniques and mitigate some of these problems. This paper describes the implementation of an auditory assistance system, and discusses some of the issues encountered in the design and development relevant to such systems.
Conference Paper
We present here a heuristic evaluation of the use of videogames among inexperienced players through the notion of the acteme. First, we analyse the evolution of new technologies, videogames, and the behaviour of people from childhood to school age, drawing a parallel between adults and children. Second, we consider mass media theorists' different points of view regarding the introduction of videogames into homes and educational institutions. Third, we study the role of the main types of interactive games in each formative process of the individual. Finally, we present a guideline covering those components of videogames that encourage self-learning and increase the attention and motivation of inexperienced users.
Conference Paper
We are developing a haptic-sensing system to help blind people understand 3D shapes. As a first attempt we have implemented a pathway simulator, which simulates the guiding of a pathway through haptic recognition. If we can indicate a pathway by haptic means, i.e., if we simulate the feeling, sensed in the palm, of sliding a long cane along the pathway, we believe it might give the user an on-site feeling of the pathway. The purpose of this haptic pathway simulator is to help the user build a mental map of the pathway, so the simulator also provides guiding information about the surroundings verbally.
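Haptic guidance along a pathway, as described above, is often rendered as a spring-like force pulling the haptic cursor toward the path. A minimal sketch, assuming the pathway is sampled as a list of 2D points (the paper does not specify its force model):

```python
import math

def guidance_force(position, path_points, gain=1.0):
    """Spring-like force pulling a haptic cursor toward the nearest
    sampled point on the pathway (crude point sampling, no interpolation)."""
    nearest = min(path_points, key=lambda p: math.dist(position, p))
    return (gain * (nearest[0] - position[0]),
            gain * (nearest[1] - position[1]))

# A straight pathway along the x-axis; the cursor has drifted 0.5 up
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(guidance_force((1.0, 0.5), path))  # pulls the cursor back toward (1, 0)
```

A real device would project onto the path segments rather than snap to sample points, and cap the force magnitude for safety, but the corrective-spring idea is the same.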