Multi-sensory virtual environment for supporting blind persons’
acquisition of spatial cognitive mapping, orientation, and mobility skills
O Lahav and D Mioduser
School of Education, Tel Aviv University, Ramat Aviv, Tel Aviv, ISRAEL
lahavo@post.tau.ac.il
ABSTRACT
Mental mapping of spaces, and of the possible paths for navigating through these spaces, is
essential for the development of efficient orientation and mobility skills. The work reported
here is based on the assumption that the supply of appropriate spatial information through
compensatory channels (conceptual and perceptual) may contribute to blind people's
spatial performance. We developed a multi-sensory virtual environment simulating real-life
spaces. This virtual environment comprises a developer/teacher mode and a learning mode.
1. RATIONALE
The ability to navigate space independently, safely and efficiently is a combined product of motor, sensory
and cognitive skills. This ability has a direct influence on the individual's quality of life.
Mental mapping of spaces, and of the possible paths for navigating through these spaces, is essential for
the development of efficient orientation and mobility skills. Most of the information required for this mental
mapping is visual information (Lynch, 1960). Blind people lack this crucial information, thus facing great
difficulties (a) in generating efficient mental maps of spaces, and therefore (b) in navigating efficiently
within these spaces. A result of this deficit in navigational capability is that many blind people become
passive, depending on others for continuous aid (Foulke, 1971). More than 30% of blind people do not
travel independently outdoors (Clark-Carter, Heyes & Howarth, 1986).
The work reported here is based on the assumption that the supply of appropriate spatial information
through compensatory sensorial channels, as an alternative to the (impaired) visual channel, may contribute
to the mental mapping of spaces and consequently, to blind people’s spatial performance.
Research on blind people’s mobility in known and unknown spaces (Dodds, Armstrong & Shingledecker,
1981; Golledge, Klatzky & Loomis, 1996; Ungar, Blades & Spencer, 1996) indicates that support for the
acquisition of spatial mapping and orientation skills should be supplied at two main levels: the perceptual
and the conceptual.
At the perceptual level, the deficiency in the visual channel should be compensated with information
perceived via other senses. Touch and hearing become powerful information suppliers about known as well
as unknown environments. In addition, haptic information appears to be essential for appropriate spatial
performance. Haptics is defined in Webster's dictionary (1993) as "of, or relating to the sense of touch".
Fritz, Way & Barner (1996) draw the distinction more fully: "tactile refers to the sense of touch, while the
broader haptics encompasses touch as well as kinaesthetic information, or a sense of position, motion and force." Haptic
information is commonly supplied by the cane for low-resolution scanning of the immediate surroundings, by
palms and fingers for fine recognition of objects’ form, textures, and location, and by the legs regarding
surface information. The auditory channel supplies complementary information about events, the presence of
other people (or machines or animals) in the environment, the materials objects are made of, or estimates
of distances within a space (Hill, Rieser, Hill, Halpin & Halpin, 1993).
At the conceptual level, the focus is on appropriate strategies for an efficient mapping of the space and
the generation of navigation paths. Research indicates two main scanning strategies used by people: route
and map strategies. Route strategies are based on linear (therefore sequential) recognition of spatial features.
Map strategies, considered to be more efficient than the former, are holistic in nature, comprising multiple
perspectives of the target space (Fletcher, 1980; Kitchin & Jacobson, 1997). Research shows that blind
people use mainly route strategies while recognizing and navigating new spaces (Fletcher, 1980).
2. THE PROPOSED STUDY
Advanced computer technology offers new possibilities for supporting visually impaired people’s acquisition
of orientation and mobility skills, by compensating for the deficiencies of the impaired channel.
Research on the implementation of haptic technologies within virtual navigation environments reports on
its potential for initial training as well as for support and rehabilitation training with sighted people (Giess,
Evers & Meinzer, 1998; Gorman, Lieser, Murray, Haluck & Krummel, 1998) and with blind people
(Jansson, Fanger, Konig & Billberger, 1998; Colwell, Petrie & Kornbrot, 1998).
In light of these promising results, the main goals of this study are:
(a) The development of a multi-sensory virtual environment enabling blind people to learn about
different (real life) spaces which they are required to navigate (e.g., school, work place, public
buildings).
(b) The systematic study of blind people’s acquisition of spatial navigation skills by means of the virtual
environment.
In the following sections we present a brief description of the learning environment, as well as
preliminary results of its pre-pilot evaluation.
3. THE ENVIRONMENT
For the research project reported here, we developed a multi-sensory virtual environment simulating real-life
spaces. This virtual environment comprises two modes of operation:
(a) Developer / Teacher mode.
(b) Learning mode.
3.1 Developer / Teacher mode
The core component of the developer mode is the virtual environment editor. This module includes three
tools: (a) 3D environment builder; (b) Force feedback output editor; (c) Audio feedback editor.
3.1.1 3D environment builder. Using the 3D-environment editor, the developer can define the
environment's characteristics:
- The size and the form of the room.
- The ground texture.
- The objects in the environment (doors, windows, walls, rectangles, cylinders, etc.).
3.1.2 Force feedback output editor. With this editor the developer can attach force-feedback effects
(FFEs) to all objects in the environment. Examples of FFEs are vibrations produced by ground textures (e.g.,
stones, parquet, grass), force fields surrounding objects, and friction sensations.
3.1.3 Audio feedback editor. This editor allows the attachment of appropriate audio feedback to the objects,
for example: "facing a window", "turn right", etc.
Figure 1 shows the environment-building editor screen. The interface allows the developer to determine
the different features of the target space, e.g., size, objects, FFEs and audio effects attached to the objects,
and ground texture.

Figure 1. 3D environment builder
Using the developer mode, the environment developer can build new navigation environments
according to the needs of the users and at progressive levels of complexity.
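To make the editor's role concrete, the following sketch (in Python, purely illustrative; the paper does not document the system's actual data model, and all class and field names here are hypothetical) shows how an environment definition of this kind might be organized:

from dataclasses import dataclass, field

@dataclass
class EnvObject:
    shape: str                # e.g. "door", "window", "wall", "box", "cylinder"
    position: tuple           # (x, y) location of the object's centre in the room
    size: tuple               # (width, depth) footprint
    ff_effect: str = "none"   # attached force-feedback effect, e.g. "force_field"
    audio_cue: str = ""       # attached audio feedback, e.g. "facing a window"

@dataclass
class Room:
    width: float
    depth: float
    ground_texture: str = "parquet"        # drives the texture vibration pattern
    objects: list = field(default_factory=list)

# A developer/teacher could assemble a simple training room like this:
room = Room(width=6.0, depth=4.0, ground_texture="stones")
room.objects.append(EnvObject("door", position=(0.0, 2.0), size=(1.0, 0.1),
                              ff_effect="force_field",
                              audio_cue="facing the door"))

A developer or teacher could then build rooms of increasing complexity by adding objects with their attached FFEs and audio cues.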
3.2 Learning mode
The learning mode includes two interfaces: User interface and Teacher interface.
3.2.1 The user interface. The user interface consists of a 3D virtual environment which simulates real rooms
and objects. The user navigates this environment using the Microsoft Force Feedback Joystick (FFJ).
During this navigation varied interactions occur between the user and the environment's components. As a
result of these interactions the user receives haptic feedback through the FFJ. This feedback includes
sensations such as friction, force fields, and vibrations.
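The interaction cycle just described can be sketched as follows (illustrative Python only; the joystick methods read_axes and play_effect are hypothetical stand-ins for the actual FFJ driver interface, which the paper does not detail):

def collides(pos, obj):
    # Axis-aligned overlap test between the user's position and an object's footprint.
    x, y = pos
    ox, oy = obj.position
    w, d = obj.size
    return (ox - w / 2 <= x <= ox + w / 2) and (oy - d / 2 <= y <= oy + d / 2)

def interaction_step(joystick, user_pos, room):
    # One cycle of the navigation loop: read the requested displacement,
    # test the new position against every object, and render haptic feedback
    # when a collision occurs.
    dx, dy = joystick.read_axes()                 # hypothetical driver call
    new_pos = (user_pos[0] + dx, user_pos[1] + dy)
    for obj in room.objects:
        if collides(new_pos, obj):
            joystick.play_effect(obj.ff_effect)   # e.g. vibration, friction
            return user_pos                       # movement blocked by the object
    return new_pos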
Through the FFJ the user can get information at two levels (sketched in code below):
- Foot level – this mode provides information equivalent to what the user would get through his feet as
he walks in the real space.
- Hand level – this mode provides information parallel to what the user would get through his hand in
the real space.
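As a rough illustration of these two levels (hypothetical Python, reusing the collides helper from the previous sketch; the mode names and the render call are invented, not documented system features):

def haptic_output(mode, room, user_pos, joystick):
    # "Foot level": the ground texture is rendered as a vibration pattern,
    # as the feet would report it while walking.
    if mode == "foot":
        joystick.render(texture=room.ground_texture)    # hypothetical call
    # "Hand level": contact with nearby objects is rendered, as the hand
    # would report it while exploring.
    elif mode == "hand":
        for obj in room.objects:
            if collides(user_pos, obj):
                joystick.render(contact=obj.ff_effect)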
In addition, the user receives auditory information generated by a "guiding computer agent",
contextualized for the particular simulated environment. This audio feedback aims to provide appropriate
references whenever the user gets lost in the virtual space. Figure 2 shows the user-interface screen.
Figure 2. The user interface
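The guiding agent's behaviour might be sketched as follows (illustrative Python; the timeout value, the speak callback, and the message format are assumptions, not features documented in the paper):

import math
import time

def guide(user_pos, room, last_contact_time, speak, timeout=20.0):
    # Stay quiet while the user has had recent contact with the environment.
    if time.time() - last_contact_time < timeout or not room.objects:
        return
    # Otherwise, point the user toward the nearest object as a reference.
    nearest = min(room.objects, key=lambda o: math.dist(user_pos, o.position))
    speak("nearest reference: " + (nearest.audio_cue or nearest.shape))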
3.2.2 The teacher interface. This interface comprises a series of features serving teachers during and after the
learning session. Several monitors on the screen present updated information on the user's navigation, e.g.,
position and objects reached. In addition, other functions allow the teacher to record the user's navigation path
and replay it afterwards to analyze and evaluate the user's performance (Figure 3).

Figure 3. The teacher interface
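A minimal sketch of such a record-and-replay facility (hypothetical Python; the paper does not describe the actual log format):

import json
import time

class NavigationRecorder:
    def __init__(self):
        self.path = []

    def record(self, user_pos, event=""):
        # Store a time-stamped sample of the user's position and any event
        # (e.g. "bumped into the door").
        self.path.append({"t": time.time(), "pos": user_pos, "event": event})

    def save(self, filename):
        with open(filename, "w") as f:
            json.dump(self.path, f)

    def replay(self, show):
        # Feed the recorded path, sample by sample, to a display callback
        # so the teacher can analyze the session afterwards.
        for sample in self.path:
            show(sample["pos"], sample["event"])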
4. PRE-PILOT FORMATIVE EVALUATION OF THE FORCE FEEDBACK
VIRTUAL ENVIRONMENT
The pre-pilot formative evaluation stage aimed to analyze the user’s performance within the environment
regarding three main aspects:
(a) The user's response to the FFJ, and the types of FFEs that had a strong effect on the user.
(b) The user's ability to identify the environment's components. Two issues were addressed:
- The user's identification or recognition of the space and the objects.
- The user's difficulties in identifying the objects' shape and size.
(c) The user's navigation within the environment. Two issues were addressed:
- Environment characteristics that led the user to a high feeling of immersion.
- The user's movement in the environment.
5. METHOD
5.1 Subject
The subject, A., is forty-nine years old and congenitally blind. He has been a computer user for more than
eleven years, and uses a cane for outdoor mobility.
5.2 Procedure
The study consisted of two stages: a force-feedback (FF) evaluation stage, and a navigation-in-virtual-environment stage.
5.2.1 Force feedback evaluation stage. A series of probes was administered in which different FFEs were
tested by the subject. Data on the subject's responses were collected by direct observation of his performance
and by interview questions. As a result of this stage, a characterization of the potential value of the different
effects for builders of navigational environments was obtained. The FF evaluation stage lasted about half
an hour.
5.2.2 Navigation in virtual environment stage. At the beginning of the stage the subject received a short
explanation about the features of the environment and how to operate the FFJ. The series of tasks included:
(a) free navigation; (b) directed navigation; (c) tasks focussing on emerging difficulties; and (d) a task aimed
to probe auditory support (human feedback in this preliminary version), referring to direction, turns, and
proximity to objects. As a result of this stage, a characterisation of appropriate and required features of the
environment and the navigation tools was generated. This stage lasted about forty-five minutes. At the end of
this session an open interview was conducted.
5.2.3 Data collection. Three data-collection instruments were used in this study. The first was a log
mechanism built into the computer system, which stored the subject's movements within the environment. In
addition, the whole session was video recorded. The third instrument was an open interview.
6. RESULTS
6.1 Force feedback joystick features
A. learned to work freely with the force feedback joystick within a short period of time. During the first
session A. recommended defining a magnetic force field around the objects and in front of the walls. With
such a magnetic force field the user feels an attraction or repulsion whenever he approaches an object or an
obstacle. The force-feedback effects that were effective for the user were high-resistance force, bump
vibrations, and high friction.
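A simple way to compute such a magnetic field is sketched below (illustrative Python; the radius and gain values are invented, and the actual FFJ effect parameters are not described in the paper):

import math

def magnetic_force(user_pos, obj_pos, radius=0.8, gain=1.0, attract=True):
    # Inside `radius` around the object, apply a force along the line to the
    # object: attraction pulls the joystick toward it, repulsion pushes away.
    dx = obj_pos[0] - user_pos[0]
    dy = obj_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)                  # outside the field: no force
    strength = gain * (1 - dist / radius)  # stronger as the user gets closer
    sign = 1.0 if attract else -1.0
    return (sign * strength * dx / dist, sign * strength * dy / dist)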
6.2 Identification of environmental components
A. could identify when he bumped into an object or arrived at one of the room's corners. However, the
subject could not identify the objects themselves. Because of the size of the objects and the absence of a
magnetic force field, the subject became lost in the space.
6.3 Navigation
A. moved within the environment rapidly, and this rapid walking caused him to get lost in the haptic
space. Another reason he lost his way in the space was walking through the environment without
reference points.
Figure 4. Subject's navigation in the environment

Figure 4 shows the intricate paths in one navigation task. The paths reveal situations in which the user
got trapped in corners or lost referential landmarks in the space and, in contrast, his attempts to grasp an
object from all angles.
The pre-pilot probes resulted in several required improvements:
- Enlargement of the objects.
- Improvement of the resolution of correspondence between movement and force feedback.
- Introduction of friction effects for walking along the walls.
- Reduction of the allowed navigation velocity in the environment.
At the time of the conference, detailed results from the actual study as well as preliminary conclusions
will be presented.
Acknowledgements. The study presented here is partially supported by a grant from Microsoft Research Ltd.
7. REFERENCES
Clark-Carter, D.D., Heyes A.D. and Howarth C.I. (1986). The effect of non-visual preview upon the walking
speed of visually impaired people. Ergonomics, 29 (12), 1575-1581.
Colwell, C., Petrie, H. and Kornbrot, D. (1998). Haptic virtual reality for blind computer users. Proceedings
of the ASSETS '98 Conference.
Dodds, A.G., Armstrong, J.D. and Shingledecker, C.A. (1981). The Nottingham obstacle detector:
development and evaluation. Journal of Visual Impairment and Blindness, 75 (5), 203-209.
Fletcher, J.F. (1980). Spatial representation in blind children 1: development compared to sighted children.
Journal of Visual Impairment and Blindness, 74 (10), 318-385.
Foulke, E. (1971). The perceptual basis for mobility. Research Bulletin of the American Foundation for the
Blind, 23, 1-8.
Fritz, J. P., Way, T. P. and Barner, K. E. (1996). Haptic representation of scientific data for visually impaired or
blind persons. In Technology and Persons With Disabilities Conference.
Giess, C., Evers, H. and Meinzer, H.P. (1998). Haptic volume rendering in different scenarios of surgical
planning. Proceedings of the Third PHANToM Users Group Workshop, M.I.T.
Golledge, R. G., Klatzky , R. L., and Loomis, J. M. (1996). Cognitive Mapping and Wayfinding by Adults
Without Vision. In J. Portugali (Ed.), The Construction of Cognitive Maps (pp. 215-246). Netherlands:
Kluwer Academic Publishers.
Gorman, P.J., Lieser, J.D., Murray, W.B., Haluck, R.S. and Krummel, T.M. (1998). Assessment and
validation of a force feedback virtual reality based surgical simulator. Proceedings of the Third PHANToM
Users Group Workshop, M.I.T.
Hill, E.W., Rieser, J.J., Hill, M.M., Hill, M., Halpin, J. and Halpin R. (1993). How persons with visual
impairments explore novel spaces: Strategies of good and poor performers. Journal of Visual Impairment
and Blindness, October, 295-301.
Jansson, G., Fanger, J., Konig, H. and Billberger, K. (1998). Visually impaired persons’ use of the
PHANToM for information about texture and 3D form of virtual objects. Proceedings of the Third
PHANToM Users Group Workshop, M.I.T.
Kitchin, R.M. and Jacobson, R.D. (1997). Techniques to Collect and Analyze the Cognitive Map Knowledge
of Persons with Visual Impairment or Blindness: Issues of Validity. Journal of Visual Impairment and
Blindness, 91 (4).
Lynch, K. (1960). The image of the city. Cambridge, Ma., MIT Press.
Merriam-Webster (1993). Webster's Third New International Dictionary of the English Language.
Encyclopaedia Britannica, Inc., U.S.A.
Ungar, S., Blades, M. and Spencer, S. (1996). The construction of cognitive maps by children with visual
impairments. In J. Portugali (Ed.), The Construction of Cognitive Maps (pp. 247-273). Netherlands:
Kluwer Academic Publishers.