Designing Guides for Blind People
Hugo Nicolau, Tiago Guerreiro, Joaquim Jorge
Instituto Superior Técnico, Departamento de Engenharia Informática, Lisboa, Portugal
Phone: +351 214233565, Fax: +351 213145843, e-mail: {hman, tjvg, jaj}@vimmi.inesc-id.pt
Abstract: Most blind users frequently need help when visiting unknown places. While the white cane or guide dog can aid their mobility, the major difficulties arise in orientation, mainly due to the lack of reference points and the inaccessibility of visual cues.
Despite extensive research in orientation interfaces for
the blind, their guiding instructions are not aligned with
the users’ needs and language, resulting in solutions
that provide inadequate feedback. We aim to overcome this issue by enabling users to walk through unknown places while receiving familiar and natural feedback. Our
contributions are in understanding, through user studies,
how blind users explore an unknown place, their
difficulties, capabilities, needs and behaviors. We also
analyzed how these users create their own mental maps,
verbalize a route and communicate with each other. By
structuring and generalizing this information, we were
able to create a prototype that generates familiar
instructions, behaving like a blind companion, one with
similar capabilities that understands their “friend” and
speaks the same language. Finally, we evaluated the
system with the target population, validating our
approach and guidelines. Results show a high degree of
overall user satisfaction and provide encouraging cues to
further the present line of work.
I. INTRODUCTION
There are approximately 163,000 people with some degree of visual impairment in Portugal, more than 1.6% of the total population, of whom about 17,500 are blind. Moreover, there are an estimated 45 million
blind people and 135 million visually impaired people
worldwide [17].
The autonomy of human beings is essential to their
welfare and the ability to move to other places is part
of their daily lives. However, for the majority of blind people, walking around unfamiliar places without help is a very difficult, sometimes impossible, task.
Indeed, orientation and mobility are essential skills for
a blind person, but frequently confused. While mobility depends on skillfully coordinating actions to avoid obstacles in the immediate path, spatial orientation requires coordinating one's actions relative to the further-ranging surroundings: it regards the ability to establish and maintain awareness of one's position in space relative to both landmarks in the surrounding environment and the intended destination.

The authors would like to thank the Raquel and Martin Sain Foundation, all the users that participated in our studies, and Loquendo. Hugo Nicolau and Tiago Guerreiro were supported by the Portuguese Foundation for Science and Technology, grants SFRH/BD/46748/2008 and SFRH/BD/28110/2006, respectively.
The adoption of a white cane or guide dog is the
main aid to mobility for the target group. However,
major difficulties arise in the orientation task, particularly in places unknown to the user. The main causes of these problems are the lack of reference points and blind people's inability to access visual cues.
Well-established orientation and mobility techniques
using a cane or guide dog, while effective for
following paths and avoiding obstacles, become less
helpful when finding specific locations or objects.
Even though tactile maps and Braille signs provide possible solutions, both are insufficient and sometimes inadequate for the users' needs.
Meanwhile, there have been efforts to use
technology as a means to help visually impaired
people with their spatial orientation tasks. Among these, Sendero GPS and Trekker are two commercially available systems for outdoor environments, as shown in Figure 1. However, owing to their use of specialized hardware, their cost and size pose serious obstacles to market penetration.
Fig. 1. Orientation devices for blind people: (a) Sendero GPS; (b) Trekker.
On the other hand, mobile phones are part of our
daily lives and play an important role in modern
society. Due to the constant evolution of mobile
devices, particularly in terms of communication and
processing capabilities, their range of applications
extends well beyond basic communication, ranging
from productivity to leisure. Nowadays, most mobile
devices already incorporate a wide set of features,
such as mp3 players, games, digital cameras, GPS or
Internet access, providing a great potential for future
mobile applications. Indeed, their availability, low
cost, miniaturization and communication capabilities
(e.g. WiFi or Bluetooth) make them suitable platforms for developing mobile guides.
Recent advances in location-based guides for the blind have proven too generic to be useful to the target
group. In contrast, by focusing the design on the users,
our approach aims at providing blind users with
familiar feedback, so they can easily follow and
understand all the instructions given and reach their
destination successfully. To achieve this goal, we
performed studies with a group of blind users and a
mobility instructor, gathering a set of guidelines and strategies aligned with the group's common language, habits and requirements.
Additionally, we developed a prototype that behaves like a blind companion, speaking the users' language and guiding them to their desired destinations. Finally,
in order to fully evaluate and validate our approach we
performed user tests in an indoor building with blind
people. Next we present the most relevant works in
this research area together with a brief discussion on
their advantages and limitations.
II. RELATED WORK
For the past two decades much effort has been
applied to developing new orientation systems for the
blind and visually impaired. Most of these systems focus on adopting technology to estimate the user's position with higher precision [3, 7, 13, 14, 16]. On the other hand, the interaction techniques and, most importantly, the feedback offered to users are often neglected.
Tri-dimensional (3D) sounds are frequently used to
provide audio cues that seem to come from the same
place as the object or waypoint to which they refer.
These approaches require the user to wear headphones so that spatialized sound can be presented.
The 3D feedback can be presented in either speech or
non-speech audio (i.e. earcons or auditory icons) [11].
One such approach is SWAN (System for Wearable
Audio Navigation) [9], a navigation and orientation
aid for the visually impaired, which represents
pertinent data through sonification. However, while
using non-speech audio [2, 9, 13] may provide
immediately recognizable sounds, it may also lead to a
busy and uncomfortable listening experience.
Loomis et al. [5] used 3D speech synthesis to indicate to users either their distance to the next waypoint or an object name, so that they can follow the
sound until they arrive at their destination. The main
drawback in using spoken feedback is the high
cognitive load that it entails. Indeed, in such systems each spoken message usually lasts more than one second, so the system is talking much of the time. However, this offers the designer higher expressiveness than non-speech interfaces. Furthermore, speech-based orientation systems can use non-spatialized (monaural) synthesis, e.g. spoken dialogue, to guide blind users, freeing them from the
need to wear headphones. Indeed, some users may
refuse to wear earbuds [2, 5] because they block out
important ambient sounds.
As an alternative to audio interfaces, the vibration
modality can also be used to guide a blind user. Van
Erp et al. [8] used a belt with eight tactors around the user's waist to indicate the direction of the next waypoint. Marston et al. [6] were able to perform the
same task with a single tactor, which indicates
whether the user is on or off track. While this
modality is well suited to noisy environments and may
be sufficient for route guidance tasks, it lacks the
expressiveness of audio interfaces, which can be used
to describe places, objects or features.
Speech is the most natural way to communicate.
However, current orientation systems that provide
spoken feedback to users are not able to fully address
their needs and capabilities. The given instructions are
often difficult to follow or even to understand [10, 12,
15].
In order to find the most adequate interface to guide
the visually impaired, Ross and Blasch [1] evaluated
three different interfaces: stereophonic (non-speech),
speech output and a shoulder-tapping system. The
authors concluded that a multimodal interface
featuring tapping and speech was the most adequate.
While the shoulder-tapping system was found the most usable, the speech interface consisted of announcing the relative position of the destination once every two seconds (e.g. “one o’clock … one o’clock …”). Once
again, the speech feedback may be difficult to
understand, as it uses a technical and unnatural
language for blind users. Additionally, constant audio
feedback may prove too intrusive and annoying.
Our main goal and this paper’s contribution is to
provide blind users with both natural and familiar
feedback, by understanding the whats, whens and
hows of a guidance system for the blind. We achieved
this by studying the users, their needs and habits, and by drawing on their own orientation abilities to improve both the system's familiarity and its adequacy to the task at hand.
III. USER CENTERED DESIGN
The main contribution of this paper lies in the results obtained from studies with the target population. User-centered design is a philosophy that puts the person at the center of all stages of the design process, gathering as much information as possible from users and their surroundings in order to guarantee quality.
Our approach tries to provide familiar feedback by
studying the users' capabilities and needs. Indeed, by following a user-centered design, we were able to identify common capabilities and behaviors among the target population when exploring an unknown place.
On the other hand, we have also studied the way these
users described and guided other blind users, so we
could build a virtual guide that offers them an easily
understandable set of instructions.
A. Interviews and Questionnaires
In a first stage we interviewed eight blind users, giving us a first contact with the target population. These interviews were semi-structured and composed mainly of open questions, leading to an informal and friendly dialogue. After analyzing the interviews, we administered questionnaires to eighteen users in order to obtain their profile, current limitations, needs, degree of independence and technological knowledge.
According to the questionnaire results, most respondents were over forty-five years old and had a low educational background (below the 12th grade). Indeed, the largest group of users (39%) had an educational level at or below the 4th grade. All users were legally blind, i.e.
they needed screen readers to access visual
information on their devices. All had at least one cell phone and used it daily to place and receive calls, even those without any screen-reading software.
These results show that the user group has some,
albeit very limited, experience with mobile devices.
Finally, according to the questionnaires, although users walk alone most of the time, 35% always required help to find their intended destination when visiting a public building. This implies that orientation
guides are indeed needed, in this particular case, for
indoor environments. However, the remaining
question is: How to guide a blind user?
B. Exploration Experiment
In order to efficiently guide the users, an orientation
system has to offer them both adequate and easily
understandable feedback. Therefore, we needed to
analyze their behaviors, difficulties, capabilities,
limitations and techniques when exploring an
unfamiliar place, so we could build an interface that
addresses these requirements. Moreover, we also
wanted to identify the most important information for
users in an indoor environment and analyze how their
mental maps evolve when exploring it. Finally, in
order to provide familiar feedback, we have studied
how blind users verbalize and communicate routes to
blind colleagues.
The chosen place was a two-floor basement (Figure 2) in a training center for blind
users. The first and second floors had approximately 132 m² (11 m × 12 m) and 80 m² (10 m × 8 m), respectively.
Because this basement was mainly used for storage
purposes, the place was not adapted to blind users, i.e.
there were diverse obstacles (e.g. broken chairs, tables
or old machinery) around the place and some paths
were very difficult to navigate.
Fig. 2. Map and chosen route: (a) second floor; (b) first floor. The blue dots represent the reference points chosen by the users.
Three participants volunteered for the exploratory experiment. Their main goal was to
explore and acquire a thorough knowledge of this
unfamiliar place, in order to guide a blind colleague
through a predefined route.
The route is depicted by footsteps in Figure 2. The starting point was on the second floor, near the room's entrance. The users had to follow the corridor for four meters and then turn right, without entering the room right before the curve. Then they had to go straight ahead for eight meters until they reached the stairs. Once on the first floor, the users had to find a table with a phone, approximately five meters away from the stairs, and then turn left. After that, they had to go straight ahead for eight meters, avoiding all obstacles, until they reached a door. Finally, to complete the route, the users had to climb the stairs and find the elevator door.
To ensure that all users had similar knowledge of the place by the end of the experiment, we conducted a set of exploratory tasks (Figure 3) over three days, with each user going through three different sessions lasting 45 minutes each.
Fig. 3. Users in the exploratory task.
Additionally, the experiment, together with the users' feedback, allowed us to draw a set of conclusions about their behaviors and needs when exploring an unknown place:
Reference points vs. obstacles: All infrastructure elements or other artifacts that cannot be easily moved and allow the users to infer their position are reference points. The remaining artifacts are obstacles.
Trial-and-error approach: In order to explore unfamiliar places, blind users adopt a trial-and-error approach, using common mobility techniques. They
walk slowly through the place, using their white canes
to avoid obstacles, while building a mental map.
Through a correct use of mobility techniques, users
can easily identify doors, tables, stairs, chairs, walls or
pillars by their acoustic and tactile properties.
Obstacles impede the correct perception of reality:
The existing obstacles in a room can mislead the user
about the place’s size or layout. Indeed, they can also
impede the correct identification of possible reference
points (e.g. a pillar surrounded by chairs).
Unfortunately, because obstacles can be easily moved, this is a common problem for blind users.
Users do not like to explore new routes: Despite their ability to memorize a route, users confine themselves to a well-known path and do not like to explore new ones, unless they are "forced" to.
Incomplete mental map: As a consequence of the
previous point, it is likely that users have an
incomplete mental map of the place. This may lead to more frequent disorientation when they stray from the path.
When users are lost they turn back: If for some reason users get lost, their first reaction is to turn back in order to identify a well-known reference point and then adapt their mental map to that new condition.
Less is better: At the end of each session the users were invited to describe the predefined route, as if they were guiding a blind colleague. In the gathered stories, we noticed a strong concern with building a very simple and precise set of instructions, with few details and no long descriptions of the place or route.
C. Group Meeting
After the observation phase, we conducted a group
meeting with the three participants and a former
instructor of orientation and mobility techniques for
blind users, in order to consolidate and discuss all the
obtained results.
As all the stories gathered in the exploration
experiment were very similar, we decided to build one
(the “best”) story for the predefined route (Figure 2),
which entailed some lively discussions. As mentioned
before, there was a strong concern with building more precise and simpler stories. Moreover, all participants stated that the instructions should carry as little detail as possible, so as not to distract users.
Therefore, obstacles and other artifacts that are not
crucial to the orientation task should not be mentioned
(that is the purpose of the white cane). However, they
also agreed that a system should allow users to request
more detail about an instruction (e.g. number of steps
on a flight of stairs) or place (i.e. context) if needed.
When blind users enter an unknown place it is very
difficult for them to perceive its environmental flow
[1]. Indeed, they may be able to deduce some of its
context, through their hearing, smell or the ability to
detect heat sources. However, there is information that
is inaccessible and may be crucial to the orientation
task. In this meeting we were able to identify the main
elements that must be present to contextualize the user
in an unknown indoor environment: structure, the place's characteristics (e.g. dimensions, layout or number of floors); and interest points, all the possible destinations (e.g. shops, offices, building services or toilets).
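To fix ideas, these two context elements can be modeled as simple data structures. The sketch below is ours, and its field choices (e.g. per-floor dimensions) are illustrative assumptions, not the prototype's actual representation:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Structure:
    """Place characteristics: dimensions, layout, number of floors."""
    floors: int
    floor_dimensions: Tuple[float, float]  # meters (width, length); illustrative
    layout: str                            # e.g. "two corridors joined by stairs"

@dataclass
class InterestPoint:
    """A possible destination: shop, office, building service, toilet."""
    name: str
    floor: int

@dataclass
class PlaceContext:
    """The two context elements identified in the group meeting."""
    structure: Structure
    interest_points: List[InterestPoint]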
There was also some discussion about the
vocabulary used in the instructions, which may be
crucial to user performance. Neither an instruction nor
its vocabulary can be ambiguous. For instance, the instruction "go ahead until the end of the ramp and turn right" is clearly ambiguous. The reference point
that is given (i.e. the end of the ramp) may be hard to
identify, leading the user to unexpected places.
Another example is the instruction, “go around the
table by the right side and go ahead …" In this case, users do not know how far around the table they are supposed to go.
Finally, throughout the experiment and particularly
in the group meeting, it was clear that reference points
are crucial to guide blind users. A reference point is
some infrastructure element or other artifact that cannot be easily moved and is used to infer the user's position. Our experiment makes clear that using reference points to communicate a route is a
natural way to guide someone. However, these reference points have to be chosen very carefully, so that blind users are able to identify them (usually with their white cane) and follow the intended route. Moreover, this is the only way they have to build a mental map of a place. Additionally, when blind users get lost, their first reaction is to turn back and try to find a reference point, so they can locate themselves and continue on their path.
The blue dots in Figure 2 correspond to the reference
points that were chosen by the users in the exploration
task. They were easily identifiable structure elements,
such as doors, stairs and metal machines.
IV. PROVIDING FAMILIAR FEEDBACK
In an orientation system, the way users are guided and the feedback they receive are crucial to their performance. However, this aspect is often neglected or inadequately addressed. We aim to overcome this issue by providing natural and familiar feedback.
A. Elements and Rules
Through a study of how blind users verbalize a
route, we were able to identify the main elements and
rules for the automatic building of instructions:
Action: this element corresponds to the verb of the
instruction (e.g. turn, go, enter or leave).
Direction: some of the previous actions need a direction to make sense. For instance, the action turn needs to be complemented with the respective direction, left or right.
Side: sometimes we need to explicitly identify the side on which the user should walk, in order to identify a reference point or avoid dangers.
Time/Distance: usually, a reference point is associated with a decision point (i.e. a direction shift). These points are referenced through time or distance elements, for example, "turn right after you pass a door" or "follow three steps forward ..."
Object: this element can be any artifact existing in the place (e.g. doors, stairs, tables or walls).
To summarize, all the instructions can be defined through the following regular expression [4]:

Action Direction? (Time | Distance | Side)* Object?
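As an illustration of this grammar, the following Python sketch assembles instruction elements in the order above. The class and its English phrasing are our own choices; the actual prototype generated Portuguese sentences:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Instruction:
    """One instruction: Action Direction? (Time|Distance|Side)* Object?"""
    action: str                      # e.g. "turn", "go", "enter", "leave"
    direction: Optional[str] = None  # e.g. "left", "right", "ahead"
    qualifiers: List[str] = field(default_factory=list)  # time/distance/side phrases
    obj: Optional[str] = None        # e.g. "door", "stairs", "table"

    def render(self) -> str:
        # Assemble the sentence in grammar order, skipping absent elements.
        parts = [self.action]
        if self.direction:
            parts.append(self.direction)
        parts.extend(self.qualifiers)
        if self.obj:
            parts.append(self.obj)
        return " ".join(parts)

# Prints: "turn right after you pass a door"
print(Instruction("turn", "right", ["after you pass a"], "door").render())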
B. Algorithm
Our algorithm is an implementation of the rules
already defined in the previous section. These rules,
just like the algorithm, were defined for the
Portuguese language. Next we will present it in more
detail.
Once the user's position is established, each set of instructions is generated starting from the next localizable point along the route. This way, when users reach that point, they can receive a new set of instructions. Moreover, if there are any reference points or direction shifts between two localizable points, the instruction is subdivided. In this iterative fashion, all the instructions are built until the user reaches the destination.
If the next point in the route is a reference point,
then the instruction is very simple, e.g. Action
Direction Object, since the object is easily identifiable.
On the other hand, whenever there is a change of direction, the user has no reference point. Therefore, we build a reference from the nearest identifiable objects (e.g. doors), so the user can receive an instruction like "turn right when you pass a door". If no such reference can be built, we fall back on distances to guide users (e.g. "… and then turn right after three steps"). In either case, the side element may be necessary to help users find a reference point or identifiable element easily.
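Purely as an illustration, the sketch below encodes this selection logic, reusing the Instruction class from the previous sketch. The RoutePoint fields and all helper names are our own assumptions, not the prototype's internals:

from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutePoint:
    """Hypothetical route node; fields mirror the prose, not the prototype."""
    is_reference: bool
    object_name: Optional[str] = None     # set when the point is a reference
    turn_direction: Optional[str] = None  # set at direction shifts
    nearby_object: Optional[str] = None   # nearest identifiable object, if any
    distance_in_steps: int = 0

def build_instruction(point: RoutePoint) -> Instruction:
    if point.is_reference:
        # Reference point ahead: the object anchors a simple
        # Action Direction Object instruction, e.g. "go ahead to the stairs".
        return Instruction("go", "ahead", ["to the"], point.object_name)
    if point.nearby_object is not None:
        # Direction shift without a reference point: anchor on the nearest
        # identifiable object, e.g. "turn right when you pass a door".
        return Instruction("turn", point.turn_direction,
                           ["when you pass a"], point.nearby_object)
    # Last resort: a distance cue, e.g. "turn right after three steps".
    return Instruction("turn", point.turn_direction,
                       [f"after {point.distance_in_steps} steps"])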
If users get lost, stray off the route or need help, the
system will ask them to go back to the last reference
point. According to our studies, this was the natural
behavior of users when they felt disoriented. That is
one of the main reasons why correctly identifying
reference points is so important. If they cannot go back, the system tries to guide them starting from the nearest localizable position.
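A minimal sketch of this recovery behavior follows, assuming hypothetical user, route and guide objects whose methods simply mirror the prose above:

def handle_disorientation(user, route, guide):
    """Send the user back to the last reference point; if that is not
    possible, resume guidance from the nearest localizable position.
    All objects and method names here are illustrative assumptions."""
    last_ref = route.last_reference_point_before(user.position)
    if last_ref is not None and user.can_go_back():
        # Matches the observed natural behavior: turn back, find a known
        # reference point, and re-anchor the mental map there.
        guide.speak(f"Go back to the {last_ref.object_name}.")
        guide.resume_route_from(last_ref)
    else:
        guide.resume_route_from(route.nearest_localizable(user.position))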
V. EVALUATION
To validate the approach presented above, we built a
functional prototype and tested it with the target
population (Figure 4). The evaluation group was composed of six legally blind users, aged 21 to 55 years (three women and three men). Although all users were legally blind, one of them had residual vision, while the others were totally blind. Moreover, none of the users had previous experience with either our prototype or the type of spoken feedback it provides.
Fig. 4. Blind user testing the prototype.
The trials were performed in a controlled
environment, in a training facility for blind users. To
fully evaluate our approach, we chose both an
unfamiliar place and route (Figure 2.b). In this evaluation, we used a Bluetooth localization system in which each beacon, with a one-meter range, corresponded to a reference point. The evaluation was performed with an HTC TyTN mobile device running Windows Mobile 5.0 and a TTS system (Loquendo, Portuguese voice).
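The paper does not list the localization code; purely as a sketch (the actual prototype ran on Windows Mobile, and the beacon addresses below are placeholders), short-range beacon discovery can be mapped to reference points like this:

# Beacons have roughly a one-meter range, so discovering one implies the
# user is standing at its reference point. Addresses are placeholders.
BEACON_TO_REFERENCE = {
    "00:11:22:33:44:55": "stairs",
    "00:11:22:33:44:66": "table with phone",
}

def on_discovery_complete(found_addresses, guide):
    """Called after each Bluetooth inquiry cycle; the discovery latency is
    what produced the waiting times reported below."""
    for addr in found_addresses:
        ref = BEACON_TO_REFERENCE.get(addr)
        if ref is not None:
            guide.reached_reference_point(ref)  # triggers the next instruction
            return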
During the evaluation, one of the users needed
external (i.e. human) help to complete the task. In this
particular case, the user had some difficulty distinguishing left from right, hence the need for external help. All the remaining users were
able to successfully complete the task without needing
any type of help, i.e. they never felt lost.
The mean time needed to complete the route was
146 seconds with a standard deviation of 76 seconds
(Figure 5). Moreover, approximately 20% of this time
corresponded to the Bluetooth discovery process,
where the users had to wait near the last reference
point for the next instruction.
Fig. 5. Task completion times, in seconds.
Despite the long waiting times and the variation in completion times, all the users were able to understand all the instructions and to clearly identify the reference points at which to wait for the next instruction (which indicates that the instructions were adequate). Figure 6 represents the
average time that the users spent at each point of the
route. Analyzing the obtained results, we can observe
that all users followed a similar route and spent most
of their time near reference points, waiting for the next
instruction.
Fig. 6. Heat map: average time that the users spent at each point of the route.
To assess the users' opinion of the given instructions, we administered a questionnaire at the end of each evaluation session. The results showed an overall user satisfaction of 4.5 on a 5-point Likert scale. All users stated that the instructions were both easily understandable and accurate. They also showed high interest in our prototype and in using it in public buildings and outdoor environments.
Moreover, one of the users asked if he could repeat the task without using his white cane, which demonstrates the confidence he had acquired in our prototype.
The main complaint about our system was the long waiting time when users arrived at a reference point. They felt the system was too slow to react and could not follow their progress adequately. Most users
stated that the system should be able to offer a new
instruction immediately after they successfully
identified a new reference point.
VI. CONCLUSIONS
Current orientation systems for blind people need appropriate interfaces. Almost all existing research aims at locating users with higher precision. Although this is a crucial component, it does not by itself guarantee a successful approach.
While there is some research in orientation interfaces
for the blind, the feedback they provide to users is
mostly inadequate.
New approaches are required that guide users while aligning the interface with their capabilities and needs. Therefore, our contribution lies in analyzing
blind users’ behaviors while exploring unknown
places, so that we could build a more appropriate
interface and dialogue system. In order to guarantee familiar feedback, we have also studied how these users verbalize and communicate a route. Moreover,
we defined a set of elements and rules so we could
generate these instructions automatically.
Our results indicate that the generated instructions were easily understandable (without training) and that users were able to follow them along a predictable path.
To assess each user's opinions we performed a debriefing session. The results thus obtained were very positive, showing that users were satisfied with the system and, more importantly, happy with the audio feedback it provided.
VII. FUTURE WORK
In order to efficiently guide a blind user, a good
localization system is required. However, this
component of our prototype has to be modular enough
to easily support different technologies (e.g. RFID,
WLAN, Bluetooth …) and take advantage of each one
of them. Moreover, we will focus on extending our
system to new scenarios. All our studies and research
were carried out in a very limited number of indoor
buildings. Indeed, our approach needs to be fully
evaluated in different indoor/outdoor scenarios with different routes, obstacles and reference points.
One of the main issues with audio interfaces, particularly in orientation systems, arises when they interfere with the users' general hearing (and attention). We will explore multimodal solutions to this problem and study how different feedback modalities can be used in real-life scenarios, typically noisy environments, so that users do not need to carry the mobile device in one hand at all times or wear headphones.
REFERENCES
[1] D. Ross and B. Blasch, “Wearable Interfaces for
Orientation and Wayfinding”, in Proc. 4th Int.
ACM Conf. on Assistive Technologies, 2000, pp. 193-200.
[2] F. Tomita et al., “R&D of Versatile 3D Vision
System VVV”, in Proc. of SMC’98, 1998, pp.
4510-4516.
[3] J. Coughlan and R. Manduchi, “Functional
Assessment of a Camera Phone-Based
Wayfinding System Operated by Blind Users”, in
Conference IEEE-BAIS, Research on Assistive
Technologies Symposium, 2007.
[4] J. Friedl, “Mastering Regular Expressions”,
O’Reilly Media Inc., 2006.
[5] J. Loomis et al., “Personal Guidance System for
People with Visual Impairment: A Comparison of
Spatial Displays for Route Guidance”, in Journal
of Visual Impairment and Blindness, 2004, pp.
135-147.
[6] J. Marston et al., “Nonvisual Route Following
with Guidance from a Simple Haptic or Auditory
Display”, in Journal of Visual Impairment and
Blindness, 2007, pp. 203-211.
[7] J. Rajamaki et al., “LaureaPOP Indoor Navigation
Service for the Visually Impaired in a WLAN
Environment”, in Proc. 6th WSEAS Int. Conf. on
Electronics Hardware, 2007, pp. 96-101.
[8] J. Van Erp et al. “Waypoint Navigation with
Vibrotactile Waist Belt”, in ACM Transactions on
Applied Perception, 2005, pp. 106-117.
[9] J. Wilson et al., “SWAN: System for Wearable
Audio Navigation”, in 11th IEEE Int. Symposium
on Wearable Computers, 2007, pp. 1-8.
[10] L. Ran et al., “Drishti: An Integrated
Indoor/Outdoor Blind Navigation System and
Service”, in Proc. 2nd IEEE Conf. on Pervasive
Computing and Communications, 2004, pp. 23-
30.
[11] M. Blattner et al., “Earcons and Icons: Their
Structure and Common Design Principles”,
Human Computer Interaction, 4, 1989, pp. 4-11.
[12] M. Turunen et al., “Mobile Speech-based and
Multimodal Public Transport Information
Service”, in Proc. of MobileHCI Workshop on
Speech in Mobile and Pervasive Environments,
2006.
[13] M. Turunen et al., “Design of a Rich Multimodal
Interface for Mobile Spoken Route Guidance”, in
Proc. Interspeech, 2007, pp. 2193-2196.
[14] S. Bohonos et al., “Universal Real-Time
Navigational Assistance (URNA): An Urban
Bluetooth Beacon for the Blind”, in Proc. 1st Int.
Workshop on Systems and Network Support for
Healthcare and Assistive Living Environments,
2007, pp. 83-88.
[15] S. Helal et al., “Drishti: An Integrated Navigation
System for Visually Impaired and Disabled”, in
Proc. 5th Int. Symposium on Wearable Computers,
2001, pp. 149-156.
[16] S. Willis and S. Helal, “RFID Information Grid
for Blind Navigation and Wayfinding”, in Proc.
9th IEEE Int. Symposium on Wearable Computers,
2005, pp. 34-37.
[17] United for Sight, “Global Eye Health Statistics”, http://www.uniteforsight.org/eye_stats.php, last visited February 18, 2009.
... This specific configuration satisfies the basic requirements to the assistive devices for the B&VI spatial cognition at the industrial facilities. This functionality was designed via interviewing of [11,[16][17][18][19][20]) and Internet (e.g. [21][22][23]) resources. ...
... In [16], the behavior of B&VI is discussed when they explore unknown places. It helps to build a more appropriate interface and dialogue system. ...
Conference Paper
Industry 4.0 technologies simplify the blind and visually impaired (B&VI) people employment and make their work conditions B&VI friendly. The developed soft-/hardware complex uses wearable Raspberry Pi 3 B microcomputer and the Bytereal iBeacon fingerprinting to uniquely identify the B&VI location at the three-workroom industrial facilities of 40 m2.
... This specific configuration satisfies the basic requirements to the assistive devices for the B&VI spatial cognition at the industrial facilities. This functionality was designed via the [16][17][18][19][20]) and Internet (e.g. [21][22][23]) resources. ...
... In [16], the behavior of B&VI is discussed when they explore unknown places. It helps to build a more appropriate interface and dialogue system. ...
Preprint
Full-text available
Nowadays, Industry 4.0 technologies simplify the blind and visually impaired (B&VI) employment and make the work conditions more B&VI friendly. The interviewing of B&VI and the analysis of literature and Internet resources show that two features must be implemented in the assistive equipment at least – detection of unexpected obstacles in front of the B&VI on the distances of up to 1 m and the B&VI localization. The developed soft-/hardware uses wearable Raspberry Pi 3 B microcomputer with an ultrasonic range sensor HC-SR04 to solve the first problem and the Bytereal iBeacon fingerprinting to solve the second one. The presented approach was successfully tested at the three-workroom industrial facilities – the B&VI detected obstacles and the blind companion remotely localized the B&VI via the iBeacon fingerprinting, HTML dynamic website, and MQTT protocol.
... Most other studies of this type used blindfolded, sighted individuals and did not directly compare speech and tactile outputs. Prior research has specifically cited the lack of user-centered design as a barrier in the successful implementation of these devices by the visually impaired population [12][13][14][15][16][17][18]. And, for that reason our subject population included individuals who were blind. ...
... Although the use of virtual sounds in providing simple guiding cues has been demonstrated as superior to synthesized speech in minimizing cognitive load [10,21], the infancy of its deployment to bone conduction headphones deemed it impractical for our purposes [22]. In addition, synthesized speech provides an expressiveness [18] that blind subjects familiar with common mobile platforms are already comfortable. ...
Article
Full-text available
Sensory substitution devices engage sensory modalities other than vision to communicate information typically obtained through the sense of sight. In this paper, we examine the ability of subjects who are blind to follow simple verbal and vibrotactile commands that allow them to navigate a complex path. A total of eleven visually impaired subjects were enrolled in the study. Prototype systems were developed to deliver verbal and vibrotactile commands to allow an investigator to guide a subject through a course. Using this mode, subjects could follow commands easily and navigate significantly faster than with their cane alone (p <0.05). The feedback modes were similar with respect to the increased speed for course completion. Subjects rated usability of the feedback systems as “above average” with scores of 76.3 and 90.9 on the system usability scale.
... This is due to the problems manifested by the devices, when used outside the scope of the laboratory (Dakapoulos & Bourbakis, 2010;Quinones et al., 2011). These problems are excessive size, weight, battery life, reliability, difficult to use and price (Fok et al., 2011;Nicolau et al., 2009). Another significant limitation evidenced by the ETA studies is the lack of user-centered designs. ...
Article
One of the challenges faced by blind persons to achieve optimal mobility is the detection and avoidance of obstacles located in their travel path. Besides the widely used white cane, alternative or complementary devices have been developed, such as electronic aids that provide feedback about the environment. However, the devices available have been unable to provide an optimal solution with widespread acceptance, motivating the present work. The eBAT (electronic Buzzer for Autonomous Travel) is designed to offer optimal protection and employs the user’s own mobile phone for easier use and reduced manufacturing costs. For this work, a group of 25 blind individuals was used to validate the eBAT based on the single-subject with reversal method (ABA study). The results show a significant decrease in the number of involuntary contacts in an unknown travel path between the first phase of the study, which did not involve the eBAT, and the second, where it was used. When the device was again removed in the third phase, the number of contacts rose. We may therefore conclude that the eBAT fills an important gap in mobility aids for blind people, yielding a clear benefit by reducing the participants’ feeling of insecurity.
... Often examples are mentioned [2,7,13], but the underlying grammar is not explained and it is unclear whether such a grammar exists at all. Nicolau et al [10,11] describes approaches to a grammar used for indoor purposes. They analyzed how PVIs verbalize routes. ...
Conference Paper
Pedestrian navigation systems are rarely accessible or suit the needs of persons with visual impairments. They usually lack a standardized grammar for their speech instructions, forcing users to learn new types of instructions for each new system. Thus, we propose (1) a German grammar with syntax rules and vocabulary for mobile pedestrian navigation systems that take into account the special requirements of people with visual impairments. (2) a set of rules for specifying what should be spoken and when, given GPS accuracy in a city [18]. We describe (3) the methodology used to obtain the grammar as well as (4) a qualitative evaluation with orientation and mobility experts and with people with visual impairments who deployed our grammar during a user study. Our approach is the first of its kind, as there is no such grammar neither for German nor for English, as far as we know. It serves as a contribution to standardize pedestrian navigation speech instructions for people with visual impairments.
... 72 Vocabulary used in presenting the information should be unambiguous in nature and follow regular expression with focus on action, direction, reference to distance/time, side and finally some local object like ''turn right after passing the pillar on your left''. 73 The information requirements may vary significantly from person to person. Type of information that system can provide must be available as a choice to meet individuals' needs. ...
Article
Full-text available
This work systematically reviews the assistive technology solutions for pedestrians with visual impairment and reveals that most of the existing solutions address a specific part of the travel problem. Technology-centered approach with limited focus on the user needs is one of the major concerns in the design of most of the systems. State-of-the-art sensor technology and processing techniques are being used to capture details of the surrounding environment. The real challenge is in conveying this information in a simplified and understandable form especially when the alternate senses of hearing, touch, and smell have much lesser perception bandwidth than that of vision. A lot of systems are at prototyping stages and need to be evaluated and validated by the real users. Conveying the required information promptly through the preferred interface to ensure safety, orientation, and independent mobility is still an unresolved problem. Based on observations and detailed review of available literature, the authors proposed that holistic solutions need to be developed with the close involvement of users from the initial to the final validation stages. Analysis reveals that several factors need serious consideration in the design of such assistive technology solutions.
Chapter
Smart assistive devices for blind and visually impaired (B&VI) people are of high interest today since wearable IoT hardware became available for a wide range of users. In the first project, the Raspberry Pi 3 B board measures a distance to the nearest obstacle via ultrasonic sensor HC-SR04 and recognizes human faces by Pi camera, OpenCV library, and Adam Geitgey module. Objects are found by Bluetooth devices of classes 1-3 and iBeacons. Intelligent eHealth agents cooperate with one another in a smart city mesh network via MQTT and BLE protocols. In the second project, B&VIs are supported to play golf. Golf flagsticks have sound marking devices with a buzzer, NodeMcu Lua ESP8266 ESP-12 WiFi board, and WiFi remote control. In the third project, an assistive device supports the orientation of B&VIs by measuring the distance to obstacles via Arduino Uno and HC-SR04. The distance is pronounced through headphones. In the fourth project, the soft-/hardware complex uses Raspberry Pi 3 B and Bytereal iBeacon fingerprinting to uniquely identify the B&VI location at industrial facilities.
Article
In the new digital era, workplaces are constantly changing to incorporate, e.g. new information and communication technologies (ICTs), as well as ergonomic features that attempt to improve the wellbeing of workers. Such technological advances, along with globalisation and demographic changes (i.e. ageing populations, falling birth rates and migration) have modified the world of work that we used to know: organisations have increasingly capitalised on ideas, creativity and potential contributions of their employees (Burke and Ng 2006). Despite this, the Employment Forum on Disability (EFD 2008) highlighted that some changes (e.g. computers, work stations and training), which should make working conditions easier for employees, often create greater barriers for workers with a disability. In this context, Greisler and Stupak (2002) indicate that, for example, many industrialised countries see technological progress ‘as a ready means through which governments can address issues of social exclusion’. However, inaccessible technologies may be a cause of major exclusion in the workplace for disabled workers who cannot interact with such technologies (Foster 2011) and with the built environment.
Article
Full-text available
Many navigation systems for visually impaired people have been developed, but few can provide dynamic interactions and adaptability to changes. The aim of this working life project is to create an innovative navigation service solution that will improve the quality of life of the visually impaired. For example, a university campus might be such a large and rambling place that visually impaired people can not always find the services or rooms they want to. Therefore, we are devising new service solutions that exploit the technical features of modern WLAN systems in an innovative way. This paper describes the LaureaPOP system that focuses on the indoor navigation service design in a wireless local area network environment.
Conference Paper
Full-text available
We present a design of a rich multimodal interface for mobile route guidance. The application provides public transport information in Finland, including support for pedestrian guid- ance when the user is changing between the means of trans- portation. The range of input and output modalities include speech synthesis, speech recognition, a fisheye GUI, haptics, contextual text input, physical browsing, physical gestures, non-speech audio, and global positioning information. To- gether, these modalities provide an interface that is accessible for a wide range of users including persons with various lev- els of visual impairment. In this paper we describe the func- tional aspects and the design of the interface of our publicly available prototype system. Index Terms : speech interfaces, multimodal interfaces, mo- bile applications, accessibility
Article
Full-text available
A path-following experiment, using a global positioning system, was conducted with participants who were legally blind. On- and off-course confirmations were delivered by either a vibrotactile or an audio stimulus. These simple binary cues were sufficient for guidance and point to the need to offer output options for guidance systems for people who are visually impaired.
Book
Regular expressions are a central element of UNIX utilities like egrep and programming languages such as Perl. But whether you're a UNIX user or not, you can benefit from a better understanding of regular expressions since they work with applications ranging from validating data-entry fields to manipulating information in multimegabyte text files. Mastering Regular Expressions quickly covers the basics of regular-expression syntax, then delves into the mechanics of expression-processing, common pitfalls, performance issues, and implementation-specific differences. Written in an engaging style and sprinkled with solutions to complex real-world problems, Mastering Regular Expressions offers a wealth information that you can put to immediate use. Regular expressions are an extremely powerful tool for manipulating text and data. They are now standard features in a wide range of languages and popular tools, including Perl, Python, Ruby, Java, VB.NET and C# (and any language using the .NET Framework), PHP, and MySQL. If you don't use regular expressions yet, you will discover in this book a whole new world of mastery over your data. If you already use them, you'll appreciate this book's unprecedented detail and breadth of coverage. If you think you know all you need to know about regular expressions, this book is a stunning eye-opener. As this book shows, a command of regular expressions is an invaluable skill. Regular expressions allow you to code complex and subtle text processing that you never imagined could be automated. Regular expressions can save you time and aggravation. They can be used to craft elegant solutions to a wide range of problems. Once you've mastered regular expressions, they'll become an invaluable part of your toolkit. You will wonder how you ever got by without them. Yet despite their wide availability, flexibility, and unparalleled power, regular expressions are frequently underutilized. Yet what is power in the hands of an expert can be fraught with peril for the unwary. Mastering Regular Expressions will help you navigate the minefield to becoming an expert and help you optimize your use of regular expressions. Mastering Regular Expressions , Third Edition, now includes a full chapter devoted to PHP and its powerful and expressive suite of regular expression functions, in addition to enhanced PHP coverage in the central "core" chapters. Furthermore, this edition has been updated throughout to reflect advances in other languages, including expanded in-depth coverage of Sun's java.util.regex package, which has emerged as the standard Java regex implementation. Topics include: A comparison of features among different versions of many languages and tools How the regular expression engine works Optimization (major savings available here!) Matching just what you want, but not what you don't want Sections and chapters on individual languages Written in the lucid, entertaining tone that makes a complex, dry topic become crystal-clear to programmers, and sprinkled with solutions to complex real-world problems, Mastering Regular Expressions , Third Edition offers a wealth information that you can put to immediate use. Reviews of this new edition and the second edition: "There isn't a better (or more useful) book available on regular expressions." 
--Zak Greant, Managing Director, eZ Systems "A real tour-de-force of a book which not only covers the mechanics of regexes in extraordinary detail but also talks about efficiency and the use of regexes in Perl, Java, and .NET...If you use regular expressions as part of your professional work (even if you already have a good book on whatever language you're programming in) I would strongly recommend this book to you." --Dr. Chris Brown, Linux Format "The author does an outstanding job leading the reader from regex novice to master. The book is extremely easy to read and chock full of useful and relevant examples...Regular expressions are valuable tools that every developer should have in their toolbox. Mastering Regular Expressions is the definitive guide to the subject, and an outstanding resource that belongs on every programmer's bookshelf. Ten out of Ten Horseshoes." --Jason Menard, Java Ranch
Article
In this paper we examine earcons, which are audio messagesused in the user-computer interface to provide information andfeedback to the user about computer entities. (Earcons includemessages and functions, as well as states and labels.) We identifysome design principles that are common to both visual symbols andauditory messages, and discuss the use of representational andabstract icons and earcons. We give some examples of audio patternsthat may be used to design modules for earcons which then may beassembled into larger groupings called families. The modules aresingle pitches or rhythmicized sequences of pitches calledmotives. The families are constructed about related motivesthat serve to identify a family of related messages. Issuesconcerned with learning and remembering earcons are discussed.
Conference Paper
People with severe visual impairment need a means of remaining oriented to their environment as they move through it. Three wearable orientation interfaces were developed and evaluated toward this purpose: a stereophonic sonic guide (sonic "carrot"), speech output, and shoulder-tapping system. Street crossing was used as a critical test setting in which to evaluate these interfaces. The shoulder-tapping system was found most universally usable. Considering the great variety of co-morbidities within this population, the authors concluded that a combined tapping/speech interface would provide usability and flexibility to the greatest number of people under the widest range of environmental conditions.
Conference Paper
Drishti is a wireless pedestrian navigation system. It integrates several technologies including wearable computers, voice recognition and synthesis, wireless networks, Geographic Information System (GIS) and Global positioning system (GPS). Drishti augments contextual information to the visually impaired and computes optimized routes based on user preference, temporal constraints (e.g. traffic congestion), and dynamic obstacles (e.g. ongoing ground work, road blockade for special events). The system constantly guides the blind user to navigate based on static and dynamic data. Environmental conditions and landmark information queried from a spatial database along their route are provided on the fly through detailed explanatory voice cues. The system also provides capability for the user to add intelligence, as perceived by the blind user, to the central server hosting the spatial database. Our system is supplementary to other navigational aids such as canes, blind guide dogs and wheel chairs.
Conference Paper
We describe a complete hardware/software system, dubbed Universal Real-Time Navigational Assistance (URNA), which enables communication of relevant location-aware information to a blind person carrying a Bluetooth-enabled cell phone. Although URNA can be used for a number of different applications (e.g., an information kiosk at a shopping mall or public transit information at a bus stop), we concentrate on the challenging case of an urban intersection. Information provided to the user as he or she approaches the intersection includes a description of the intersection topology and real-time notification of the state of the traffic lights. The main FPGA- based control board (NavCon) interfaces with a traffic controller and with the Bluetooth modules, which are each mounted atop the intersection's pedestrian heads (pedheads) - the lights signaling a pedestrian when to 'WALK' or 'DON'T WALK'. The cell phone software (PedNav), written in Java 2 Micro Edition (J2ME), uses Text-To-Speech (TTS) for presenting the information transmitted by NavCon to the blind user.
Article
Presenting waypoint navigation on a visual display is not suited for all situations. The present experiments investigate if it is feasible to present the navigation information on a tactile display. Important design issue of the display is how direction and distance information must be coded. Important usability issues are the resolution of the display and its usefulness in vibrating environments. In a pilot study with 12 pedestrians, different distance-coding schemes were compared. The schemes translated distance to vibration rhythm while the direction was translated into vibration location. The display consisted of eight tactors around the user's waist. The results show that mapping waypoint direction on the location of vibration is an effective coding scheme that requires no training, but that coding for distance does not improve performance compared to a control condition with no distance information. In Experiment 2, the usefulness of the tactile display was shown in two case studies with a helicopter and a fast boat.
Article
In this article we examine earcons, which are audio messages used in the user-computer interface to provide information and feedback to the user about computer entities. (Earcons include messages and functions, as well as states and labels.) We identify some design principles that are common to both visual symbols and auditory messages, and discuss the use of representational and abstract icons and earcons. We give some examples of audio patterns that may be used to design modules for earcons, which then may be assembled into larger groupings called families. The modules are single pitches or rhythmicized sequences of pitches called motives. The families are constructed about related motives that serve to identify a family of related messages. Issues concerned with learning and remembering earcons are discussed.