Sensors 2019, 19, 3404; doi:10.3390/s19153404 www.mdpi.com/journal/sensors
Review
Navigation Systems for the Blind and Visually
Impaired: Past Work, Challenges,
and Open Problems
Santiago Real* and Alvaro Araujo
B105 Electronic Systems Lab, ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, 28040 Madrid, Spain
* Correspondence: sreal@b105.upm.es; Tel.: +34-91-0672-244
Received: 18 July 2019; Accepted: 30 July 2019; Published: 2 August 2019
Abstract: Over the last decades, the development of navigation devices capable of guiding the blind
through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s
objective is to provide an updated, holistic view of this research, in order to enable developers to
exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will be
briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids”
and early research on sensory substitution or indoor/outdoor positioning, to recent systems based
on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the
main points of criticism of previous approaches. Finally, several technological achievements are
highlighted as they could underpin future feasible designs. In line with this, smartphones and
wearables with built-in cameras will then be indicated as potentially feasible options with which to
support state-of-the-art computer vision solutions, thus allowing for both the positioning and
monitoring of the user’s surrounding area. These functionalities could then be further boosted by
means of remote resources, leading to cloud computing schemas or even remote sensing via urban
infrastructure.
Keywords: assisting systems; navigation systems; perception; situation awareness; visually
impaired
1. Introduction
Recent studies on global health estimate that 217 million people suffer from visual impairment,
and 36 million from blindness [1]. Those affected see their autonomy jeopardized in many everyday tasks, especially those that involve moving through an unknown environment.
Generally, individuals rely primarily on vision to know their own position and direction in the
environment, recognizing numerous elements in their surroundings, as well as their distribution and
relative location. Those tasks are usually grouped under the categories of “orientation” or
“wayfinding,” while the capability to detect and avoid nearby obstacles relates to “mobility.” A lack
of vision heavily hampers the performance of such tasks, requiring a conscious effort to integrate
perceptions from the remaining sensory modalities, memories, or even verbal descriptions. Past work
described this as a “cognitive collage” [2].
In this regard, a navigation system’s purpose is to provide users with required and/or helpful data to get to a destination point, monitoring their position on previously modeled maps. As we will
see, researchers working in this field have yet to find effective, efficient, safe, and cost-effective
technical solutions for both the outdoor and indoor guidance needs of blind and visually impaired
people.
Nevertheless, in recent years, we have seen unprecedented scientific and technical
improvements, and new tools are now at our disposal to face this challenge. Thus, this study was
undertaken to re-evaluate the perspective of navigation systems for the blind and visually impaired
(BVI) in this new context, attempting to integrate key elements of what is frequently a disaggregated
multidisciplinary background.
Given the purpose of this work, its content and structure differ from recent reviews on the same
topic (e.g., [3,4]). Section 2 presents a historical overview that gathers together previous systems in
order to present a novel survey of the principles, key points, strategies, rules, and approaches of
assistive device design that are currently applicable. This is particularly important in the field of non-visual human–machine interfaces, as the perceptual and cognitive processes remain the same. Next,
Section 3, on related innovation fields, reviews several representative devices to introduce a set of
technical resources that are yet to be fully exploited, e.g., remote processing techniques, simultaneous
localization and mapping (SLAM), wearable haptic displays, etc. Finally, Sections 4 and 5 include a
brief introduction to user-centered design approaches, and a discussion of the currently available
technical resources, respectively.
2. Background on Guidance and Navigation Systems for the Visually Impaired
This section describes general aspects of the classic design of guidance and navigation systems
for the visually impaired from a historical perspective, in order to show their development and results, as well as to provide future designers with an overall view of device enhancements, which have taken
place through a process of trial and error. As stated before, it is important to take into consideration
that some of these classic approaches and their impact on the targeted public could even be applicable
to current device design.
2.1. The Beginnings of Electronic Travel Aids
Over the last 70 years, researchers have worked on various prototypes of electrical obstacle
detection devices for BVI people, known as electronic travel aids (ETAs). This work was largely driven by the rapid development of radar and sonar systems spurred by the Second World War.
Some of the most representative prototypes are Leslie Kay’s sonar-based Sonic Torch and Binaural
Sonic Guide. Both of these will be described in Section 2.2.
The main reason why most of these first devices worked with ultrasonic signals instead of optical or radio-frequency ones seems to lie in propagation speed [5]: the large reflection delay of sound waves (milliseconds for objects a few meters away, easily measured with simple electronics) allowed them to be used for distance measurements (sonar). On the other hand, systems like Laser
Cane [6] resorted to techniques such as optical triangulation that resulted in less precision.
Other renowned sonar-based devices developed in the 1960s and 1970s were Russell’s
PathSounder [7], the Nottingham Obstacle Detector [8] (Blind Mobility Research Unit, Nottingham
University) and the Mowat Sensor [9]. All of them had similar characteristics, differing mainly in
beam width and user interface, where the latter used sounds and/or vibrations to inform the user
about the presence or absence of obstacles and, sometimes, even allowed them to make range
estimations.
Later, in the 1980s, ETAs gradually began to add processing capabilities to their designs, allowing them to further expand, filter, or make judgements about the sensors’ collected data (e.g., Sonic Pathfinder [10]). Also, user interfaces were improved by making them more efficient and user-friendly (e.g., by including recorded speech [11]).
2.2. Sensory Substitution Devices
Sensory substitution, which derives from neuroplasticity theory, refers to the capability of the
brain to assimilate information belonging to one specific sensory channel through another. Thus, it
rapidly became a complementary field to the abovementioned ETA development.
In this context, Paul Bach-y-Rita started collaborating with Carter Compton Collins et al. in 1964
to develop systems capable of making blind individuals able to perceive visual information through
haptics [12]. Their first device projected images captured by a camera onto the skin through vibrations
using a matrix of 20 × 20 haptic actuators, as displayed in Figure 1 [13]. Later surveys showed that
both the blind and the blindfolded could “determine the position of visual objects, their relative size,
shape, number, orientation, direction, and rate of movement,” and also “track moving targets.” However, recognition took a long time, the skin on the back could not handle a higher “vibratory-image” resolution [14], and the large amount of data encountered in outdoor tests easily overloaded the user.
Figure 1. The “Tactile Television” by Paul Bach-y-Rita et al.
Despite its limitations, this project led to a number of similar systems culminating with
BrainPort, a version currently available on the market [15]. This device is based on a tongue
electrotactile interface: the tactile stimulus was artificially induced with surface currents in the tongue
targeting each area’s corresponding afferent nerves. Some years later (2016), a study was conducted
to evaluate the functional performance of BrainPort in profoundly blind individuals [16], with
encouraging results from object recognition and basic orientation and mobility tasks.
Most subsequent visual–tactile sensory substitution devices kept mapping point-for-point
camera images into haptic replicas by means of mechanical elements or electrotactile interfaces (e.g.,
Forehead Retina System [17], HamsaTouch [18]). On the other hand, systems like the Electro-Neural
Vision System (ENVS) or Haptic Radar [19], which will both be further described in later sections,
focused on providing users with distance measurements from nearby obstacles so as to give them a
rough, yet intuitive, notion of their surroundings.
Conversely, visual–auditory sensory substitution has experienced far more improvements over
the years. The first devices developed were Kay’s Sonic Torch [20] and Sonic Guide [21]. These
devices moved the sonar reflected signal spectrum within the hearing range, leaving the task of
feature recognition through sound up to the user. Some of them could even identify elements such
as poles, vegetation, etc. Again, consequently, large amounts of data overloaded the user; a fact that
led to solutions such as using a narrower beam width to attenuate background noise.
Later visual–auditory designs focused on mapping images and/or proximity sensor readings
into sounds in a way that could be easily deciphered by the brain. Leaving aside projects like UMS’s
NAVI [22] or Hokkaido University’s mobility aid, which tried to enhance the Sonic Guide original
design by replicating the echolocation of bats [23], one of the most well-known projects was Peter B.
L. Meijer’s vOICe [24].
Throughout the years since its development, vOICe has been studied from different
perspectives, from achieved visual acuity [25] to its potential when integrated with 3D cameras. Also,
some user interviews revealed what seems to be acquired synesthesia, as users reported recovering visual perceptions such as depth [26].
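To make this kind of image-to-sound mapping more concrete, the following minimal sketch scans an image column by column, mapping vertical position to pitch and brightness to loudness, in the spirit of vOICe-like encodings; the frequency range, scan duration, and tone spacing are illustrative assumptions, not the actual device’s settings.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=22050,
                   f_min=500.0, f_max=5000.0):
    """Very simplified vOICe-style scan: columns -> time,
    rows -> pitch (top = high), brightness -> loudness."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    # One sinusoid per row, spaced on a log-frequency scale (top row = highest pitch).
    freqs = np.logspace(np.log10(f_min), np.log10(f_max), rows)[::-1]
    audio = []
    for c in range(cols):
        column = image[:, c].astype(float) / 255.0           # brightness 0..1
        tones = np.sin(2 * np.pi * freqs[:, None] * t)        # (rows, samples)
        audio.append((column[:, None] * tones).sum(axis=0))   # brightness-weighted sum
    audio = np.concatenate(audio)
    return audio / (np.abs(audio).max() + 1e-9)                # normalize

# Example: a bright diagonal line becomes a pitch sweep over the one-second scan.
img = np.zeros((64, 64), dtype=np.uint8)
np.fill_diagonal(img, 255)
waveform = image_to_sound(img)
```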
Lastly, another approach exemplified by La Laguna University’s Virtual Acoustic Space [27]
resorts to human hearing to recognize some 3D space characteristics from room reverberation, sound
tone, etc. By means of stereo-vision 3D recording and head-related transfer function (HRTF)
processed sounds, the device could reproduce virtual sound sources located over the captured
surfaces through the user’s headphones. As the researchers stated: “the musical effect of hearing this
stimulus could be described as perceiving a large number of raindrops striking the surface of a pane
of glass.” Later tests showed how blind subjects could make use of these sounds to build a basic
schematic diagram of their surroundings [27].
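As a rough illustration of how a virtual sound source direction can be rendered over headphones, the sketch below applies only interaural time and level differences; systems such as Virtual Acoustic Space use measured HRTFs instead, so this is a deliberately coarse approximation with assumed head radius and attenuation values.

```python
import numpy as np

SAMPLE_RATE = 44100
HEAD_RADIUS = 0.0875      # meters, assumed average head radius
SPEED_OF_SOUND = 343.0    # m/s

def spatialize(mono, azimuth_deg):
    """Crude binaural rendering: interaural time difference (Woodworth's
    spherical-head approximation) plus a simple level difference.
    Positive azimuth = source to the right."""
    az = np.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SAMPLE_RATE))
    # Simple interaural level difference: attenuate the far ear (up to ~6 dB).
    far_gain = 10 ** (-abs(azimuth_deg) / 90.0 * 6 / 20)
    delayed = np.concatenate([np.zeros(delay), mono]) * far_gain
    direct = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:          # source on the right: left ear delayed/attenuated
        left, right = delayed, direct
    else:
        left, right = direct, delayed
    return np.stack([left, right], axis=1)

# Example: a short "raindrop"-like click placed 45 degrees to the right.
t = np.arange(0, 0.05, 1 / SAMPLE_RATE)
click = np.sin(2 * np.pi * 2000 * t) * np.exp(-t * 80)
stereo = spatialize(click, 45)
```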
2.3. Navigation Systems for the Visually Impaired
From their birth until 1985, ETAs were not well received by the public: studies such as [5] from
the USA, one of the leading countries in the research and development of devices for the BVI, stated
that no more than 3000–3500 devices were sold. Even that number does not seem accurate, as “very
little is known about who purchased these ETAs.”
Those first designs focused mainly on providing obstacle avoidance support. Using them as
stepping stones, numerous devices were then developed as enhanced versions in terms of weight,
cost, consumption, reliability, etc. For instance, Bat K Cane [28] is a commercial sonar-based ETA
designed by Leslie Kay et al. after SonicGuide. Other similar examples are UltraCane and MiniGuide
[28,29], a built-in cane and hand-held device, respectively. These make use of vibrations to provide
the user with adapted data from the ultrasound transducers.
Alternatively, sensory substitution device research opted for conveying visual perceptions to the
BVI mainly via acoustics or haptics. Nevertheless, as mentioned earlier, the large cognitive load
limited the amount of data that could be assimilated efficiently by the user, which consequently reduced the overall impact, especially regarding advances related to mobility.
Hence, neither ETAs nor general-purpose sensory substitution devices could help users reach a
destination by themselves. That deficiency drove researchers to develop navigation systems that are
specially adapted for the BVI, with the first known devices dating from the 1970s and 1980s.
Those devices rapidly incorporated computer-modeled maps of the environment, and required
several built-in sensors and landmarks for keeping track of their position (e.g., [30]). In particular, the
use of odometry became widely adopted, as can be seen in projects like Michigan University’s
GuideCane [31], preceded by NavBelt [32]. These systems made use of both position and ultrasound
transducer data to guide BVI users to a nearby destination while avoiding obstacles. However, with
the proliferation of portable, lightweight devices, solutions that did not require continuous floor
contact were preferred. Probably one of the technical advances that had the most impact in this
context would be the arrival of global navigation satellite systems (GNSS), specifically the Global
Positioning System (GPS), which became fully operational in April 1995.
In 1985, both C. C. Collins [12] and Jack M. Loomis [33] proposed applying GPS for BVI
guidance. Later, by 1993, Loomis et al. developed their first prototype of the UCSB Personal Guidance
System (UCSB PGS), a GPS-based portable device conceived as a complement to the cane that could
lead the user on an outdoor route, though it did not offer any obstacle avoidance support.
The UCSB PGS project focused on designing the user interface and the geographic information
system (GIS) before finally ending in 2008. Various modalities of haptic and acoustic input/outputs
were tested [34], from speech interfaces to a hand-held tool made to convey descriptions of the
surroundings according to which direction it was being pointed in. This solution is analogous to that
of Talking Signs [35]. Among the output modalities, the researchers prioritized simulating virtual
sound sources along the route, similar to what was previously seen in Virtual Acoustic Space, as it
gave much better results in terms of cognitive load, time to complete the course, distance traveled,
etc. [36], and it was highly rated in after-test surveys. Also, open headphones allowed the user to keep hearing the surroundings, which mostly addressed one of the most significant inconveniences that had discouraged its usage in the early stages.
From then onwards, numerous systems made for BVI people’s guidance relied on GNSS
measurements, mostly GPS supported by dead reckoning navigation and GNSS augmentation
technology. UCSB PGS is one of the first examples, reporting an accuracy of nearly 1 m by the
combination of differential GPS plus inertial navigation. Some years later, Trekker, BrailleNote GPS,
and other related products rapidly became available on the market. Furthermore, projects like the
European Tormes and PERNASVIP [37], with the contribution of the European Space Agency,
pursued enhanced GNSS positioning for BVI people guidance. For example, PERNASVIP’s technical
objectives included locating “visually disabled pedestrians in urban environments within a 4-m
accuracy, 95% of the time, with less than 15 s of the time to first fix.” Regrettably, mainly due to
multipath errors in some urban areas, these specifications were only partially achieved.
As can be seen, locating technology became the backbone of navigation systems. Therefore,
because of limited coverage by GNSS—e.g., indoor signal obstruction—and inertial navigation
accumulated error, complementary systems were needed to keep track of users along their route.
Some of the preferred solutions were networks consisting of:
Ultrasound transmitters: As an illustrative case, the University of Florida’s Drishti project [38]
(2004) applied this kind of technology to BVI people guidance, combining differential GPS
outdoors and an ultrasound transmitter infrastructure indoors. As for the latter, a mean error of approximately 10 cm (22 cm maximum) was observed. However, the accuracy may be easily
degraded due to signal obstruction, reflection, etc.
Optical transmitters: By 2001, researchers from Tokyo University developed a BVI guidance
system made of optical beacons, which were installed in a hospital [39]. The transmitters were
positioned on the ceiling, with each one sending an identification code associated to its position.
The equipment carried by the users read the code in range, then reproduced recorded messages
accordingly. Another system worth mentioning belongs to the National University of Singapore
[40] (~2004). This time the position was inferred by means of fluorescent lights, each of these
lights having its own code to identify the illuminated area. As can be seen, this line of work has
similar features to those of Li-Fi.
RFID tags: Many of the technical solutions for positioning services were based on an
infrastructure of beacons, be they radio frequency, infrared, etc. However, the subsequent costs
of installation and maintenance, or their rigidity against changes in the environment (e.g.,
furniture rearrangement), were points against their implementation. To make up for these
problems, RFID tag networks were proposed. Whereas active tag costs are usually in the tens of dollars, passive tags cost only tens of cents. Also, as batteries are discarded, the network
lifetime increases while maintenance costs are lowered, thus making them attractive solutions
for locating systems. Even though their range only covers a few meters, range-measuring techniques based on received signal strength (RSS), received signal phase (RSP), or time of arrival (TOA) could be applied [41] (a minimal RSS-to-distance sketch is given after this list). However, the estimation of the user’s position is usually that of the
tag in range. As an example of this line of work, the University of Utah launched an indoor
Robotic Guide project for the visually impaired in 2003 [42]. One year later, their prototype
collected positioning data from a passive RFID network with a range of 1.5 m, effectively guiding
test users along a 40-m route. By 2005, their installation in shopping carts was proposed [43]. In
line with this, the PERCEP’ project [44] provided acoustic guidance messages by means of a
deployment of passive RFID tags and an RFID reader embedded in a glove. RFID positioning
would be widely adopted in the following years, becoming one of the classic solutions. Nevertheless,
the applications are not only limited to this area. For example, they were also found suitable to
search for or identify distant objects [45].
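As an illustration of the RSS-based ranging mentioned in the RFID item above, the following minimal sketch applies the standard log-distance path-loss model; the reference power, reference distance, and path-loss exponent are assumed values that would have to be calibrated for the actual tags and environment.

```python
def rss_to_distance(rss_dbm, rss_at_d0=-45.0, d0=1.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range from a received signal
    strength reading. All parameters are illustrative and would have to be
    calibrated for the deployed tags and environment."""
    return d0 * 10 ** ((rss_at_d0 - rss_dbm) / (10 * path_loss_exp))

# Example: a reading 12 dB below the 1-m reference suggests roughly 4 m.
print(round(rss_to_distance(-57.0), 1))
```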
Alternatively, another possible implementation is to correlate the data collected by different
sensors with a 3D map of the environment. This was exemplified by the work of Andreas Hub et al. in their successive hand-held [46] and helmet-mounted [47] prototypes.
These devices made use of techniques such as WiFi RSS measurements, inertial navigation, and
stereo-vision for positioning. Furthermore, the data gathered by these sensors were applied to the
recognition of previously modeled elements, e.g., pedestrians. Although error-prone, this
functionality was further enhanced by delimiting the set of possible nearby elements, as some of them were associated with a static or semi-static position (e.g., table, chair, etc.).
From then on, most navigation systems for the BVI would resort to a combination of
technologies, which are usually classified as indoor and/or outdoor solutions. Also, they started to
gather complementary data from external sources through the net.
This can be exemplified by the schematic diagram of the SmartVision project [48] shown in Figure 2. As illustrated in the figure, stereo vision was applied for vision-based positioning and, in subsequent projects, for obstacle recognition functions, although this again resulted in poor performance in terms of reliability, accuracy, etc. [49]. Therefore, the locating system would
effectively rely on external infrastructure (GPS, RFID, Wi-Fi). Positioning data were then combined
with maps and points-of-interest (POI) available on a geographic information system (GIS) server,
and thereafter offered directly to users.
Figure 2. The SmartVision project: a schematic diagram.
From then on, various indoor positioning technologies were tested, some of which were based
on Ultra-Wide Band (UWB) [50,51], passive Infrared Radiation (IR) tags [52], or Bluetooth low energy
(BLE) beacons [53] combined with inertial sensors [54], and even some that exploited the magnetic
signature of a building’s steel frame [55]. Among them, UWB technology stands out mainly because
of its sub-meter accuracy (e.g., 15–20 cm in [50]) and robustness to multipath interference, an issue
inherent to both indoor and outdoor positioning. However, navigation through indoor scenarios
usually does not require sub-meter accuracy due to similar patterns between scenarios, a reduced set
of potentially hazardous elements, or a reduced size of the environment, which eases orientation and
mobility tasks.
Nevertheless, as navigation systems continued their development, and the amount of
information collected for blind navigation grew larger, the need for efficient user interfaces became
even more apparent.
Several classic solutions involved speech, beginning with recorded messages (e.g., Guide Dog
Robot, Sonic Pathfinder); later, speech synthesis and recognition were also gradually incorporated
(e.g., Tyflos [56]). At this point, sensory substitution became an attractive solution for blind
navigation system user interfaces, more so when the user needed the system to rapidly provide detailed information regarding their immediate surroundings, while maintaining a low cognitive load.
In line with this, the ENVS project [57] is another representative example that conveys depth
perceptions through haptics. Again, it makes use of a pair of cameras to capture the 3D environment
and present it to the user as tactile stimuli on their fingers. Distance data were encoded in the pulse
width of electrotactile stimulation signals. If the gloves were aligned with the cameras, it seemed as
if things were being touched at a distance. Furthermore, the tests showed how this solution allowed
users to intuitively assimilate information from 10 virtual proximity sensors (Figure 3) with a
relatively low cognitive load.
Figure 3. The ENVS project.
By 2005, the device incorporated a built-in GPS and compass to allow for outdoor guidance [58].
Orientation data were passed on to the user through the electrotactile gloves, overlapping the
distance-encoding signals.
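A minimal sketch of the distance-to-pulse-width encoding idea used by ENVS-like interfaces is given below; the measurement range and pulse-width limits are illustrative assumptions, not the parameters of the actual prototype.

```python
def distance_to_pulse_width(distance_m, max_range_m=3.0,
                            min_width_us=50, max_width_us=400):
    """Map a proximity reading to an electrotactile pulse width: the closer
    the obstacle, the wider (stronger-feeling) the pulse. Ranges and pulse
    widths are assumed values, not those of the ENVS hardware."""
    d = min(max(distance_m, 0.0), max_range_m)
    closeness = 1.0 - d / max_range_m          # 1.0 = touching, 0.0 = out of range
    return min_width_us + closeness * (max_width_us - min_width_us)

# Example: ten virtual proximity sensors, one per finger.
readings_m = [0.4, 0.8, 1.2, 2.5, 3.0, 3.0, 2.0, 1.0, 0.6, 0.3]
pulse_widths = [distance_to_pulse_width(d) for d in readings_m]
```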
3. Related Innovation Fields
This section focuses on related R&D technological areas that currently benefit from greater
attention and investment, as they could be among the most important contributors to achieving BVI mobility self-sufficiency.
3.1. Mixed Reality
In recent years, virtual and real environments have been slowly breaking down barriers and
becoming closer, e.g., by virtualizing physical objects or an individual’s movement, mixing virtual
and real elements in an immersive scenario, etc.
To form a convincing picture of mixed reality, system latencies of tenths or even hundredths of a second are often required. Specifically, complying with that limitation when
virtualizing features of real elements led to the development of low-latency techniques and
commercial products for recording the three-dimensional environment.
Such circumstances would boost the implementation of functionalities needed for navigation
systems such as obstacle detection and recognition. This would then be exemplified by projects like
NAVI [59], based on Microsoft Kinect.
Soon enough, the high potential of applying computer vision for positioning was further
exploited. Simultaneous localization and mapping (SLAM) technology, which can be found in Google’s Project Tango, allowed for centimeter-level accuracy indoor positioning. Project Tango and related
technologies such as Intel RealSense provided vision positioning solutions, with reported cases of
application in commercially available drones like Yuneec’s Typhoon H. Specifically, the applications
for BVI navigation that had been previously contemplated materialized in the development of
various prototypes. For example, the Smart Cane system [60] used a depth camera and a server for
SLAM processing that allowed for six degrees-of-freedom indoor location, plus obstacle detection
features. Also, ISANA [61] exploited Project Tango for indoor wayfinding and obstacle detection,
using compatible hardware platforms (i.e., Phab 2 or Yellowstone mobile devices) and haptic
actuators embedded in a cane. Analogously, in [62] a novel prototype is described that used Tango
and Unity, a game engine, to capture the user’s movement in a continuously updated virtual replica
of the indoor environment. In addition to wayfinding and mobility assistance, SLAM techniques were
also used for tasks such as face recognition [63].
Another remarkable application of this technology for VI people’s guidance lies in the user
interface. One solution, proposed by Stephen L. Hicks et al. from Oxford University, exploited residual vision by enhancing 3D perceptions with simplified images emphasizing depth (Figure 4)
[64]. They recently tried to access the market with their Smart Specs glasses [65], through the start-up VA-ST.
Alternatively, mixed reality allows users to interact with virtual elements overlapping with their
actual surroundings, thus providing intuitive cues of orientation, distance from and shapes of objects,
etc.
Figure 4. A VA-ST Smart Specs captured image.
The usage of virtual sound sources to guide pedestrians along a route is one of the classic
solutions seen in projects like UCSB PGS, or even Haptic Radar. The latter combined its original IR-
based obstacle avoidance system with virtual sound guidance, which resulted in positive after-test
appraisals [66]. Nevertheless, some criticisms and suggestions were made, mainly in relation to the
area covered by the IR sensors and the vibrational interface.
Also, virtual sounds could be applied not only to guidance, but also to several tasks involving enhanced 3D perception, as previously seen in Virtual Acoustic Space.
Aside from solutions based on sound, virtual tactile elements were also studied, although apparently less extensively. The Virtual Haptic Radar project [67], originating from Haptic Radar, is a representative example. It replaced its predecessor’s IR sensors with the combination of a three-dimensional model of the surroundings plus an ultrasonic-based motion capture system worn by the
user. As described in Figure 5, once the user reached a certain area near the object, warning vibrations
were triggered accordingly.
Figure 5. Virtual Haptic Radar project.
However, one of the main problems hampering tactile-based solutions lies in the available haptic interfaces. Most portable designs seem to resort to mechanical components, thus causing a conflict
between their bulkiness and the subtlety of the induced perceptions. Alternatives such as
electrotactile devices remain experimental so far.
3.2. Smartphones
Over the last decade, smartphones, among other portable devices, have gradually included a
variety of features that would make them resourceful platforms for developers, some of which will
be discussed next.
As a stand-alone device, a smartphone shows a high and rapidly increasing processing capacity
in comparison with its price. Additionally, it incorporates a diverse set of built-in tools and sensors,
like cameras, GNSS modules, accelerometers, gyroscopes, or NFC readers. In addition, close-range
communication via Bluetooth or Wi-Fi further expands the previous assortment of uses, e.g., by
means of external sensors for obstacle detection, high-precision RTK-GNSS modules, etc.
On the other hand, mobile networks keep on improving with each new release, leading to the
usage of remote resources. In accordance with this, cloud computing services are nowadays
commercialized at various levels of abstraction, such as infrastructure (IaaS), platforms (PaaS), or
software (SaaS). Remarkable examples in our line of work, as will later be shown, are artificial vision
SaaS, as offered by Google or Microsoft, providing developers with APIs to get access to Google
Cloud Platform and Microsoft Cognitive Services resources, respectively.
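As an illustration of how such a vision SaaS could be queried from a mobile or wearable client, the sketch below posts a captured frame to the Google Cloud Vision REST endpoint and reads back label annotations; the API key handling, the chosen feature type, and the downstream use of the labels are assumptions made for the example.

```python
import base64
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: obtained from the cloud provider
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

def describe_image(jpeg_bytes):
    """Ask a cloud vision service for labels describing a captured frame."""
    body = json.dumps({
        "requests": [{
            "image": {"content": base64.b64encode(jpeg_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
        }]
    }).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        labels = json.load(resp)["responses"][0].get("labelAnnotations", [])
    return [label["description"] for label in labels]

# Example: the returned labels could then be filtered and read aloud by a
# speech synthesizer, following the Seeing AI/TapTapSee pattern described above.
# descriptions = describe_image(open("frame.jpg", "rb").read())
```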
An additional aspect to be aware of is the acceptance of smartphones specifically by BVI users
[68]. Even before accessibility for handicapped people made its way into software design standards,
as can be seen in Apple’s iOS, mobile phones had progressively become widely adopted for calls or to send text messages. Now, with the generational change, the number of users of these new
technologies has further increased.
In this environment, research on navigation systems for BVI users found a new field to exploit,
e.g., the BLE-based NavCog smartphone application [53] or purely inertial prototypes [69] for indoor
wayfinding. Regarding general-purpose sensory substitution, a few visualauditory systems soon
became publicly available software applications, e.g., EyeMusic [70,71], or even the classic vOICe
[72]. Conversely, visualtactile sensory substitution systems were once again comparatively scarce.
One example would be HamsaTouch, seen in Section 2.2, which recreates Bach-y-Rita’s and Collins
et al.’s prototypes in a smartphone equipped with a haptic electrotactile display (Figure 6b). On the
other hand, applications such as Seeing AI [73] or TapTapSee [74] provide users with verbal
descriptions of captured images, making use of remote processing resources in a cloud computing
schema.
Figure 6. (a) Lazzus; (b) HamsaTouch.
Nevertheless, the focus of attention was placed on GNSS-based outdoor navigation. Next, some
representative examples of available applications are briefly described:
Moovit [75]: a free, effective, and easy-to-use tool that offers guidance on the public transport
network, managing schedules, notifications, and even warnings in real time. It is one of the
assets for mobility tasks recommended by ONCE (National Organization of Spanish Blind
People).
BlindSquare [76]: specifically designed for the BVI, this application conveys the relative location
of previously recorded POIs through speech. It makes use of Foursquare’s and OpenStreetMap’s
databases.
Lazzus [77]: a paid application, again designed for BVI users, which coordinates GPS and built-
in motion capture and orientation sensors to provide users with intuitive cues about the location
of diverse POIs in the surrounding area, even including zebra crossings. It offers two modes of
operation: the 360° mode verbally informs of the distance and orientation to nearby POIs,
whereas the beam mode describes any POI within a virtual field of view in front of the smartphone (a bearing-based sketch of this idea follows this list).
Its main sources of data are Google Places and OpenStreetMap.
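A minimal sketch of the geometry behind such a “beam” mode is shown below: the bearing from the user to each POI is computed and compared with the device heading; the beam width, coordinates, and example location are illustrative assumptions rather than the actual parameters of Lazzus.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (lat1, lon1) to a POI."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def poi_in_beam(user_pos, user_heading_deg, poi_pos, beam_width_deg=60):
    """True if the POI falls inside a virtual field of view centered on the
    direction the device is pointing (the assumed beam width is illustrative)."""
    b = bearing_deg(*user_pos, *poi_pos)
    offset = (b - user_heading_deg + 180) % 360 - 180   # signed angle, -180..180
    return abs(offset) <= beam_width_deg / 2

# Example: a user near Puerta del Sol facing roughly east (heading 90 degrees).
print(poi_in_beam((40.4169, -3.7035), 90, (40.4168, -3.7000)))  # POI to the east -> True
```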
Some of these functionalities are also shared by an increasing number of commercially available
applications, each with specific characteristics and improvements. For example, Seeing AI GPS [78]
includes solutions analogous to the 360° and beam modes of Lazzus, plus pre-journey information;
NearBy Explorer offers several POI notification filters, etc.
3.3. Wearables
So far, bone conduction headphones and smart glasses with a built-in camera have mainly been
used for BVI mobility support. Furthermore, as the size and cost of sensors and microprocessors further decreased, and given the advantages of wearable devices, the development of designs specifically aimed at this population has slowly gained momentum.
Some of the main points in favor of wearable designs include the sensors’ wider field-of-view,
the usage of immersive user interfaces, or users’ request for discreet, hands-free solutions. In Figure
7, some strategic placements of these sensors and interfaces are shown, including a few examples of
market-available products.
Figure 7. Wearables for the BVI: common placements.
Firstly, regarding the sensors’ field-of-view, some devices rely on the user to scan their
surroundings, whereas others resort to intermediary systems that monitor the scene. The first strategy therefore looked for placements that eased “scanning movements,” placing sensors on the wrist (Figure 7B), the head (Figure 7A), or embedded in the cane (Figure 7C).
Specifically, systems corresponding with Figure 7B,C tended to imitate the features of the first ETAs.
This was exemplified by Ultracane, SmartCane (Figure 7C) or Sunu-band [79] (Figure 7B), as all of
them offered obstacle detection functionalities supported by ultrasound proximity sensors via a
vibrational user interface. On the other hand, the third category of wearables (Figure 7A) was usually
seen in camera-based sensory substitution or artificial vision systems, e.g., Seeing AI, Orcam MyEye
[80], BrainPort, or even vOICe.
Conversely, the second strategy generally opted for a wider field-of-view; thus, sensors were often positioned in relatively static and non-occlusive placements all over the torso (red dots in Figure 7).
That was the case with Toyota’s Project Blaid [81], a camera-based, inverted-U-shaped wearable that
rested on the user’s shoulders. Among its functionalities, it pursued object and face recognition, with
an emphasis placed on elements related to mobility such as stairs, signals, etc.
Regarding user interfaces, speech and Braille made up the first solutions for acoustic and tactile
verbal interfaces, coupled with headphones and braille displays. As an example, Figure 7B shows the
“Dot” braille smartwatch.
Other kinds of solutions strived for a reduced cognitive load by means of intuitive guidance
cues, usually exploiting the innate space perception capabilities of touch and hearing. Many examples
have been mentioned in this text, from Virtual Acoustic Space or UCSB PGS to Haptic Radar. Non-
occlusive headphones and vibratory interfaces are some of the devices most commonly used as they
benefit from a low cost, a reduced-weight design, etc., while still being able to generate immersive
perceptions such as virtual sound sources, or the approach to tactile virtual objects, as seen initially
in Haptic Radar, and later in Virtual Haptic Radar.
This latter approach is also found in the Spatial Awareness project, based on Intel RealSense.
The developed prototype conveys distance measurements through the vibration of eight haptic
actuators distributed over the user’s torso and legs.
4. Challenges in User-Centered System Design
As will be discussed, a major flaw in the design of navigation systems for BVI users seems to lie
in a set of reiterated deficiencies concerning the knowledge of the users’ needs, capabilities,
limitations, etc., despite the great amount of work that has accumulated over the last few decades.
Thus, this section will attempt to gather key user-centered design features prior to a further
discussion of system design in Section 5.
One of the first problems faced in the development of assistive technology is the heterogeneity of the targeted public [82]. The assistance required is related to the users’ residual vision, among other circumstances, such as physical or sensory disabilities deriving from the ageing process, that should be noted (81% of the BVI are aged above 49 years [1]). In particular, this section will focus on blindness
as the most severe case of disability, so as to provide the reader with enough data to infer the needs
of specific users.
Several user requirements concerning navigation systems for the blind have often been
addressed. Firstly, regarding the provision of environmental information, some typical features to offer are [5]:
1. “The presence, location, and preferably the nature of obstacles immediately ahead of the
traveller.” This relates to obstacle avoidance support.
2. Data on the “path or surface on which the traveller is walking, such as texture, gradient,
upcoming steps,” etc.
3. “The position and nature of objects to the sides of the travel path,” i.e., hedges, fences, doorways,
etc.
4. Information that helps users to “maintain a straight course, notably the presence of some type
of aiming point in the distance,” e.g., distant traffic sounds.
5. “Landmark location and identification,” including those previously seen, particularly in (3).
6. Information that “allows the traveller to build up a mental map, image, or schema for the chosen
route to be followed.” This point involves the study of what is frequently termed “cognitive
mapping” in blind individuals [83].
Whilst the first ETAs were oriented to the first category of information, solutions that placed
virtual sound sources over POIs easily covered points (4) and (5), and solutions based on artificial
vision could provide data in any category.
One key factor to be aware of in this context is the theory behind the development of sensory
substitution devices, which has been mentioned throughout the text when describing the “cognitive
load” or “intuitiveness” of some user interfaces. At this point, the work in [84] is highlighted as it
introduces the basics.
In the first place, some major constraints to be considered are the difference in data throughput capability between sensory modalities (bandwidth), and the compatibility with higher-nature
cognitive processes [84]. Two respective examples of these constraints would be the overloading of
touch seen in numerous attempts to convey visual perceptions [85], and the inability to decipher
visual representations of sounds, even though vision has comparatively more ‘bandwidth’ than
hearing.
Some other main factors would be the roles of synesthesia and neuroplasticity, or even how
intelligent algorithms can be used to filter the information needed in particular scenarios [84].
Once it was proven that distant elements could be recognized through perceptions induced by visual sensory substitution devices (Section 2.2), thus straying into the field of “distal attribution” (e.g., [84,85]), an ambitious pursuit of general-purpose visual–tactile and visual–auditory devices began. Several recent studies in neuroscience showed the high potential of this field [86,87], as areas of the brain thought to be associated with visual-type tasks, e.g., those involved in shape recognition, showed activity under visually encoded auditory stimulation.
Nevertheless, given the limitations of the remaining senses to collect visual-type information, it
is usually necessary to focus on what users require to carry out specific tasks [88,89].
Lastly, the poor acceptance of past designs by their intended public should be taken into account;
a recent discussion on this topic can be found in [88]. In line with this, an aspect that was recently
taken advantage of is the growing penetration of technology in the daily routines of BVI people, with
an emphasis placed on the usage of smartphones.
Figure 8 shows the growth of mobile phone and computer use, including how many BVI people use these devices to access the Internet, a tendency likely to continue among younger generations. This trend is also reflected in the creation of entities such as Amovil, which promotes the accessibility of these devices for BVI people, or the smartphone-compatible infrastructure of
London’s WayFindr [90] (similar to [91,92]), Bucharest’s Smart Public Transport [93], or Barcelona’s
NaviLens [93], which are oriented to boosting the autonomy of BVI individuals when using public
transportation. In line with this, Carnegie Mellon University’s NavCog, based on a BLE network,
recently added Pittsburgh International Airport to the list of supported locations [94].
Figure 8. Percentages of Spanish BVI users of mobile phones (blue) and computers (orange);
percentage of those who access the Internet (gray), and references to the overall population (green).
Data obtained from INE and [51] (2013).
5. Availability of Technical Solutions
Finally, this last section will delve into some general aspects of potential architectures.
Functional requirements and their feasibility will be discussed according to past experiences, the
available technology, and user-related needs and constraints.
The discussion on this topic will be addressed according to three main functionalities of
navigation systems for the blind, namely positioning systems, environment monitoring and user
interface (Figure 9). The system coordinates the abovementioned modules with complementary data,
such as POIs (e.g., OpenStreetMap), maps, public transportation schedules, etc., which are available via the web.
Figure 9. Architecture proposal for navigation assistance devices (examples included).
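To make the proposed split concrete, the following minimal sketch outlines how the three modules could be coordinated with complementary web data; all class names and interfaces are illustrative assumptions of ours, not an implementation of any cited system.

```python
from dataclasses import dataclass
from typing import List, Protocol, Tuple

@dataclass
class Observation:
    position: Tuple[float, float]      # estimated (lat, lon)
    obstacles: List[str]               # labels of detected nearby elements

class PositioningSystem(Protocol):
    def locate(self) -> Tuple[float, float]: ...

class EnvironmentMonitor(Protocol):
    def detect(self) -> List[str]: ...

class UserInterface(Protocol):
    def present(self, message: str) -> None: ...

class POISource(Protocol):
    def nearby(self, position: Tuple[float, float]) -> List[str]: ...

class NavigationAssistant:
    """Coordinates positioning, environment monitoring, and the user interface,
    enriching them with complementary web data (POIs, maps, schedules)."""
    def __init__(self, positioning, monitor, ui, poi_source):
        self.positioning = positioning
        self.monitor = monitor
        self.ui = ui
        self.poi_source = poi_source   # e.g., an OpenStreetMap/GIS client

    def step(self) -> Observation:
        pos = self.positioning.locate()
        obstacles = self.monitor.detect()
        pois = self.poi_source.nearby(pos)          # complementary web data
        for warning in obstacles:
            self.ui.present(f"Caution: {warning} ahead")
        for poi in pois:
            self.ui.present(f"Nearby: {poi}")
        return Observation(position=pos, obstacles=obstacles)
```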
5.1. Positioning Systems
Focusing on assistance along a route, a navigation system needs positioning data, but its
specifications may differ according to the solution pursued. For example, applications like Lazzus
efficiently indicate the location and nature of POIs with accuracies of about 1 m. On the other hand,
projects that simulate virtual sound sources, such as Virtual Acoustic Space, usually need centimeter-level accuracy, in addition to split-second time responses to match the HRTF output sounds with
head movements. These are typical constraints of current mixed reality applications.
Additionally, the design of navigation systems varies depending on whether it is oriented to
indoor or outdoor environments (see Section 4). This particularly affects positioning techniques,
which can be further classified into portable equipment, e.g., related to dead reckoning navigation
solutions, or external infrastructure that ranges from BLE beacons to GNSS. The technologies to be
applied would then be chosen according to the requirements of the targeted tasks, costs, etc.
Some of the most attractive solutions are those that take advantage of already deployed
infrastructure, which is reflected in the absolute prevalence of GNSS for outdoor location. It could
also be combined with mobile networks, or portable alternatives such as INS and/or the previously
discussed vision positioning. On the other hand, most of the indoor positioning techniques
encountered, including those currently available on the market, require a beacon infrastructure
deployment that easily pushes up costs, whereas usage would be extremely low.
At this point, portable devices for vision positioning show high promise for low-cost positioning,
both in outdoor and indoor environments (Sections 2.2 and 3.1). Additionally, vision-based solutions
could provide data on the users’ surroundings (Sections 2.2, 2.3, 3 and 5.2), and also play an important
role in the design of sensory substitution devices (Sections 2.2, 2.3, 3.1, 3.2, 4 and 5.3).
Whilst most GNSS and/or mobile networks can delimit user location within a few meters even
in indoor scenarios (e.g., 5G [95]), vision positioning further improves this to centimeter-level precision.
Furthermore, the same obstacles that degrade GNSS signals, e.g., buildings or bridges, could make
fine reference points for solutions based on image processing, making up for the accumulated error
characteristic of dead reckoning techniques. Some current drones, like DJI’s Phantom 4, stabilize their
movements through precise location feedback based on this kind of strategy.
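As a rough illustration of how portable dead reckoning can be combined with the occasional absolute fixes discussed above (GNSS or vision-based), the sketch below accumulates step-and-heading position updates and blends in a fix whenever one is available; the step length, blending weight, and flat-Earth coordinate conversion are simplifying assumptions.

```python
import math

METERS_PER_DEG_LAT = 111_320.0   # rough flat-Earth conversion, assumed constant

class DeadReckoner:
    """Step-and-heading dead reckoning, corrected by occasional absolute fixes."""
    def __init__(self, lat, lon, step_length_m=0.7, fix_weight=0.8):
        self.lat, self.lon = lat, lon
        self.step_length_m = step_length_m
        self.fix_weight = fix_weight     # trust placed in an absolute fix

    def on_step(self, heading_deg):
        """Advance one detected step along the current heading."""
        d_north = self.step_length_m * math.cos(math.radians(heading_deg))
        d_east = self.step_length_m * math.sin(math.radians(heading_deg))
        self.lat += d_north / METERS_PER_DEG_LAT
        self.lon += d_east / (METERS_PER_DEG_LAT * math.cos(math.radians(self.lat)))

    def on_fix(self, lat, lon):
        """Blend in an absolute position fix to bound the accumulated drift."""
        w = self.fix_weight
        self.lat = (1 - w) * self.lat + w * lat
        self.lon = (1 - w) * self.lon + w * lon

# Example: a few steps heading north, then a vision/GNSS fix corrects the drift.
dr = DeadReckoner(40.4169, -3.7035)
for _ in range(10):
    dr.on_step(0.0)
dr.on_fix(40.41697, -3.70351)
```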
5.2. Environmental Monitoring
As seen in Section 4, navigation systems for BVI users need to gather specific data about the environment for efficient and safe guidance.
In this context, a first distinction to make is whether any object, feature, etc., in range is fixed to
a specific location, hereinafter referred to as static (e.g., stairways) or dynamic (e.g., pedestrians)
elements.
Static elements could be relatively easy to handle through records of their distribution and
relevant features in shared databases. This would be exemplified by Wayfindr, as the users’ closeness
to BLE beacons triggers guidance cues and notifications of nearby elements. Dynamic elements, on
the other hand, are to be managed with sensors such as cameras, sonar, LiDAR, etc., be they remote
installations or equipment carried by the user.
As for what technology should be used to capture those dynamic elements, it depends on the
specific application. Classic examples are Ultracane and Miniguide sonar-based obstacle detection
devices, or those that are vision or infrared-based, described in Section 3.
Nevertheless, these mobility aids usually face strict constraints of reliability and robustness, as
they could put users in potentially hazardous situations. Following this statement, three alternatives
will be discussed.
Firstly, as opposed to fully autonomous devices, these aids can make use of the users’ judgement. Starting from the premise that raw sensor measurement data do contain what is needed, e.g., to
detect and avoid an obstacle, the issue lies in whether the user could effectively and efficiently
analyze that flow of information. This delves into the domain of sensory substitution and
augmentation, with Virtual Haptic Radar as an example of the potential of extended touch [96] in a
context of mixed reality.
Secondly, not all orientation and mobility tasks require such extreme reliability. Common useful
features could be signal detection and recognition, the detection of nearby pedestrians, etc., most of
which are currently implemented in artificial vision technology. These solutions include precedent
systems going back to Tyflos, the recent and market-available Seeing AI and Orcam MyEye, or
current prototypes such as [97]. Also, the potential of vision-based systems would be even higher in urban areas, as these are designed with great care regarding which elements are visible.
Third and finally, the reliability and robustness of mobility-related tasks can be inherited from
external resources, e.g., by leaning on urban monitoring infrastructures, as seen in Siemens’ InMobs
project.
5.3. User Interface
Once the relevant data for navigation are gathered, they are then passed on to the user. However,
this is one of the critical aspects in the design of products for BVI people, and usually acts as a
bottleneck of the information available in numerous navigation systems.
Speech interfaces are applicable to several tasks, e.g., providing brief descriptions of the user’s surroundings, OCR, etc., as seen in Seeing AI. However, their use involves several constraints and problems. Firstly, speech output may prevent the user from hearing or paying attention to the environment. Simple, short messages are typically preferred, thus limiting the data provided. Secondly, the data
gathered must be analyzed and filtered according to the users’ requirements at each time and place,
a challenge similar to those of autonomous vehicles or drones. Thirdly, spatial cues are often non-
optimal, even in the case of simple left/right indications [34]. Most of these problems could be
extended to other linguistic interfaces (e.g., braille displays).
As for non-linguistic interfaces, the first limitation would be the extremely low data throughput
of hearing and touch in comparison with vision, followed by the need to match the data output with
“higher-nature cognitive processes” [84] (Section 4). Therefore, according to Giudice, Loomis,
Klatzky et al., developers should focus on helping users to perform specific and actually needed tasks,
minimizing the conveyed information, while taking advantage of the “perceptual and cognitive
factors associated with non-visual information processing” [84,88].
These last factors can be exemplified by the natural cross-modal associations observed in the
project vOICe, such as volume-to-brightness and pitch-to-spatial height (see “weak synesthesia” in
[98]). This was even evident in Disney-supported research on color–vibration correspondences [99],
which came from the pursuit of more immersive experiences. Other illustrative cases include
individuals exploiting the spatial-rich information of sound to extreme levels, e.g., the echolocation
techniques shown by Daniel Kish. These techniques might be reminiscent of the first ETAs described in Section 2.1.
Another remarkable aspect to point out is the effect on distal attribution of the correspondence
between body movement and perceptions [100]. For example, in Bach-y-Rita et al.’s visual–tactile
experiments, it was observed that users needed to manipulate the camera themselves to notice the
“contingencies between motor activity and the resulting changes in tactile stimulation” [84].
The use of these proprioception correspondences might be a fundamental element in the design
of future orientation and mobility aids, given the good performance of past projects.
Several of the mentioned projects incorporate mixed-reality-type user interfaces, such as the
virtual sound sources seen in UCSB PGS and Virtual Acoustic Space, or the virtual tactile objects of
Virtual Haptic Radar. Another system worth highlighting is Lazzus, which tracks the smartphone’s
position and orientation to trigger verbal descriptions according to which direction it is being pointed
in. As seen with Talking Signs, these approaches have users’ support [101].
Nevertheless, some of these solutions are also affected by technical limitations. While bone-
conduction earphones and head motion tracking techniques are sufficient for most sound-based
applications, portable haptic interfaces are heavily constrained. Even though haptic displays such as
those commercialized by Blitab could promote tactile-map approaches, portable alternatives are
limited to vibrational interfaces. These devices by no means exploit the full capabilities of touch, thus
hampering further exploration in fields such as the application of extended touch [96] in a context of
mixed reality. However, recent advances might boost the growth of a versatile classic solution known
as “electrotactile.”
This technology, which benefits from low cost, low power consumption, and lightweight design,
encompasses a wide range of virtual perceptions. Nevertheless, it has an insufficient theoretical
foundation in terms of neural stimulation, and several designs have revealed problems related to
poor electrical contact through the skin. This could be partially compensated for by choosing
placements with more adequate electrical conditions, such as the tongue (BrainPort), or by the use of
a hydrogel for better control of the flow of the electrical current (e.g., Forehead Retina System), etc.
Nowadays, BrainPort itself is a market-available device that shows the feasibility of this haptic technology for some applications. In addition, over the years, subsequent prototypes have
strived for various improvements, such as combining electrotactile technology with mechanical stimuli [102,103], stabilizing the transcutaneous electrode–neuron electrical contact with closed-loop designs [104], or micro-needle interfaces [105,106], etc. Furthermore, the neural
stimulation theoretical basis continues to advance through research in related fields, e.g., when
developing myoelectric prostheses that provide a sense of touch via the electrical stimulation of
afferent nerves.
6. Conclusions
Numerous devices have been developed to guide and assist BVI individuals along
indoor/outdoor routes. However, they have not completely met the technical requirements and user
needs.
Most of these unmet aspects are currently being addressed separately in several research fields, ranging from indoor positioning, computation offloading, or distributed sensing, to the analysis of spatial-related perceptual and cognitive processes of BVI people. On the other hand, smartphones and similar tools are rapidly making their way into the daily routines of this population. In this context, old and novel
solutions have become feasible, some of which are currently available in the market as smartphone
applications or portable devices.
In line with this, the present article attempts to provide a holistic, multidisciplinary view of the
research on navigation systems for this population. The feasibility of classic and new designs is then
briefly discussed according to a new architecture scheme proposal.
Author Contributions: Conceptualization, S.R. and A.A.; Methodology, S.R.; Formal Analysis, S.R.; Writing—
Original Draft Preparation, S.R.; Writing—Review & Editing, A.A.; Supervision, A.A.
Funding: This study received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Bourne, R.R.A.; Flaxman, S.R.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.;
Leasher, J.; Limburg, H.; et al. Magnitude, temporal trends, and projections of the global prevalence of
blindness and distance and near vision impairment: A systematic review and meta-analysis. Lancet Glob.
Health 2017, 5, e888–e897.
2. Tversky, B. Cognitive Maps, Cognitive Collages, and Spatial Mental Models; Springer; Berlin, Heidelberg,
Germany; 1993; pp. 14–24.
3. Tapu, R.; Mocanu, B.; Zaharia, T. Wearable assistive devices for visually impaired: A state of the art survey.
Pattern Recognit. Lett. 2018, doi:10.1016/j.patrec.2018.10.031.
4. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status,
challenges, and future directions. Sensors 2017, 17, 565.
5. Electronic Travel Aids; National Academies Press: Washington, DC, USA, 1986; ISBN 978-0-309-07791-0.
6. Benjamin, J.M. The laser cane. Bull. Prosthet. Res. 1974, 443–450.
7. Russell, L. Travel Path Sounder. In Proceedings of the Rotterdam Mobility Research Conference; American
Foundation for the Blind: New York, NY, USA, 1965.
8. Armstrong, J.D. Summary Report of the Research Programme on Electronic Mobility Aids; 1973.
9. Pressey, N. Mowat sensor. Focus 1977, 11, 35–39.
10. Heyes, A.D. The Sonic Pathfinder—A new travel aid for the blind. In High Technology Aids for the Disabled;
Elsevier: 1983; pp. 165–171.
11. Maude, D.R.; Mark, M.U.; Smith, R.W. AFB’s Computerized Travel Aid: Two Years of Research. J. Vis.
Impair. Blind. 1983, 77, 71, 74–75.
12. Collins, C.C. On Mobility Aids for the Blind. In Electronic Spatial Sensing for the Blind; Springer: Dordrecht,
The Netherlands, 1985; pp. 35–64.
13. Collins, C.C. Tactile Television-Mechanical and Electrical Image Projection. IEEE Trans. Man-Mach. Syst.
1970, 11, 65–71.
14. Rantala, J. Spatial Touch in Presenting Information with Mobile Devices; University of Tampere: Tampere, Finland, 2014.
15. BrainPort, Wicab. Available online: https://www.wicab.com/brainport-vision-pro (accessed on 29 July
2019).
16. Grant, P.; Spencer, L.; Arnoldussen, A.; Hogle, R.; Nau, A.; Szlyk, J.; Nussdorf, J.; Fletcher, D.C.; Gordon,
K.; Seiple, W. The Functional Performance of the BrainPort V100 Device in Persons Who Are Profoundly
Blind. J. Vis. Impair. Blind. 2016, 110, 77–89.
17. Kajimoto, H.; Kanno, Y.; Tachi, S. Forehead electro-tactile display for vision substitution. In Proceedings of
the EuroHaptics; Paris, France; 2006.
18. Kajimoto, H.; Suzuki, M.; Kanno, Y. HamsaTouch: Tactile Vision Substitution with Smartphone and
Electro-Tactile Display. In Proceedings of the 32nd Annual ACM Conference on Human Factors in
Computing Systems: Extended Abstracts, Toronto, ON, Canada, 26 April–1 May 2014; pp. 1273–1278.
19. Cassinelli, A.; Reynolds, C.; Ishikawa, M. Augmenting spatial awareness with haptic radar. In Proceedings
of the 10th IEEE International Symposium on Wearable Computers (ISWC 2006); Montreux, Switzerland;
11–14 October 2006; pp. 61–64.
20. Kay, L. An ultrasonic sensing probe as a mobility aid for the blind. Ultrasonics 1964, 2, 53–59.
21. Kay, L. A sonar aid to enhance spatial perception of the blind: Engineering design and evaluation. Radio
Electron. Eng. 1974, 44, 605.
22. Sainarayanan, G.; Nagarajan, R.; Yaacob, S. Fuzzy image processing scheme for autonomous navigation of
human blind. Appl. Soft Comput. J. 2007, 7, 257–264.
23. Ifukube, T.; Sasaki, T.; Peng, C. A blind mobility aid modeled after echolocation of bats. IEEE Trans. Biomed.
Eng. 1991, 38, 461–465.
24. Meijer, P.B.L. An Experimental System for Auditory Image Representations. IEEE Trans. Biomed. Eng. 1992,
39, 112–121.
25. Haigh, A.; Brown, D.J.; Meijer, P.; Proulx, M.J. How well do you see what you hear? The acuity of visual-
to-auditory sensory substitution. Front. Psychol. 2013, 4, doi:10.3389/fpsyg.2013.00330.
26. Ward, J.; Meijer, P. Visual experiences in the blind induced by an auditory sensory substitution device.
Conscious. Cognit. 2010, 19, 492–500.
27. Gonzalez-Mora, J.L.; Rodriguez-Hernandez, A.F.; Burunat, E.; Martin, F.; Castellano, M.A. Seeing the
world by hearing: Virtual Acoustic Space (VAS) a new space perception system for blind people. In
Proceedings of the 2006 2nd International Conference on Information & Communication Technologies,
Damascus, Syria, 24–28 April 2006; Volume 1, pp. 837–842.
28. Hersh, M.A.; Johnson, M.A. Assistive Technology for Visually Impaired and Blind People; 2008; ISBN
9781846288661.
29. Ultracane. Available online: https://www.ultracane.com/ (accessed on 29 July 2019).
30. Tachi, S.; Komoriya, K. Guide dog robot. In Autonomous Mobile Robots: Control, Planning, and Architecture;
1985; pp. 360–367.
31. Borenstein, J. The Guidecane—A Computerized Travel Aid for the Active Guidance of Blind Pedestrians. In
Proceedings of the 1997 International Conference on Robotics and Automation (ICRA 1997); Albuquerque,
NM, USA, 1997; Volume 2, pp. 1283–1288.
32. Shoval, S.; Borenstein, J.; Koren, Y. Mobile robot obstacle avoidance in a computerized travel aid for the
blind. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego,
CA, USA, 8–13 May 1994; pp. 2023–2028.
33. Loomis, J.M. Digital Map and Navigation System for the Visually Impaired; Unpublished work; Department of
Psychology, University of California-Santa Barbara; 1985.
34. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Marston, J.R. Assisting wayfinding in visually impaired
travelers. In Applied Spatial Cognition: From Research to Cognitive Technology; Lawrence Erlbaum Associates,
Inc; Mahwah, NJ, USA; 2007; pp. 179–203.
35. Crandall, W.; Bentzen, B.L.; Myers, L.; Brabyn, J. New orientation and accessibility option for persons with
visual impairment: Transportation applications for remote infrared audible signage. Clin. Exp. Optom. 2001,
84, 120–131.
36. Loomis, J.M.; Klatzky, R.L.; Golledge, R.G. Auditory Distance Perception in Real, Virtual, and Mixed
Environments. In Mixed Reality; Springer: Berlin/Heidelberg, Germany, 1999; pp. 201–214.
37. PERNASVIP—Final Report; 2011; Available online: pernasvip.di.uoa.gr/DELIVERABLES/D14.doc
(accessed on 1 August 2019).
38. Ran, L.; Helal, S.; Moore, S. Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service.
In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications,
Orlando, FL, USA, 17–17 March 2004.
39. Harada, T.; Kaneko, Y.; Hirahara, Y.; Yanashima, K.; Magatani, K. Development of the navigation system
for visually impaired. In Proceedings of the 26th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society; San Francisco, CA, USA; 1–5 September 2004; pp. 4900–4903.
40. Cheok, A.D.; Li, Y. Ubiquitous interaction with positioning and navigation using a novel light sensor-based
information transmission system. Pers. Ubiquitous Comput. 2008, 12, 445–458.
41. Bouet, M.; Dos Santos, A.L. RFID tags: Positioning principles and localization techniques. In Proceedings
of the 1st IFIP Wireless Days, Dubai, UAE, 24–27 November 2008; pp. 1–5.
42. Kulyukin, V.A.; Nicholson, J. RFID in Robot-Assisted Indoor Navigation for the Visually Impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 2, pp. 1979–1984.
43. Kulyukin, V.; Gharpure, C.; Nicholson, J. RoboCart: Toward robot-assisted navigation of grocery stores by
the visually impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 2845–2850 .
44. Ganz, A.; Schafer, J.; Gandhi, S.; Puleo, E.; Wilson, C.; Robertson, M. PERCEPT Indoor Navigation System
for the Blind and Visually Impaired: Architecture and Experimentation. Int. J. Telemed. Appl. 2012,
doi:10.1155/2012/894869.
45. Lanigan, P.; Paulos, A.; Williams, A.; Rossi, D.; Narasimhan, P. Trinetra: Assistive Technologies for Grocery
Shopping for the Blind. In Proceedings of the 2006 10th IEEE International Symposium on Wearable
Computers, Montreux, Switzerland, 11–14 October 2006; pp. 147–148.
46. Hub, A.; Diepstraten, J.; Ertl, T. Design and development of an indoor navigation and object identification system for the blind. In Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility, Atlanta, GA, USA, 18–20 October 2004; pp. 147–152.
47. Hub, A.; Hartter, T.; Ertl, T. Interactive tracking of movable objects for the blind on the basis of environment models and perception-oriented object recognition methods. In Proceedings of the Eighth International ACM SIGACCESS Conference on Computers and Accessibility, Portland, OR, USA, 23–25 October 2006; pp. 111–118.
48. Fernandes, H.; Costa, P.; Filipe, V.; Hadjileontiadis, L.; Barroso, J. Stereo vision in blind navigation
assistance. In Proceedings of the World Automation Congress, Kobe, Japan, 19–23 September 2010; pp. 1–
6.
49. Fernandes, H.; Costa, P.; Paredes, H.; Filipe, V.; Barroso, J. Integrating Computer Vision Object Recognition
with Location Based Services for the Blind; Springer; Switzerland; 2014; pp. 493–500.
50. Martinez-Sala, A.S.; Losilla, F.; Sánchez-Aarnoutse, J.C.; García-Haro, J. Design, implementation and
evaluation of an indoor navigation system for visually impaired people. Sensors 2015, 15, 32168–32187.
51. Riehle, T.H.; Lichter, P.; Giudice, N.A. An Indoor Navigation System to Support the Visually Impaired. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 2008; pp. 4435–4438.
52. Legge, G.E.; Beckmann, P.J.; Tjan, B.S.; Havey, G.; Kramer, K.; Rolkosky, D.; Gage, R.; Chen, M.;
Puchakayala, S.; Rangarajan, A. Indoor Navigation by People with Visual Impairment Using a Digital Sign
System. PLoS ONE 2013, 8, 14–15.
53. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K. NavCog: A Navigational Cognitive Assistant for the Blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '16), Florence, Italy, September 2016; pp. 90–99.
54. Murata, M.; Ahmetovic, D.; Sato, D.; Takagi, H.; Kitani, K.M.; Asakawa, C. Smartphone-Based Indoor Localization for Blind Navigation across Building Complexes. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 2018; pp. 1–10.
55. Giudice, N.A.; Whalen, W.E.; Riehle, T.H.; Anderson, S.M.; Doore, S.A. Evaluation of an Accessible, Free Indoor Navigation System by Users Who Are Blind in the Mall of America. J. Vis. Impair. Blind. 2019, 113, 140–155.
56. Dakopoulos, D. Tyflos: A Wearable Navigation Prototype for Blind & Visually Impaired; Design, Modelling and Experimental Results; Wright State University and OhioLINK: Dayton, OH, USA, 2009.
57. Meers, S.; Ward, K. A vision system for providing 3D perception of the environment via transcutaneous
electro-neural stimulation. In Proceedings of the Eighth International Conference on Information
Visualisation, London, UK, 16–16 July 2004; pp. 546-552.
58. Meers, S.; Ward, K. A Substitute Vision System for Providing 3D Perception and GPS Navigation via
Electro-Tactile Stimulation. In Proceedings of the International Conference on Sensing Technology,
Palmerston North, New Zealand, November 2005; pp. 551-556.
59. Zöllner, M.; Huber, S.; Jetter, H.-C.; Reiterer, H. NAVI—A Proof-of-Concept of a Mobile Navigational Aid for
Visually Impaired Based on the Microsoft Kinect; Human-Computer Interaction – INTERACT; Springer; Berlin,
Heidelberg; 2011; pp. 584–587.
60. Zhang, H.; Ye, C. An Indoor Wayfinding System Based on Geometric Features Aided Graph SLAM for the
Visually Impaired. IEEE Trans. Neural Syst. Rehabilit. Eng. 2017, 25, 1592–1604.
61. Li, B.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor
Assistive Navigation Aid for Blind People. IEEE Trans. Mob. Comput. 2019, 18, 702–714.
62. Jafri, R.; Campos, R.L.; Ali, S.A.; Arabnia, H.R. Visual and Infrared Sensor Data-Based Obstacle Detection
for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine.
IEEE Access 2017, 6, 443–454.
63. Neto, L.B.; Grijalva, F.; Maike, V.R.M.L.; Martini, L.C.; Florencio, D.; Baranauskas, M.C.C.; Rocha, A.;
Goldenstein, S. A Kinect-Based Wearable Face Recognition System to Aid Visually Impaired Users. IEEE
Trans. Hum.-Mach. Syst. 2017, 47, 52–64.
64. Hicks, S.L.; Wilson, I.; Muhammed, L.; Worsfold, J.; Downes, S.M.; Kennard, C. A Depth-Based Head-
Mounted Visual Display to Aid Navigation in Partially Sighted Individuals. PLoS ONE 2013, 8, e67695.
65. VA-ST Smart Specs—MIT Technology Review. Available online: https://www.technologyreview.com/s/
538491/augmented-reality-glasses-could-help-legally-blind-navigate/ (accessed on 29 July 2019).
66. Cassinelli, A.; Sampaio, E.; Joffily, S.B.; Lima, H.R.S.; Gusmo, B.P.G.R. Do blind people move more
confidently with the Tactile Radar? Technol. Disabil. 2014, 26, 161–170.
67. Zerroug, A.; Cassinelli, A.; Ishikawa, M. Virtual Haptic Radar. In Proceedings of the ACM SIGGRAPH
ASIA 2009 Sketches, Yokohama, Japan, 16–19 December 2009.
68. Fundación Vodafone España. Acceso y uso de las TIC por las personas con discapacidad [Access and Use of ICT by People with Disabilities]; Fundación Vodafone España: Madrid, Spain, 2013. Available online: http://www.fundacionvodafone.es/publicacion/acceso-y-uso-de-las-tic-por-las-personas-con-discapacidad (accessed on 1 August 2019).
69. Apostolopoulos, I.; Fallah, N.; Folmer, E.; Bekris, K.E. Integrated online localization and navigation for
people with visual impairments using smart phones. In Proceedings of the International Conference on
Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 1322–1329.
70. BrainVisionRehab. Available online: https://www.brainvisionrehab.com/ (accessed on 29 July 2019).
71. EyeMusic. Available online: https://play.google.com/store/apps/details?id=com.quickode.eyemusic&hl=en
(accessed on 29 July 2019).
72. The vOICe. Available online: https://www.seeingwithsound.com/ (accessed on 29 July 2019).
73. Microsoft Seeing AI. Available online: https://www.microsoft.com/en-us/ai/seeing-ai (accessed on 29 July
2019).
74. TapTapSee—Smartphone application. Available online: http://taptapseeapp.com/ (accessed on 29 July
2019).
75. Moovit. Available online: https://company.moovit.com/ (accessed on 29 July 2019).
76. BlindSquare. Available online: http://www.blindsquare.com/about/ (accessed on 29 July 2019).
77. Lazzus. Available online: http://www.lazzus.com/en/ (accessed on 29 July 2019).
78. Seeing AI GPS. Available online: https://www.senderogroup.com/ (accessed on 29 July 2019).
79. Sunu Band. Available online: https://www.sunu.com/en/index.html (accessed on 29 July 2019).
80. Orcam MyEye. Available online: https://www.orcam.com/en/myeye2/ (accessed on 29 July 2019).
81. Project Blaid. Available online: https://www.toyota.co.uk/world-of-toyota/stories-news-events/toyota-
project-blaid (accessed on 29 July 2019).
82. Schinazi, V. Representing Space: The Development, Content and Accuracy of Mental Representations by
the Blind and Visually Impaired. Ph.D. Thesis, University College, London, UK, 2008.
83. Ungar, S. Cognitive Mapping without Visual Experience. In Cognitive Mapping: Past Present and Future;
Routledge: London, UK, 2000; pp. 221–248.
84. Loomis, J.M.; Klatzky, R.L.; Giudice, N.A. Sensory substitution of vision: Importance of perceptual and
cognitive processing. In Assistive Technology for Blindness and Low Vision; CRC Press: Boca Raton, FL, USA,
2012; pp. 162–191.
85. Spence, C. The skin as a medium for sensory substitution. Multisens. Res. 2014, 27, 293–312.
86. Maidenbaum, S.; Abboud, S.; Amedi, A. Sensory substitution: Closing the gap between basic research and
widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 2014, 41, 3–15.
87. Proulx, M.J.; Brown, D.J.; Pasqualotto, A.; Meijer, P. Multisensory perceptual learning and sensory
substitution. Neurosci. Biobehav. Rev. 2014, 41, 16–25.
88. Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition. In Handbook of Behavioral and Cognitive Geography; Edward Elgar Publishing: Cheltenham, UK; Northampton, MA, USA, 2018; pp. 260–288.
89. Giudice, N.A.; Legge, G.E. Blind Navigation and the Role of Technology. In Engineering Handbook of Smart
Technology for Aging, Disability, and Independence; John Wiley & Sons; Hoboken, NJ, USA; 2008; pp. 479–500.
90. Wayfindr. Available online: https://www.wayfindr.net/ (accessed on 29 July 2019).
91. Kobayashi, S.; Koshizuka, N.; Sakamura, K.; Bessho, M.; Kim, J.-E. Navigating Visually Impaired Travelers in
a Large Train Station Using Smartphone and Bluetooth Low Energy; In Proceedings of the 31st Annual ACM
Symposium on Applied Computing; Pisa, Italy; 2016; pp. 604–611.
92. Cheraghi, S.A.; Namboodiri, V.; Walker, L. GuideBeacon: Beacon-based indoor wayfinding for the blind,
visually impaired, and disoriented. In Proceedings of the IEEE International Conference on Pervasive
Computing and Communications Workshops, Kona, HI, USA, 13–17 March 2017; pp. 121–130.
93. NaviLens—Smartphone Application. Available online: https://www.navilens.com/ (accessed on 29 July
2019).
94. NavCog. Available online: http://www.cs.cmu.edu/~NavCog/navcog.html (accessed on 29 July 2019).
95. NGMN Alliance. 5G White Paper; Next Generation Mobile Networks (NGMN) Alliance: Frankfurt, Germany, 2015; pp. 1–125.
96. Giudice, N.A.; Klatzky, R.L.; Bennett, C.R.; Loomis, J.M. Perception of 3-D location based on vision, touch,
and extended touch. Exp. Brain Res. 2013, 224, 141–153.
97. Lin, B.S.; Lee, C.C.; Chiang, P.Y. Simple smartphone-based guiding system for visually impaired people.
Sensors 2017, 17, 1371.
98. Martino, G.; Marks, L.E. Synesthesia: Strong and Weak. Curr. Dir. Psychol. Sci. 2001, 10, 61–65.
99. Delazio, A.; Israr, A.; Klatzky, R.L. Cross-Modal Correspondence between Vibrations and Colors. In Proceedings of the 2017 IEEE World Haptics Conference (WHC), Munich, Germany, 2017; pp. 219–224.
100. Briscoe, R. Bodily Action and Distal Attribution in Sensory Substitution. In Sensory Substitution and Augmentation; Macpherson, F., Ed.; Proceedings of the British Academy; London, UK, 2015; pp. 1–13.
101. Golledge, R.G.; Marston, J.R.; Loomis, J.M.; Klatzky, R.L. Stated preferences for components of a personal
guidance system for nonvisual navigation. J. Vis. Impair. Blind. 2004, 98, 135–147.
102. D’Alonzo, M.; Dosen, S.; Cipriani, C.; Farina, D. HyVE-hybrid vibro-electrotactile stimulation-is an efficient
approach to multi-channel sensory feedback. IEEE Trans. Haptics 2014, 7, 181–190.
103. Yoshimoto, S.; Kuroda, Y.; Imura, M.; Oshiro, O. Material roughness modulation via electrotactile
augmentation. IEEE Trans. Haptics 2015, 8, 199–208.
104. Kajimoto, H. Electrotactile display with real-time impedance feedback using pulse width modulation. IEEE
Trans. Haptics 2012, 5, 184–188.
105. Kitamura, N.; Miki, N. Micro-Needle-Based Electro Tactile Display to Present Various Tactile Sensation. In Proceedings of the 28th IEEE International Conference on Micro Electro Mechanical Systems (MEMS), Estoril, Portugal, 2015; pp. 649–650.
106. Tezuka, M.; Kitamura, N.; Tanaka, K.; Miki, N. Presentation of Various Tactile Sensations Using Micro-Needle Electrotactile Display. PLoS ONE 2016, 11, doi:10.1371/journal.pone.0148410.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).