
Learning to use electronic travel aids for visually impaired in virtual reality

Authors:
Fabiana Sofia Ricci(a), Alain Boldini(b), John-Ross Rizzo(a,b,c,d), and Maurizio Porfiri(a,b,e)
(a) Department of Biomedical Engineering, New York University Tandon School of Engineering, Six MetroTech Center, Brooklyn, NY 11201, USA
(b) Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Six MetroTech Center, Brooklyn, NY 11201, USA
(c) Department of Rehabilitation Medicine, New York University Langone Health, 240 East 38th Street, NY 10016, USA
(d) Department of Neurology, New York University Langone Health, 240 East 38th Street, NY 10016, USA
(e) Center for Urban Science and Progress, New York University Tandon School of Engineering, 370 Jay Street, Brooklyn, NY 11201, USA
ABSTRACT
Vision loss ranks among the top causes of disability in the United States. Visual impairment (VI) remains an
urgent public health priority, causing loss of independence in almost every aspect of daily life. Mobility of the
visually impaired requires skill, effort and training. In this vein, orientation and mobility training (O&M) is
currently offered to prepare the visually impaired for independent travel in any familiar or unfamiliar environment,
as well as to use primary and secondary mobility aids, such as white canes and electronic travel aids (ETAs),
respectively. Even though O&M training is an established practice, it is not free of risks, as trainees may be
exposed to potential harm. To address this issue, we developed a virtual reality (VR) environment for training with an ETA previously developed by our team (which we presented at this meeting last year). The ETA comprises a belt, serving as a haptic feedback wearable device, that senses the surroundings through computer vision and sends vibro-tactile feedback about the position of nearby obstacles to the user’s abdomen. The belt is
interfaced with VR, enabling trainees to immerse in a computer-simulated, safe, three-dimensional environment.
Our integrated system brings the visually impaired into virtual scenarios where they can perform specific tasks, either simply to measure their performance or to train certain skills for rehabilitation purposes. The ultimate goal
of our effort is to establish a training and evaluation framework that could mitigate the immobility consequences
of visual impairment, improve quality of life, enhance social participation, and reduce societal costs, while creating an engaging and appealing experience for the visually impaired.
Keywords: Assistive technology, electronic travel aid, mobility training, virtual reality, visual impairment
1. INTRODUCTION
“Visual impairment” (VI) is a condition of reduced visual acuity and peripheral field deficit that cannot be remedied by corrective lenses or surgery [1]. Globally, an estimated 43.3 million people are blind, 295 million people have moderate to severe vision impairment, and 258 million have mild vision impairment [2]. By 2050, these estimates are predicted to increase due to rising life expectancy across our global population, along with poor access to health care in low- and middle-income countries [2].
VI is a considerable public health concern that negatively affects quality of life and restricts equitable access to and achievement in education and the workplace [3,4]. VI is not only a sensory disability, but also a significant deterrent to mobility; difficulties with mobility and access to public spaces are the most significant impediments
Further author information: (Send correspondence to M.P.)
M.P.: E-mail: mporfiri@nyu.edu, Telephone: +1 646 997 3681
Nano-, Bio-, Info-Tech Sensors, and Wearable Systems 2022, edited by Jaehwan Kim,
Kyo D. Song, Ilkwon Oh, Maurizio Porfiri, Proc. of SPIE Vol. 12045, 1204504
© 2022 SPIE · 0277-786X · doi: 10.1117/12.2612097
Proc. of SPIE Vol. 12045 1204504-1
among people with VI [5]. Previous studies have examined the mobility consequences of VI, reporting that people with VI have slower walking speeds, more falls, and more mobility difficulties than individuals without VI [6–9]. In addition, many aspects of daily life are affected by struggles with independent travel, including social inclusion, employment prospects, and overall quality of life [9,10]. Social and physical exclusion, in turn, reinforce experiences of disablement in people with VI. Therefore, it is critical to examine and address the issues that inhibit the mobility of people with VI.
Today, technological advancements are reshaping the everyday mobility of people with VI and their interactions with different types of environments. In particular, electronic travel aids (ETAs) are technologically advanced mobility solutions developed to overcome some of the limitations of traditional orientation and mobility (O&M) aids, such as white canes [11]. While traditional aids are an excellent solution for near-ground obstacles, hazards below ground level or above knee level often remain undetected [12]. In contrast, ETAs are equipped with measurement systems that can provide basic information about the presence of objects or convey complex spatial information, such as the shape of individual objects and the spatial relationships between them [13]. ETAs use different feedback approaches, such as vibration or auditory output, thus enabling people with VI to travel safely and securely [14]. Although further research is needed to identify optimal solutions for a wide range of users, high-performing ETAs can help people with VI become more independent, experience fewer accidents, and improve their mobility, consequently enhancing their quality of life.
Current ETAs face some major limitations, including the need for sizeable testing campaigns to objectively evaluate their performance, high production costs, and the low availability of training programs with mobility specialists [15,16]. Widespread use of better and more consistent measures of mobility performance would streamline and shorten experimental sessions, which can prove frustrating, discouraging, and even dangerous for people with VI [17,18]. In fact, people with VI may be exposed to major hazards, such as falls and injuries, especially when testing initial prototypes that are yet to be optimized. The development and dissemination of standardized assessment measures and methods could, in turn, improve training strategies and encourage the use of ETAs [18].
Recently, augmented and virtual reality (AR/VR) applications have been used to support the development, testing, and training of ETAs, as well as to supplement O&M training [19]. By exploring virtual environments, people with VI can practice their skills while receiving multisensory feedback similar to that of real-life navigation. VR environments can supplement traditional O&M training by allowing people with VI to build cognitive maps of unfamiliar routes. Additionally, simulating VI in VR allows different and even rare forms of VI, at different stages of severity, to be modeled in healthy subjects. Virtual environments integrated within O&M training programs with ETAs can support independent mobility by enabling users to “walk” a virtual route and familiarize themselves with both the surroundings and the device [20]. Moreover, virtual simulation can be effective in learning routes and in reducing the frustration, fear, and risk of navigating public paths [21].
In this work, we propose a new VR-based evaluation platform for ETAs. As a representative ETA, we consider a haptic feedback wearable device previously designed by our group [22]. Our system leverages ten piezo-based actuators, mounted on a belt, to provide the user with vibrotactile feedback when one or more obstacles are in close proximity. As the actuators are distributed on a 2×5 grid, our ETA allows a complex yet intuitive representation of the environment with a registered world grid (each frame is divided into an analogous 2×5 grid), whereby the presence of an obstacle in one direction is mapped to a vibration of the actuator in the same direction, within an egocentric reference frame. We integrate this haptic feedback device with a VR platform, built upon our previous efforts in Ref. 23. The platform allows users to perform an obstacle avoidance task while experiencing a simulated VI. We study how our ETA, interfaced with the VR platform, enhances the performance of a user with simulated VI in an obstacle avoidance task.
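The grid mapping described above can be sketched as follows; this is a minimal illustration with assumed normalized image coordinates, and the function names are ours rather than taken from the actual implementation:

```python
# Hypothetical sketch of the 2x5 egocentric mapping: each camera frame is
# divided into 2 rows x 5 columns, and an obstacle detected in a cell
# triggers the actuator at the same position on the belt.
ROWS, COLS = 2, 5

def cell_for_direction(x_norm, y_norm):
    """Map a normalized image coordinate (0..1, 0..1) to a grid cell."""
    row = min(int(y_norm * ROWS), ROWS - 1)
    col = min(int(x_norm * COLS), COLS - 1)
    return row, col

def actuator_index(row, col):
    """Flatten a (row, col) cell to one of the ten actuator indices 0..9."""
    return row * COLS + col
```

Under this scheme, an obstacle appearing in the upper-right of the frame maps to the actuator at the upper-right of the belt, preserving the egocentric correspondence.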
2. MATERIALS AND METHODS
2.1 Virtual reality environment
The VR platform was built using Unity, a cross-platform engine developed by Unity Technologies (San Francisco,
California, United States), for the VR headset Oculus Rift CV1 and Touch Controllers.
In VR, we simulated age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma, since they represent the fastest-growing and leading causes of VI among people of working age (Fig. 1). The initial symptoms of AMD often consist of distorted vision and visual loss in the center of the visual field, which progresses slowly toward a complete loss of central vision [24]. Patients with AMD often have increasing difficulty reading and recognizing objects and faces; spatial orientation, however, is preserved through the intact functioning of peripheral vision [24]. DR damages retinal blood vessels, resulting in macular edema [25]. The edema causes patients to experience blurred vision, floaters, distortions, and, in advanced stages, loss of visual acuity [25]. Glaucoma is a “silent” blinding disease caused by an increase in intraocular pressure [26]. Glaucoma patients with moderate-to-advanced visual field loss report symptoms such as tunnel vision and blurriness, need more light, and describe their blind spots as blurred, missing, or gray [26]. AMD symptoms were simulated by means of a culling mask with gray spots obscuring the center of the visual field, a Gaussian blur shader, and a distortion shader. DR symptoms were recreated by means of a culling mask with dark spots dispersed throughout the visual field and a Gaussian blur shader. The symptoms of glaucoma were obtained by means of a culling mask that compromises the peripheral visual field and a Gaussian blur shader.
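As a rough analog of the culling-mask approach (our illustration only; the Unity shaders used in the paper, including the Gaussian blur and distortion passes, are not reproduced here), the central scotoma of the AMD simulation can be sketched as:

```python
# Illustrative central occlusion, loosely mimicking the AMD culling mask:
# pixels within a central disk of a grayscale frame are dimmed toward gray.
def central_scotoma(frame, radius_frac=0.25, dim=0.2):
    """frame: list of rows of gray levels; returns a new, masked frame."""
    h, w = len(frame), len(frame[0])
    cy, cx = h / 2, w / 2
    r2 = (radius_frac * min(h, w)) ** 2
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if (y - cy) ** 2 + (x - cx) ** 2 <= r2:
                out[y][x] *= dim
    return out
```

The DR and glaucoma simulations would differ only in the shape of the mask: scattered dark spots for DR, and an inverted (peripheral) occlusion for glaucoma.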
To detect obstacles in the virtual environment, we utilized the Raycast function, which projects groups of invisible rays from the user into the scene. Specifically, two rays detect obstacles to the right and left of the player, while four groups of eight rays detect obstacles in front of the player. All rays were projected so as to divide the visual field into a grid analogous to the one formed by the actuators on the ETA (2×5). The actuator corresponding to the right and left rays, and to each group of frontal rays, vibrates when at least one of its rays hits an obstacle in the virtual environment. The range of action of the ETA is determined by the length of the rays, which can be easily increased or decreased through the Raycast function in VR.
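The grouping logic can be summarized as follows (a sketch under assumed data structures; the actual Unity scripts are not shown in the paper):

```python
# An actuator fires when at least one ray in its group hits an obstacle
# within the ray length, which sets the range of action of the ETA.
def actuators_to_fire(distances_per_group, ray_length):
    """distances_per_group: actuator index -> distances reported by its
    rays (None when a ray hit nothing within range)."""
    return {
        idx
        for idx, dists in distances_per_group.items()
        if any(d is not None and d <= ray_length for d in dists)
    }
```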
Figure 1. Examples of the simulations of the three visual impairments in the VR platform: (a) severe age-related macular
degeneration, (b) severe diabetic retinopathy, and (c) severe glaucoma.
2.2 ETA description
As the ETA for assisting people with VI during navigation and obstacle avoidance, we considered our previously developed system, which consists of a haptic feedback wearable device interfaced with a computer vision platform [27,28]. This platform comprises an algorithm that detects obstacles by means of a stereo camera. The environmental information collected by the computer vision platform is transmitted to the user through the haptic feedback wearable device. Specifically, the wearable device consists of a belt with ten integrated piezo-based discrete actuators, which provide vibrotactile stimulation on the user’s abdomen. The actuators notify the user of the proximity of an obstacle and its position, defined with respect to an egocentric reference frame [27,28].
2.3 ETA-VR interface
In our experiment, the haptic feedback device was directly interfaced with the VR environment through an Arduino Mega 2560, which controlled custom-printed astable multivibrator circuits using op-amps to drive the high-voltage amplifiers [22]. These amplifiers, in turn, controlled each of the ten actuators on the belt independently. A potentiometer regulated the frequency of vibration in the multivibrator circuit, based on the distance of the object from the user in VR.
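The distance-dependent vibration can be sketched as an inverse-linear mapping (an assumption for illustration; the paper does not specify the exact law implemented by the potentiometer-controlled circuit, and all parameter values below are ours):

```python
# Hypothetical distance-to-frequency law: vibration speeds up as the
# obstacle gets closer, with the distance clamped to a working range.
def vibration_hz(distance_m, d_min=0.5, d_max=3.0, f_min=2.0, f_max=20.0):
    d = min(max(distance_m, d_min), d_max)
    t = (d_max - d) / (d_max - d_min)  # 0 at far range, 1 at close range
    return f_min + t * (f_max - f_min)
```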
3. EXPERIMENT
We devised an experiment to evaluate whether our ETA can relay information about the surrounding environment to its users. More specifically, we examined the feasibility of our ETA for improving the mobility performance of healthy subjects performing an obstacle avoidance task in a virtual environment while experiencing a simulated VI. In this preliminary study, we tested our ETA with only one person (the first author).
The experiment comprises two conditions, each with three trials. The fixed factor distinguishing the two
conditions is whether the participant performed the task with or without the aid of the ETA. Each trial consisted
of performing the obstacle avoidance task while experiencing one of the three simulated VIs.
During the obstacle avoidance task, the participant was asked to cross a rectangular park from one side to the other in the least time possible, avoiding collisions with obstacles in the path, such as benches, fences, and trash bins (Fig. 2). While performing the task, the participant used the Oculus Touch controllers to navigate the environment. In the condition with the ETA, the belt provided vibrotactile feedback on the abdomen to help the user avoid obstacles before hitting them, thus anticipating potential collisions. In both conditions, the right controller vibrated only after a collision had occurred, to mimic sensory feedback from touch and assist the participant in correcting their path.
Figure 2. Example of the obstacle avoidance task implemented in the virtual environment.
For each trial, we acquired two metrics through the VR system: the time to complete the obstacle avoidance task and the number of collisions with obstacles. By analyzing these two metrics, we evaluated the feasibility of the ETA for conveying environmental information to the participant, thus improving mobility performance for each of the tested simulated VIs.
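With three trials per condition (one per simulated VI), each metric can be aggregated across trials; a minimal sketch, shown here on the with-belt completion times reported in Tab. 1:

```python
import statistics

def summarize(trials):
    """Return (mean, sample standard deviation) of a metric across trials."""
    return statistics.mean(trials), statistics.stdev(trials)

# With-belt completion times (seconds) for the AMD, DR, and glaucoma trials.
mean_t, sd_t = summarize([91.0, 120.0, 104.0])
```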
4. RESULTS
The results of the preliminary study are presented in Tab. 1, in the form of the time to complete the task and the number of collisions recorded in each trial, for each of the two conditions tested.
Condition                     Time to completion (s)    Number of collisions
AMD severe, w/ belt                    91                        4
AMD severe, w/o belt                  187                       43
DR severe, w/ belt                    120                        1
DR severe, w/o belt                   165                       40
Glaucoma severe, w/ belt              104                        3
Glaucoma severe, w/o belt             122                       27
Table 1. Results of the preliminary experiments for the two conditions tested.
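As a quick sanity check, the relative improvements implied by Tab. 1 can be computed directly from its entries (no new data; times in seconds, collisions as counts):

```python
# Tab. 1 values: condition -> (time to completion, number of collisions).
table = {
    "AMD":      {"with": (91, 4),  "without": (187, 43)},
    "DR":       {"with": (120, 1), "without": (165, 40)},
    "Glaucoma": {"with": (104, 3), "without": (122, 27)},
}

def time_reduction_pct(vi):
    """Percent reduction in completion time when wearing the belt."""
    t_with, _ = table[vi]["with"]
    t_without, _ = table[vi]["without"]
    return 100.0 * (t_without - t_with) / t_without
```

Under this reading, the belt shortens completion time by roughly 51% for AMD, 27% for DR, and 15% for glaucoma.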
The preliminary results show that our ETA may have a strong positive impact on both metrics considered. The experimental subject had no prior experience with the wearable device. For each of the three simulated VIs, the task completion time decreases when she wears the ETA. This outcome supports the ability of our ETA to effectively relay environmental information, which explains the experimental subject’s ability to respond steadily to the haptic stimulus without needing to decrease her navigation speed.
Results on collisions reveal that the participant collided with fewer obstacles when performing the task with the aid of the ETA. Thus, our ETA can be considered a feasible platform for delivering feedback that may aid in the detection and avoidance of obstacles. The considerable difference in the number of collisions can be attributed to the ETA’s reliability and promptness in conveying information about the immediate surroundings to the user.
5. CONCLUSIONS
In this paper, we highlighted the potential of virtual reality (VR) for simulating visual impairment and testing electronic travel aids (ETAs) for the visually impaired. As an initial investigation, we implemented the three most widespread forms of visual impairment (that is, age-related macular degeneration, diabetic retinopathy, and glaucoma) in a VR platform and employed them to evaluate the effectiveness of an ETA in conveying environmental information to an end user. The interface between the VR platform and the ETA was used to perform tests in a highly controllable environment. The long-term goal of the project is to establish new paradigms for training and performance assessment of ETAs based on AR/VR technologies.
The preliminary analyses presented in this paper require a follow-up study that addresses some of their critical weaknesses, including the limited number of participants and the lack of a statistically grounded approach. Finally, we foresee further developments and improvements of the system, such that it could be employed for education within outreach activities with the general public and for training visually impaired subjects on the use and best practices of ETAs.
ACKNOWLEDGMENTS
This research was supported by the National Science Foundation under Grant Nos. ECCS-1928614, CNS-
1952180, and CBET-2037878, by the National Eye Institute and Fogarty International Center under Grant No.
R21EY033689, as well as by the U.S. Department of Defense under Grant No. VR200130. New York University
(NYU) and John-Ross Rizzo (JRR) have financial interests in related intellectual property. NYU owns a patent
licensed to Tactile Navigation Tools. NYU, JRR are equity holders and advisors of said company.
REFERENCES
[1] Naipal, S. and Rampersad, N., “A review of visual impairment,” African Vision and Eye Health 77 (01
2018).
[2] Bourne, R., Steinmetz, J. D., Flaxman, S., Briant, P. S., Taylor, H. R., Resnikoff, S., Casson, R. J., Abdoli,
A., Abu-Gharbieh, E., Afshin, A., and et al., “Trends in prevalence of blindness and distance and near
vision impairment over 30 years: an analysis for the global burden of disease study,” The Lancet Global
Health 9(2), e130–e143 (2021).
[3] Park, S. J., Ahn, S., and Park, K. H., “Burden of Visual Impairment and Chronic Diseases,” JAMA
Ophthalmology 134, 778–784 (07 2016).
[4] Haegele, J. A. and Zhu, X., “Physical activity, self-efficacy and health-related quality of life among adults
with visual impairments,” Disability and Rehabilitation 43(4), 530–536 (2021). PMID: 31230474.
[5] Wong, S., “Traveling with blindness: A qualitative space-time approach to understanding visual impairment
and urban mobility,” Health & Place 49, 85–92 (2018).
[6] Broman, A. T., West, S. K., Munoz, B., Bandeen-Roche, K., Rubin, G. S., and Turano, K. A., “Divided
visual attention as a predictor of bumping while walking: the salisbury eye evaluation,” Investigative ophthalmology & visual science 45(9), 2955–2960 (2004).
[7] Patel, I., Turano, K. A., Broman, A. T., Bandeen-Roche, K., Munoz, B., and West, S. K., “Measures
of visual function and percentage of preferred walking speed in older adults: the salisbury eye evaluation
project,” Investigative ophthalmology & visual science 47(1), 65–71 (2006).
[8] Freeman, E. E., Munoz, B., Rubin, G., and West, S. K., “Visual field loss increases the risk of falls in
older adults: the salisbury eye evaluation,” Investigative ophthalmology & visual science 48(10), 4445–4450
(2007).
[9] Miyata, K., Yoshikawa, T., Harano, A., Ueda, T., and Ogata, N., “Effects of visual impairment on mobility
functions in elderly: Results of fujiwara-kyo eye study,” Plos one 16(1), e0244997 (2021).
[10] Lubin, A. and Deka, D., “Role of public transportation as job access mode: Lessons from survey of people
with disabilities in new jersey,” Transportation research record 2277(1), 90–97 (2012).
[11] Farcy, R., Leroux, R., Jucha, A., Damaschini, R., Grégoire, C., and Zogaghi, A., “Electronic travel aids and
electronic orientation aids for blind people: technical, rehabilitation and everyday life points of view,” in
[Proceedings of the Conference and Workshop on Assistive Technology for Vision and Hearing Impairment],
Citeseer (2006).
[12] Romlay, M. R. M., Toha, S. F., Ibrahim, A. M., and Venkat, I., “Methodologies and evaluation of electronic
travel aids for the visually impaired people: a review,” Bulletin of Electrical Engineering and Informatics 10(3), 1747–1758 (2021).
[13] Ball, E. M., [Electronic Travel Aids: An Assessment], 289–321, Springer London, London (2008).
[14] Röijezon, U., Prellwitz, M., Ahlmark, D. I., van Deventer, J., Nikolakopoulos, G., and Hyyppä, K., “A haptic navigation aid for individuals with visual impairments: Indoor and outdoor feasibility evaluations of the LaserNavigator,” Journal of Visual Impairment & Blindness 113(2), 194–201 (2019).
[15] Petrie, H., Johnson, V., Strothotte, T., Raab, A., Fritz, S., and Michel, R., “Mobic: Designing a travel aid
for blind and elderly people,” The Journal of Navigation 49(1), 45–52 (1996).
[16] Gitlin, L., Mount, J., Lucas, W., Weirich, L., and Gramberg, L., “The physical costs and psychosocial
benefits of travel aids for persons who are visually impaired or blind,” Journal of Visual Impairment &
Blindness 91(4), 347–359 (1997).
[17] Mount, J., Gitlin, L. N., and Howard, P. D., “Musculoskeletal consequences of travel aid use among visually
impaired adults: directions for future research and training,” Technology and Disability 6(3), 159–167 (1997).
[18] Council, N. R. et al., “Summary and highlights,” in [Electronic Travel AIDS: New Directions for Research],
National Academies Press (US) (1986).
[19] Zhang, L., Wu, K., Yang, B., Tang, H., and Zhu, Z., “Exploring virtual environments by visually impaired
using a mixed reality cane without visual feedback,” in [2020 IEEE International Symposium on Mixed and
Augmented Reality Adjunct (ISMAR-Adjunct)], 51–56 (2020).
[20] Basch, M.-E. and Bogdanov, I., “Work directions and new results in electronic travel aids for blind and
visually impaired people,” (2011).
[21] Pissaloux, E., Velazquez, R., and Maingreaud, F., “On 3d world perception: towards a definition of a
cognitive map based electronic travel aid,” in [The 26th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society], 1, 107–109, IEEE (2004).
[22] Boldini, A., Garcia, A. L., Sorrentino, M., Beheshti, M., Ogedegbe, O., Fang, Y., Porfiri, M., and Rizzo,
J.-R., “An inconspicuous, integrated electronic travel aid for visual impairment,” ASME Letters in Dynamic
Systems and Control 1(4), 041004 (2021).
[23] Boldini, A., Ma, X., Rizzo, J.-R., and Porfiri, M., “A virtual reality interface to test wearable electronic
travel aids for the visually impaired,” in [Nano-, Bio-, Info-Tech Sensors and Wearable Systems], 11590,
115900Q, International Society for Optics and Photonics (2021).
[24] Stahl, A., “The diagnosis and treatment of age-related macular degeneration,” Deutsches Ärzteblatt International 117(29-30), 513 (2020).
[25] Khan, Z., Khan, F. G., Khan, A., Rehman, Z. U., Shah, S., Qummar, S., Ali, F., and Pack, S., “Diabetic retinopathy detection using VGG-NiN, a deep learning architecture,” IEEE Access 9, 61408–61416 (2021).
[26] Gagrani, M., Ndulue, J., Anderson, D., Kedar, S., Gulati, V., Shepherd, J., High, R., Smith, L., Fowler,
Z., Khazanchi, D., Nawrot, M., and Ghate, D., “What do patients with glaucoma see: a novel ipad app to
improve glaucoma patient awareness of visual field loss,” British Journal of Ophthalmology (2020).
[27] Phamduy, P., Rizzo, J.-R., Hudson, T. E., Torre, M., Levon, K., and Porfiri, M., “Communicating through
touch: Macro fiber composites for tactile stimulation on the abdomen,” IEEE transactions on haptics 11(2),
174–184 (2017).
[28] Boldini, A., Rizzo, J.-R., and Porfiri, M., “A piezoelectric-based advanced wearable: obstacle avoidance
for the visually impaired built into a backpack,” in [Nano-, Bio-, Info-Tech Sensors, and 3D Systems IV],
11378, 1137806, International Society for Optics and Photonics (2020).
Proc. of SPIE Vol. 12045 1204504-7
... ETAs are often complex and even counter-intuitive, such that their use requires a significant amount of training [18]. To address these problems, there is an immediate need for a platform that can: i) support the development, testing, and refinement out of the prototype stage of ETAs, targeted to end-users; and ii) train persons with VIs to use ETAs within realistic scenarios, while limiting their risk of injury in the process [22][23][24]. ...
Article
Full-text available
Visual impairment represents a significant health and economic burden affecting 596 million globally. The incidence of visual impairment is expected to double by 2050 as our population ages. Independent navigation is challenging for persons with visual impairment, as they often rely on non-visual sensory signals to find the optimal route. In this context, electronic travel aids are promising solutions that can be used for obstacle detection and/or route guidance. However, electronic travel aids have limitations such as low uptake and limited training that restrict their widespread use. Here, we present a virtual reality platform for testing, refining, and training with electronic travel aids. We demonstrate the viability on an electronic travel aid developed in-house, consist of a wearable haptic feedback device. We designed an experiment in which participants donned the electronic travel aid and performed a virtual task while experiencing a simulation of three different visual impairments: age-related macular degeneration, diabetic retinopathy, and glaucoma. Our experiments indicate that our electronic travel aid significantly improves the completion time for all the three visual impairments and reduces the number of collisions for diabetic retinopathy and glaucoma. Overall, the combination of virtual reality and electronic travel aid may have a beneficial role on mobility rehabilitation of persons with visual impairment, by allowing early-phase testing of electronic travel aid prototypes in safe, realistic, and controllable settings.
... Several researchers have designed virtual/augmented reality (VR/AR) systems for training persons with VI in the navigation of unknown environment [17][18][19][20] and in the proper use of ETAs. [21][22][23][24][25] By creating customized visual and auditory cues, VR/AR offers the unique possibility to train and assist users with a specific task in virtual environments, which are safe, controllable, realistic, and engaging. In fact, VR/AR improve the training process by offering virtual settings in which real stimuli are perceived through multi-sensory feedback modalities. ...
Article
Full-text available
Background Visual disability is a growing problem for many middle-aged and older adults. Conventional mobility aids, such as white canes and guide dogs, have notable limitations that have led to increasing interest in electronic travel aids (ETAs). Despite remarkable progress, current ETAs lack empirical evidence and realistic testing environments and often focus on the substitution or augmentation of a single sense. Objective This study aims to (1) establish a novel virtual reality (VR) environment to test the efficacy of ETAs in complex urban environments for a simulated visual impairment (VI) and (2) evaluate the impact of haptic and audio feedback, individually and combined, on navigation performance, movement behavior, and perception. Through this study, we aim to address gaps to advance the pragmatic development of assistive technologies (ATs) for persons with VI. Methods The VR platform was designed to resemble a subway station environment with the most common challenges faced by persons with VI during navigation. This environment was used to test our multisensory, AT-integrated VR platform among 72 healthy participants performing an obstacle avoidance task while experiencing symptoms of VI. Each participant performed the task 4 times: once with haptic feedback, once with audio feedback, once with both feedback types, and once without any feedback. Data analysis encompassed metrics such as completion time, head and body orientation, and trajectory length and smoothness. To evaluate the effectiveness and interaction of the 2 feedback modalities, we conducted a 2-way repeated measures ANOVA on continuous metrics and a Scheirer-Ray-Hare test on discrete ones. We also conducted a descriptive statistical analysis of participants’ answers to a questionnaire, assessing their experience and preference for feedback modalities. 
Results Results from our study showed that haptic feedback significantly reduced collisions (P=.05) and the variability of the pitch angle of the head (P=.02). Audio feedback improved trajectory smoothness (P=.006) and mitigated the increase in the trajectory length from haptic feedback alone (P=.04). Participants reported a high level of engagement during the experiment (52/72, 72%) and found it interesting (42/72, 58%). However, when it came to feedback preferences, less than half of the participants (29/72, 40%) favored combined feedback modalities. This indicates that a majority preferred dedicated single modalities over combined ones. Conclusions AT is crucial for individuals with VI; however, it often lacks user-centered design principles. Research should prioritize consumer-oriented methodologies, testing devices in a staged manner with progression toward more realistic, ecologically valid settings to ensure safety. Our multisensory, AT-integrated VR system takes a holistic approach, offering a first step toward enhancing users’ spatial awareness, promoting safer mobility, and holds potential for applications in medical treatment, training, and rehabilitation. Technological advancements can further refine such devices, significantly improving independence and quality of life for those with VI.
Article
Full-text available
Blind or low-vision (BLV) individuals often have reduced independent mobility, yet new aids fails in increasing it, are not adopted enough, or both. A major cause is a severe deficiency in how mobility aids are assessed in the field: there are no established methods or measures and those used often have poor relevancy, insight affordances, and reproducibility; probing how actual BLV participants regard a proposed aid and how they compare to current aids is rare; and crucially, tests feature too few BLV participants. In this work two tools are introduced to alleviate this: a portable, large-scale-exploration, virtual reality (VR) system; and a comprehensive, aid-agnostic questionnaire focused on BLV mobility. The questionnaire has been validated once with eight orientation and mobility experts and six BLV respondents. Further, both it and the VR system have been applied in aid assessment with 19 BLV participants in a separate study. The VR system is to our knowledge the first in the field designed for portable evaluation, helping considerably in recruiting adequate numbers of BLV participants, for instance by allowing for testing in participants’ homes; while also supporting reproducible and motivated tests and analyses. The questionnaire provides a systematic method to investigate respondents’ views of numerous important facets of a proposed mobility aid, and how they relate to other aids. These tools should assist in achieving a widely adopted aid that meaningfully improves its users’ mobility.
Article
Full-text available
Technological advancements have widely contributed to navigation aids. However, their large-scale adaptation for navigation solutions for visually impaired people haven’t been realized yet. Less participation of the visually impaired subject produces a designer-oriented navigation system which overshadows consumer necessity. The outcome results in trust and safety issues, hindering the navigation aids from really contribute to the safety of the targeted end user. This study categorizes electronic travel aids (ETAs) based on experimental evaluations, highlights the designer-centred development of navigation aids with insufficient participation of the visual impaired community. First the research breaks down the methodologies to achieve navigation, followed by categorization of the test and experimentation done to evaluate the systems and ranks it by maturity order. From 70 selected research articles, 51.4% accounts for simulation evaluation, 24.3% involve blindfolded-sighted humans, 22.9% involve visually impaired people and only 1.4% makes it into production and commercialization. Our systematic review offers a bird’s eye view on ETA development and evaluation and contributes to construction of navigational aids which really impact the target group of visually impaired people.
Article
Diabetic retinopathy (DR) is a disease that damages retinal blood vessels and leads to blindness. Usually, colored fundus photographs are used to diagnose this irreversible disease. The manual analysis (by clinicians) of these images is monotonous and error-prone. Hence, various hand-engineered computer vision techniques have been applied to predict the occurrence of DR and its stages automatically. However, these methods are computationally expensive, fail to extract highly nonlinear features, and hence cannot classify DR's different stages effectively. This paper focuses on classifying DR's different stages with the fewest possible learnable parameters to speed up training and model convergence. VGG16, a spatial pyramid pooling (SPP) layer, and network-in-network (NiN) blocks are stacked to make a highly nonlinear, scale-invariant deep model called VGG-NiN. The proposed VGG-NiN model can process a DR image at any scale by virtue of the SPP layer. Moreover, the stacking of NiN adds extra nonlinearity to the model and leads to better classification. The experimental results show that the proposed model performs better in terms of accuracy and computational resource utilization compared to state-of-the-art methods.
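As a rough illustration of the scale invariance this abstract attributes to the SPP layer, the sketch below (plain Python, not the authors' implementation; the grid levels and max-pooling choice are illustrative assumptions) pools a 2-D feature map over a fixed pyramid of grids, producing a vector whose length does not depend on the input size:

```python
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map into a fixed-length vector.

    For each pyramid level n, the map is split into an n x n grid and
    the maximum of each cell is taken, so the output length (the sum of
    n*n over all levels) is independent of the input dimensions.
    """
    h = len(feature_map)
    w = len(feature_map[0])
    pooled = []
    for n in levels:
        # Cell boundaries scale proportionally with the input size.
        ys = [i * h // n for i in range(n)] + [h]
        xs = [j * w // n for j in range(n)] + [w]
        for i in range(n):
            for j in range(n):
                cell = [feature_map[y][x]
                        for y in range(ys[i], ys[i + 1])
                        for x in range(xs[j], xs[j + 1])]
                pooled.append(max(cell))
    return pooled
```

Feeding this layer's output into fully connected (or NiN) layers is what lets the network accept images of any scale, since the downstream layers always see the same vector length.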
Article
The aim of this study was to determine whether there is a significant association between visual impairment (VI) and mobility functions in an elderly Japanese cohort. The subjects were part of the Fujiwara-kyo Eye Study, a cross-sectional epidemiological study of elderly individuals conducted by Nara Medical University. Participants were ≥70 years old and lived in the Nara Prefecture. All underwent comprehensive ophthalmological examinations, and VI was defined as a best-corrected visual acuity (BCVA) worse than 20/40 in the better eye. The associations between the BCVA and walking speed and one-leg standing time were determined. Medical history and health conditions were evaluated by a self-administered questionnaire. A total of 2,809 subjects with a mean age of 76.3 ± 4.8 years (± standard deviation) were studied. The individuals with VI (2.1%) had significantly slower walking speeds and shorter one-leg standing times than the non-VI individuals (1.5±0.4 vs 1.7±0.4 m/sec, P<0.01; 17.1±19.6 vs 27.6±21.3 sec, P<0.01, respectively). Univariate logistic regression found that the odds ratio (OR) for slower walking speed (<1 m/sec) in the VI individuals was significantly higher, at 7.40 (95% CI 3.36–16.30, P<0.001), than in non-VI individuals. It remained significantly higher, at 4.50 (95% CI 1.87–10.85, P = 0.001), in the multivariate logistic regression model after adjusting for the BCVA, age, sex, current smoking habit, and health conditions. Our results indicate that walking speed and one-leg standing time were significantly associated with VI.
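For a univariate binary exposure such as VI, the odds ratio reported above reduces to the cross-product of a 2x2 table. A minimal sketch (the counts in the usage test are hypothetical, not this study's data; the Wald-type confidence interval is a standard textbook approximation, not necessarily the authors' exact method):

```python
import math

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table with a Wald 95% CI.

    OR = (a*d)/(b*c); the standard error of log(OR) is
    sqrt(1/a + 1/b + 1/c + 1/d), and the CI is formed on the
    log scale and exponentiated back.
    """
    or_ = (exposed_cases * unexposed_noncases) / (
        exposed_noncases * unexposed_cases)
    se = math.sqrt(1 / exposed_cases + 1 / exposed_noncases +
                   1 / unexposed_cases + 1 / unexposed_noncases)
    lower = math.exp(math.log(or_) - 1.96 * se)
    upper = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lower, upper)
```

The multivariate OR in the abstract cannot be reproduced this way; it comes from a fitted logistic regression model that adjusts for the listed covariates simultaneously.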
Article
Purpose Glaucoma patients with peripheral vision loss have in the past subjectively described their field loss as ‘blurred’ or ‘no vision compromise’. We developed an iPad app for patients to self-characterise perception within areas of glaucomatous visual field loss. Methods Twelve glaucoma patients with visual acuity ≥20/40 in each eye and stable, reliable Humphrey Visual Fields (HVF) over 2 years were enrolled. An iPad app (held at 33 cm) allowed subjects to modify ‘blur’ or ‘dimness’ to match their perception of a 2×2 m wall-mounted poster at 1 m distance. Subjects fixated at the centre of the poster (spanning 45° of field from centre). The output was the degree of blur/dim (normal, mild, or severe) noted on the iPad image at the 54 retinal loci tested by the HVF 24-2, and was compared to threshold sensitivity values at these loci. Monocular (right eye (OD), left eye (OS)) HVF responses were used to calculate an integrated binocular (OU) visual field index (VFI). All three data sets were analysed separately. Results 36 HVF and iPad responses from 12 subjects (mean age 71±8.2 years) were analysed. The mean VFI was 77% OD, 76% OS, 83% OU. The most common iPad response reported was normal, followed by blur. No subject reported a dim response. The mean HVF sensitivity threshold was significantly associated with the iPad response at the corresponding retinal loci (for OD, OS and OU, respectively (dB): normal: 23, 25, 27; mild blur: 18, 16, 22; severe blur: 9, 9, 11). On receiver operating characteristic (ROC) curve analysis, the HVF retinal sensitivity cut-off at which subjects reported blur was 23.4 OD, 23 OS and 23.3 OU (dB). Conclusions Glaucoma subjects self-pictorialised their field defects as blur; never dim or black. Our innovation allows translation of HVF data to quantitatively characterise visual perception in patients with glaucomatous field defects.
Article
Summary Background To contribute to the WHO initiative, VISION 2020: The Right to Sight, an assessment of global vision impairment in 2020 and temporal change is needed. We aimed to extensively update estimates of global vision loss burden, presenting estimates for 2020, temporal change over three decades between 1990–2020, and forecasts for 2050. Methods We did a systematic review and meta-analysis of population-based surveys of eye disease from January, 1980, to October, 2018. Only studies with samples representative of the population and with clearly defined visual acuity testing protocols were included. We fitted hierarchical models to estimate 2020 prevalence (with 95% uncertainty intervals [UIs]) of mild vision impairment (presenting visual acuity ≥6/18 and <6/12), moderate and severe vision impairment (<6/18 to 3/60), and blindness (<3/60 or less than 10° visual field around central fixation); and vision impairment from uncorrected presbyopia (presenting near vision <N6 or <N8 at 40 cm where best-corrected distance visual acuity is ≥6/12). We forecast estimates of vision loss up to 2050. Findings In 2020, an estimated 43·3 million (95% UI 37·6–48·4) people were blind, of whom 23·9 million (55%; 20·8–26·8) were estimated to be female. We estimated 295 million (267–325) people to have moderate and severe vision impairment, of whom 163 million (55%; 147–179) were female; 258 million (233–285) to have mild vision impairment, of whom 142 million (55%; 128–157) were female; and 510 million (371–667) to have visual impairment from uncorrected presbyopia, of whom 280 million (55%; 205–365) were female. 
Globally, between 1990 and 2020, among adults aged 50 years or older, age-standardised prevalence of blindness decreased by 28·5% (–29·4 to –27·7) and prevalence of mild vision impairment decreased slightly (–0·3%, –0·8 to –0·2), whereas prevalence of moderate and severe vision impairment increased slightly (2·5%, 1·9 to 3·2; insufficient data were available to calculate this statistic for vision impairment from uncorrected presbyopia). In this period, the number of people who were blind increased by 50·6% (47·8 to 53·4) and the number with moderate and severe vision impairment increased by 91·7% (87·6 to 95·8). By 2050, we predict 61·0 million (52·9 to 69·3) people will be blind, 474 million (428 to 518) will have moderate and severe vision impairment, 360 million (322 to 400) will have mild vision impairment, and 866 million (629 to 1150) will have uncorrected presbyopia. Interpretation Age-adjusted prevalence of blindness has reduced over the past three decades, yet due to population growth, progress is not keeping pace with needs. We face enormous challenges in avoiding vision impairment as the global population grows and ages.
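The acuity thresholds defined in the Methods above can be expressed as a small classifier. This sketch (illustrative only; it uses decimal acuity and the stated cut-offs, and the alternative visual-field criterion for blindness cannot be computed from acuity alone) assigns a presenting Snellen acuity to one of the categories:

```python
def vision_category(snellen_numerator, snellen_denominator):
    """Classify presenting visual acuity into the categories above.

    Thresholds (decimal acuity): mild = worse than 6/12 but 6/18 or
    better; moderate/severe = worse than 6/18 down to 3/60;
    blindness = worse than 3/60. (Blindness may also be defined by a
    visual field < 10 degrees around central fixation, which acuity
    alone cannot capture.)
    """
    acuity = snellen_numerator / snellen_denominator
    if acuity >= 6 / 12:
        return "no impairment"
    if acuity >= 6 / 18:
        return "mild"
    if acuity >= 3 / 60:
        return "moderate/severe"
    return "blindness"
```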
Article
With a globally aging population, visual impairment is an increasingly pressing problem for our society. This form of disability drastically reduces the quality of life and constitutes a large cost to the health care system. Mobility of the visually impaired is one of the most critical aspects affected by this disability, and yet it relies on low-tech solutions, such as the white cane and guide dogs. Moreover, many individuals forgo these solutions entirely. In part, this reluctance may be explained by their obtrusiveness, which is also a strong deterrent to the adoption of many new devices. In this paper, we leverage new advancements in artificial intelligence, sensor systems, and soft electroactive materials toward an electronic travel aid with an obstacle detection and avoidance system for the visually impaired. The travel aid incorporates a stereoscopic camera platform, enabling computer vision, and a wearable haptic device that can stimulate discrete locations on the user's abdomen to signal the presence of surrounding obstacles. The proposed technology could be integrated into commercial backpacks and support belts, thereby guaranteeing a discreet and unobtrusive solution.
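The mapping from a detected obstacle to a discrete vibro-tactile stimulus on the abdomen can be sketched as follows. This is a hypothetical illustration of the idea, not the device's actual control logic; the tactor count, field of view, and sensing range are assumed parameters:

```python
def tactor_command(bearing_deg, distance_m,
                   n_tactors=5, fov_deg=90.0, max_range_m=3.0):
    """Map an obstacle's bearing and distance to a (tactor, intensity) pair.

    The horizontal field of view is divided evenly across the tactors
    worn on the abdomen, and vibration intensity grows linearly as the
    obstacle approaches. Returns None when the obstacle lies outside
    the field of view or beyond sensing range.
    """
    half = fov_deg / 2.0
    if not -half <= bearing_deg <= half or distance_m > max_range_m:
        return None
    # Normalize the bearing from [-half, half] to a tactor index [0, n-1].
    frac = (bearing_deg + half) / fov_deg
    tactor = min(int(frac * n_tactors), n_tactors - 1)
    intensity = 1.0 - distance_m / max_range_m
    return tactor, intensity
```

In a real system the bearing and distance would come from the stereoscopic camera's depth estimates, and the intensity curve would be tuned to the perceptual sensitivity of the abdomen.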
Article
Background: Age-related macular degeneration (AMD) is thought to cause approximately 9% of all cases of blindness worldwide. In Germany, half of all cases of blindness and high-grade visual impairment are due to AMD. In this review, the main risk factors, clinical manifestations, and treatments of this disease are presented. Methods: This review is based on pertinent publications retrieved by a selective search in PubMed for original articles and reviews, as well as on current position statements by the relevant specialty societies. Results: AMD is subdivided into early, intermediate, and late stages. The early stage is often asymptomatic; patients in the other two stages often have distorted vision or central visual field defects. The main risk factors are age, genetic predisposition, and nicotine consumption. The number of persons with early AMD in Germany rose from 5.7 million in 2002 to ca. 7 million in 2017. Late AMD is subdivided into the dry late form of the disease, for which there is no treatment at present, and the exudative late form, which can be treated with the intravitreal injection of VEGF inhibitors. Conclusion: More research is needed on the dry late form of AMD in particular, which is currently untreatable. The treatment of the exudative late form with VEGF inhibitors is labor-intensive and requires a close collaboration of the patient, the ophthalmologist, and the primary care physician.