Citation: Kilian, J.; Neugebauer, A.; Scherffig, L.; Wahl, S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors 2022, 22, 1859. https://doi.org/10.3390/s22051859

Academic Editor: Andrea Cataldo

Received: 27 January 2022
Accepted: 22 February 2022
Published: 26 February 2022

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
The Unfolding Space Glove: A Wearable Spatio-Visual to
Haptic Sensory Substitution Device for Blind People
Jakob Kilian 1,2, Alexander Neugebauer 2, Lasse Scherffig 1 and Siegfried Wahl 2,3,*
1 Köln International School of Design, TH Köln, 50678 Köln, Germany; jkilian@kisd.de (J.K.); lscherff@kisd.de (L.S.)
2 ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany; a.neugebauer@uni-tuebingen.de
3 Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
* Correspondence: siegfried.wahl@uni-tuebingen.de; Tel.: +49-7071-29-84512
Abstract: This paper documents the design, implementation and evaluation of the Unfolding Space
Glove—an open source sensory substitution device. It transmits the relative position and distance
of nearby objects as vibratory stimuli to the back of the hand and thus enables blind people to
haptically explore the depth of their surrounding space, assisting with navigation tasks such as
object recognition and wayfinding. The prototype requires no external hardware, is highly portable,
operates in all lighting conditions, and provides continuous and immediate feedback—all while
being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed
structured training and obstacle courses with both the prototype and a white long cane to allow
performance comparisons to be drawn between them. The subjects quickly learned how to use the
glove and successfully completed all of the trials, though they remained slower with it than with the cane.
Qualitative interviews revealed a high level of usability and user experience. Overall, the results
indicate the general processability of spatial information through sensory substitution using haptic,
vibrotactile interfaces. Further research would be required to evaluate the prototype’s capabilities
after extensive training and to derive a fully functional navigation aid from its features.
Keywords: tactile vision sensory substitution; blind; visually impaired; mobility; navigation; orientation; locomotion; open source; haptic; wearable
1. Introduction
1.1. Navigation Challenges for Blind People
Vision is the modality of human sensory perception with the highest information
capacity [1,2]. Being born blind or losing one’s sight later in life involves great challenges.
The ability to cope with everyday life independently, to be mobile in unfamiliar places, to
absorb information and, as a result, to participate equally in social, public and economic
life can be severely hampered, ultimately affecting one’s quality of life [3–6]. Society can,
of course, address this issue on many levels, for example by ensuring accessibility of
information or by designing public spaces to meet the specific needs of blind individuals
or, more generally, visually impaired people (VIPs). In addition, technical aids, devices and
apps are constantly being developed to assist VIPs with certain tasks [7,8]. These aids can
essentially be divided into three aspects: obtaining information of the surroundings (What
is written here? What is the object ahead like?), interfacing with machines and computers
(input and output) and navigation (How do I get there? Can I walk this way?). While there
are certainly large overlaps between these three aspects, this paper exclusively focuses on
the third—navigation.
Following the definition of Montello and Sas [9], navigation (often used synonymously
with mobility, orientation or wayfinding [10,11]) is the ability to perform “coordinated
Sensors 2022, 22, 1859. https://doi.org/10.3390/s22051859 https://www.mdpi.com/journal/sensors
and goal-directed movement[s] through the environment”. It can be divided into two sub-components: wayfinding (long-range navigation to a target destination, spatial orientation,
knowledge about surroundings and landmarks outside the immediate environment) and
locomotion (short-range navigation, moving in the intended direction without colliding or
getting stuck, obstacle detection and development of strategies to overcome them). This
paper will concentrate on the latter and discuss how VIPs can be supported in this by
technical aids—that is, without the help of human companions.
1.2. Ways to Assist Locomotion
One theoretical solution would be to rehabilitate (rudimentary) vision: since the 1990s,
research has been conducted into surgical measures in both retinal implants stimulating
the optic nerve and brain implants attached directly to the visual cortex. The quality of
vision restored by this kind of surgery varies widely and, even in the best cases, represents
only a fraction of the visual acuity of people with ordinary eyesight. Together with high
costs, this means that invasive measures of this kind are still far from widespread
use and have so far only been tested on a small number of people. In the medium term,
however, improving the quality of implants and simplifying surgical procedures could
make the technology available to a wider public [12,13]. With regard to navigation, however,
it is questionable whether the mere provision of visual information to the brain would
adequately address the problem. Even if sighted individuals can rely on the visual modality
for the most part, navigation and the acquisition of spatial knowledge required for it are by
no means exclusively visual tasks. Blind people (but also sighted people to some extent) use
multiple modalities and develop various strategies to master navigation and locomotion,
which can be described as multimodal or even amodal task-specific functions. There is an
ongoing discussion about how exactly VIPs—a very heterogeneous group with varying
capacities for absorbing environmental information—obtain and cognitively process spatial
knowledge [14,15].
Two well established and commercially available aids that do not attempt to assist
locomotion through vision restoration alone are the white long cane (which provides basic
spatial information about objects in close proximity) and the guide dog (which uses the
dog’s spatial knowledge to assist navigation).
Figures for prevalence of the white cane among VIPs vary greatly depending on the age
of the study population, the severity of their visual impairment and other factors [16–19],
but in one example (USA, 2008) it is as low as ~10% [16]. Even though the white cane, once
mastered, has proven to be an extremely helpful (and probably the most popular) tool for
VIPs, it comes with the drawback of having only a limited range (approx. 1 m radius) and
not being able to recognise aerial obstacles (tree branches, protruding objects) [20]. Smart
canes that give vibratory feedback, capable of recognising objects above waist height or
further away, could offer a remedy, but have, to the best of our knowledge, not yet achieved
widespread use. The reasons for this, as well as their advantages and disadvantages, have
been discussed in various publications [20–24].
Guide dogs, on the other hand, are another promising option that can bring further
advantages for blind people besides solving navigation tasks [25–28]. However, they are
even less widespread (e.g., only 1% of the blind people in the USA, 2008 [16] or 2.4% of
“the registered blind” in the U.K. [26] owned one). The reasons for this and the drawbacks of
the guide dog as a mobility aid are manifold and have been frequently discussed in the
literature.
Even among the users of these two aids, 40% reported head injuries at least once a
year (18% once a month) and as a result, 34% leave their usual routes only once or several
times a month, while 6% never do [19].
With an estimated worldwide total of 295 million people with moderate or severe
visual impairment in 2020, of whom 36 million are totally blind [29], it can be assumed
that the locomotion-related navigation tasks addressed here, among others, still represent a
global problem for VIPs and an important field of research.
1.3. Sensory Substitution as an Approach
The concept of Sensory Substitution (SS) offers a promising approach to address this
shortcoming. The basic assumption of SS is that the function of a missing or impaired
sensory modality can be replaced by stimulating another sensory modality using the
missing information. This only works because the brain is so plastic that it learns to
associate the new stimuli with the missing modality, as long as they fundamentally share
the same characteristics [30]. Surgical intervention would not be necessary because existing
modalities or sensory organs can be used instead.
modalities or sensory organs can be used instead.
There is a great amount of scientific work on the topic of SS, specifically dealing with
the substitution of visual information. The research field was established in the 1960s by a
research group around Paul Bach-y-Rita; they developed multiple variations of a Sensory
Substitution Device (SSD) stimulating the sense of touch in order to replace missing vision,
commonly called Tactile-Vision Substitution Systems (TVSS) [31–33]. Many influential
publications followed this example and developed SSDs for the tactile modality [34–37], one
even suggesting using a glove as tactile interface [38]; others addressed further modalities,
such as the auditory with so-called Auditory-Vision Substitution Systems (AVSS) [39–43].
While there are summaries of existing approaches [44,45], there are also many other, smaller
publications on the topic in the literature, often only looking at a sub-area of SS.
It should be noted that there is further work on so-called Electronic Travel Aids
(ETAs) [24,44,46]. SSDs differ from them in the sense that they pass largely unprocessed
information from the environment on to the substituting sensory modality and leave
the interpretation to the user, while ETAs provide pre-interpreted abstract information
(e.g., when to turn right or where a certain object is located).
Additionally, there is also research on SS-like assistive devices outside the scientific
context [47–50].
1.4. Brain Plasticity and Sensory Substitution
The theoretical basis of SS is summarised under the term of brain plasticity. Although
the focus of this paper is not on the neurophysiological discussion of this term, a brief
digression is nevertheless helpful in order to understand some of the design decisions
made in this project.
In general, brain plasticity describes the “adaptive capacities of the central nervous system”
and “its ability to modify its own structural organization and functioning” [30]. While
neuroscience has long assumed a fixed assignment of certain sensory and motor functions
to specific areas of the brain, we know today that the brain is capable of reorganising itself,
e.g., after brain damage [51], and, moreover, is capable of learning new sensory stimuli
not only in early development but throughout life [52]. For sensory substitution to work
]. For sensory substitution to work
and for the new neural correlate to be learned, a number of conditions are nevertheless
necessary; Bach-Y-Rita et al. [34] point out that there has to be
“(a) functional demand, (b) the sensor technology to fill that demand, and (c) the
training and psychosocial factors that support the functional demand”.
An SSD only needs a sensor that picks up the information, an interface that transmits
it to human receptors, and, finally and very importantly, the possibility for the user to
modify the sensory stimulus by motor action in order to determine the initial origin of the
information [34]. The importance of this close dependence between motor action and
sensory perception has been emphasised in many publications [33,34] and is assumed to be
the basis of any sensory experience [53].
There is still a vital discussion across disciplines about how the cognitive processing
of sensory stimuli is carried out by the brain. Worth mentioning here is the Theory of
Sensorimotor Contingencies (SMCs), which dismisses longstanding representational models and
describes the supposedly passive perception of environmental cues as an active process
that relies on regularities between action and reception that have to be learned [54]. The
literature on SSDs and the SMC theory mutually refer to each other [54,55].
1.5. Pitfalls of Existing Substitution Systems
Despite the long tradition of research on the topic of SS and numerous publications
with promising results, the concept has not yet achieved a real breakthrough. The exact
reasons for the low number of available devices and users have often been discussed and
made the subject of proposals for improvement [30,40,44,55,56].
Certain prerequisites that an SSD must meet in order to be used by a target group can
be gathered from both existing literature and methods of interaction design. These aspects
are of a very abstract nature and their implementation in practice is challenging and often
only partially achievable. The following 14 aspects or prerequisites, the first ten of which
were originally formulated as problems of existing SSDs by Chebat et al. [55], were taken
into account in the design and evaluation of the proposed SSD: learning, training, latency,
dissemination, cognitive load, orientation of the sensor, spatial depth, contrast, resolution,
costs, motor potential, preservation of sensory and motor habits, user experience and joy of
use, and aesthetic appearance. See Appendix A for a discussion.
The motivation to deal with this field arose from the technological progress since
Bach-y-Rita’s early pioneering work and his analogue experimental set-ups; improvements
in price, portability and processing power of modern digital computing devices, sensors
and other components necessary for the development of an SSD have opened up many
new possibilities in the field and facilitate the implementation of these 14 aspects, now
more than ever. Recent literature on assistive technology for VIPs also suggests that the
field is “gaining increasing prominence owing to an explosion of new interest in it from
disparate disciplines” having a “very relevant social impact” [57].
2. The Unfolding Space Glove
2.1. Previous Work
The Unfolding Space Glove is an SSD; it translates spatio-visual depth images into
information that can be sensed haptically. In a broader sense, it is a TVSS, with the original
term tactile perception deliberately being changed to haptic because of the high level of
integration of the motor system, and the term visual being changed to spatio-visual to
describe the input more accurately.
It was first drafted in previous work by Kilian in 2018 [58,59] with a focus on Interaction
Design (only available in German). However, the first prototypes of the glove were still
somewhat cumbersome and heavy, had higher latencies and were prone to errors. Nevertheless, they were
cumbersome, heavy, had higher latencies and were prone to errors. Nevertheless, they were
able to prove the functional principle and demonstrate that more research on this device is
worthwhile. Over the course of this project, the device was refined, and the prototype
tested in this study ultimately delivered promising results in the empirical tests
(see results and discussion section) and meets many of the previously defined prerequisites
(see discussion section). For more details and background information on the project please
also see the project website https://www.unfoldingspace.org (accessed on 26 January 2022).
Code, hardware, documentation and building instructions are open source and available
in the public repository https://github.com/jakobkilian/unfolding-space (accessed on
26 January 2022). Release v0.2.1 is the stable version used in this study; more recent
commits revise the content for better accessibility.
2.2. Structure and Technical Details
The Unfolding Space Glove (Figure 1) essentially consists of two parts: a USB power
bank that is worn on the upper arm (or elsewhere) and the glove itself, holding the camera,
actuators, computing unit and associated technology. The only connection between them
is the power supply USB cable. A list of the required components and photographic
documentation of the assembly process is attached in the Supplementary Materials S1 and
can be found in the aforementioned Github repository. The mere material costs of the entire
set-up are about $600, of which about two thirds go to the camera alone (see Appendix B).
Figure 1. The final prototype with labeled components.
Structurally, a TVSS typically consists of three components: the input (a camera that
captures images of the environment), the computing unit (translating these images into
patterns suitable for tactile perception), and finally the output (a tactile interface that
provides this information to the user).
The selected input system gathering 3D information on the environment is the “Pico
Flexx” Time of Flight (ToF) camera by pmdtechnologies [60]. ToF refers to a technique that
determines the distance to objects by measuring the time that actively emitted light signals
take to reach them (and bounce back to the sensor). For quite a while, SS research focused
on conventional two-dimensional greyscale images, and it is only in the last few years that
the use of 3D images or data, especially from ToF cameras, has been investigated. Due to
the advantages of today’s ToF technology compared to other 3D imaging methods (such
as structured light or stereo vision) [61], it seemed reasonable to make efforts to explore
exactly this combination. See Appendix B for a more detailed discussion of existing 3D
SSDs and the choice of the ToF camera technology.
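The ranging principle can be illustrated with a few lines of arithmetic. This is a generic pulsed-ToF sketch only; the Pico Flexx actually uses a phase-based, continuous-wave variant of the same idea, and the function below is purely illustrative:

```python
# Generic time-of-flight ranging sketch (illustrative; not the Pico Flexx's
# actual phase-shift implementation).
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to an object from the round-trip time of an emitted light signal."""
    # The light travels to the object and back, so halve the total path length.
    return C * round_trip_time_s / 2.0

# An object at the glove's 2 m maximum range returns the signal after
# roughly 13.3 nanoseconds:
t = 2 * 2.0 / C
print(round(tof_distance(t), 3))  # → 2.0
```

The tiny round-trip times involved are why ToF sensors measure phase shifts of modulated light rather than timing individual pulses directly.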
The computing unit, a Raspberry Pi Compute Module 4, is attached to the glove as
part of the Unfolding Space Carrier Board which is described more closely in Appendix C.
The output is a 3 × 3 matrix of (vibrating) linear resonant actuators (LRAs) placed on
the back of the hand in the glove. The choice of actuators and the reasons for positioning
them on the back of the hand are described in Appendix D.
2.3. Algorithm
The SSD’s translation algorithm takes the total of 224 × 171 (depth) pixels from the
ToF camera and calculates the final values of the glove’s 3 × 3 vibration pattern. Each motor
represents the object that is closest to the glove within the corresponding part of the field of
view of the camera. A detailed explanation of the algorithm can be found in Appendix E.
The code files and their documentation are attached as zip files in the Supplementary
Materials S2 but can also be accessed in the aforementioned Github repository.
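The translation step can be sketched as follows. This is a simplified Python illustration only; the actual implementation is the C++ code in the repository, and details such as the handling of invalid pixels and the exact intensity mapping are assumptions here:

```python
import numpy as np

# Simplified sketch of the depth-to-vibration translation described above.
MAX_RANGE_M = 2.0  # detection cut-off used in the study

def depth_to_vibration(depth: np.ndarray) -> np.ndarray:
    """Reduce a 171x224 depth image (metres) to a 3x3 intensity pattern in [0, 1]."""
    h, w = depth.shape
    out = np.zeros((3, 3))
    for r in range(3):
        for c in range(3):
            # One cell of the 3x3 grid over the camera's field of view.
            cell = depth[r * h // 3:(r + 1) * h // 3, c * w // 3:(c + 1) * w // 3]
            valid = cell[(cell > 0) & (cell <= MAX_RANGE_M)]  # drop out-of-range pixels
            if valid.size:
                # The closest object in the cell drives the motor; nearer = stronger.
                out[r, c] = 1.0 - valid.min() / MAX_RANGE_M
    return out

# A uniform scene at 1 m yields a medium, even vibration pattern (all cells 0.5):
pattern = depth_to_vibration(np.full((171, 224), 1.0))
```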
2.4. Summary
The resulting system achieves a frame rate of 25 fps and a measured latency of about
50 ms. About 10 ms of this is due to the rise time of the LRAs, 3 ms to the image processing
on the Raspberry Pi, and an unknown part to the operations of the Pico Flexx-specific library.
The system has a horizontal field of view of 62° and a vertical one of 45°, with a detection
range from 0.1 m to 2 m [60]. The beginning of the detection range is determined by the
limitations of the ToF method, while the 2 m maximum distance was fixed for this study
and could be adjusted in the future (the maximum of the camera is 4 m [60], with decreasing
quality in the far range). The glove by itself weighs 120 g with all its components, the
power bank and bracelet weigh 275 g together, giving a total system weight of 395 g. The
glove was produced in two sizes, each with a spare unit, in order to guarantee a good fit
for all subjects and a smooth conduct of the study.
Now that the physical system has been described in detail, the next section will explain
the study design.
3. Experimental Methods
3.1. Ethics
In order to evaluate the SSD described, a quasi-experiment was proposed and approved
by the ethics committee of the faculty of medicine at the university hospital of the
Eberhard-Karls-University Tübingen in accordance with the 2013 Helsinki Declaration. All
participants were informed about the study objectives, design and associated risks and
signed an informed consent form to publish pseudonymous case details. The individuals
shown in photographs in this paper have explicitly consented to the publication of those
by signing an additional informed consent form.
3.2. Hypotheses
The study included training and testing of the white long cane. This was not done
with the intention of pitting the two aids against each other or eventually replacing the cane
with the SSD. Rather, the aim was to be able to discuss the SSD comparatively with respect
to a controlled set of navigation tasks. In fact, the glove is designed in a way that it could be
worn and used in combination with the cane in the other hand. Testing not only both aids
but also the combination of both would, however, introduce new unknown interactions
and confounding factors. The main objective of this study thus reads: “The impact of
the studied SSD on the performance of the population in both navigation and obstacle
detection is comparable to that of the white long cane.” The hypothesis derived from this is
complex due to one problem: blind subjects usually have at least basic experience with the
white cane or have been using it on a daily basis for decades. A newly learned tool such
as the SSD can therefore hardly be experimentally compared with an internalised device
like the cane. A second group of naive sighted subjects was therefore included to test two
separate sub-hypotheses:
Hypothesis 1. Performance.
Sub-Hypothesis H1a. Non-Inferiority of SSD Performance: after equivalent structured
training with both aids, sighted subjects (no visual impairment but blindfolded, no experience with the
cane) achieve a non-inferior performance in task completion time (25 percentage points margin)
with the SSD compared to the cane in navigating an obstacle course.
Sub-Hypothesis H1b. Equivalency of Learning Progress across Groups: at the same time,
blind subjects who have received identical training (here only with the SSD) show learning
progress with the SSD equivalent (25 percentage points margin) to that of the sighted group.
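The 25-percentage-point margin in H1a can be illustrated with a simple point-estimate check. This is a hedged sketch only: the study's actual statistical analysis is not reproduced here, and reading the margin as a maximum relative slowdown in mean completion time is an assumption:

```python
# Illustrative point-estimate reading of the H1a non-inferiority margin
# (the function name and interpretation are assumptions, not the study's code).
MARGIN = 0.25  # 25 percentage points

def non_inferior(mean_time_ssd: float, mean_time_cane: float,
                 margin: float = MARGIN) -> bool:
    """SSD counts as non-inferior if its mean completion time exceeds the
    cane's by no more than the margin, relative to the cane time."""
    return mean_time_ssd <= mean_time_cane * (1.0 + margin)

print(non_inferior(58.0, 50.0))  # 16% slower → True
print(non_inferior(70.0, 50.0))  # 40% slower → False
```

A full non-inferiority test would additionally place a confidence interval around the difference rather than comparing point estimates.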
With both sub-hypotheses confirmed, one can therefore, in simple terms, make
assumptions about the effect of the SSD on the navigation of blind people compared to the
cane, if both had been learned similarly. In addition, two secondary aspects should be
investigated:
Hypothesis 2. Usability & Acceptance.
The device is easy to learn, simple to use, achieves a
high level of user enjoyment and satisfaction and thus strong acceptance rates.
Hypothesis 3. Distal Attribution.
Users report unconscious processing of stimuli and describe
the origin of these haptic stimuli distally in space at the actual location of the observed object.
3.3. Study Population
A total of 14 participants were recruited mainly through calls at the university and
through local associations for blind and visually impaired people in Cologne, Germany.
Appendix F contains a summary table with the subject data presented below. The complete
data set is also available in the study data in Supplementary Materials S3.
Six of the subjects were normally sighted and had a visual acuity of 0.3 or higher; eight
were blind (congenitally and late blind) and thus had a visual acuity of less than 0.05 and/or
a visual field of less than 10° (category 3–5 according to the ICD-10 H54.9 definition) on the
better eye. Participants’ self-reports about their visual acuity were confirmed with a finger-counting
test (1 m distance) and, if passed, with the screen-based Landolt visual acuity test
“FrACT” [62] (3 m distance) using a tactile input device (Figure 2A).
Figure 2. (A) Carrying out the “FrACT” Landolt visual acuity test with screen and tactile input device
(lower right corner). (B) The obstacle course in the study room. (C) Close-up of the grid system used
for quick rebuilding of the layouts.
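The inclusion threshold for the blind group can be written as a one-line check. The thresholds are taken from the text above; the helper itself is a hypothetical illustration, not part of the study software:

```python
# Blindness inclusion criterion as described above (ICD-10 H54.9 categories 3-5):
# acuity below 0.05 and/or visual field below 10 degrees on the better eye.
# The function is an illustrative assumption, not the study's actual tooling.
def meets_blind_criterion(acuity_better_eye: float,
                          field_deg_better_eye: float) -> bool:
    return acuity_better_eye < 0.05 or field_deg_better_eye < 10.0

print(meets_blind_criterion(0.02, 90.0))   # low acuity → True
print(meets_blind_criterion(0.30, 120.0))  # sighted-group range → False
```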
Two subjects were excluded from the evaluation despite having completed the study:
on average, subject f (cane = 0.75, SSD = 2.393) and subject z (cane = 1.214, SSD = 1.500)
caused a markedly higher number of contacts (two- to three-fold) with both aids than the
average of the remaining blind subjects (cane = 0.375, SSD = 0.643). For the former, this
can be explained by a consistently high level of nervousness when walking through the
course. With both aids, the subject changed their course very erratically in the event of
a contact, causing further contacts or even collisions right away. The performance of the
latter worsened considerably towards the end of the study, again with both aids, so much
so that the subject was no longer able to fulfil the task of avoiding obstacles at all, citing
“bad form on the day” and fatigue as reasons. In order not to influence the available data
through this apparent deviation, these two subjects were excluded from all further analysis.
The age of the remaining participants (six female, five male, one not specified) averaged
45 ± 16.65 years and ranged from 25 to 72 years. All were healthy, apart from visual
impairments, and stated that they were able to assess and perform the physical effort of
the task; none had prior experience with the Unfolding Space Glove or other visual SSDs.
All participants in the blind group had been using the white cane on a daily basis
for at least five years and/or had completed Orientation and Mobility (O&M) training. Some
occasionally use technical aids like GPS-based navigation, and one even had prior experience
using the feelspace belt (for navigation purposes only, not for augmentation of the Earth’s
magnetic field). Two reported using Blind Square from time to time and one used a
monocular.
None of the sighted group had prior experience with the white cane.
3.4. Experimental Setup
The total duration of the study per subject differed between the blind and the sighted
group, as the sighted subjects had to complete the training with both aids and the blind
subjects with the SSD only (since one inclusion criterion was experience in using the cane).
The total length thus was about 4.5 h in the blind group and 5.5 h in the sighted group.
In addition to paperwork, introduction and breaks, participants of the sighted group
received a 10-min introductory tutorial on both aids, had 45 min of training with them,
spent 60 min using them during the trials (varied slightly due to the time required for
completion) and thus reached a total wearing time of about 2 h with each aid. In the blind
group, the wearing time of the SSD was identical, while the wearing time of the cane is
lower due to the absence of tutorial and training sessions with it.
The study was divided into three study sessions, which took place at the Köln International
School of Design (TH Köln, Cologne, Germany) over the span of six weeks.
In the middle of a 130 square meter room, a 4 m wide and 7 m long obstacle course was
built (Figure 2B), bordered by 1.80 m high cardboard side walls and equipped with eight
cardboard obstacles (35 × 31 × 171 cm) distributed on a 50 cm grid (Figure 2C) according
to the predefined course layouts.
3.5. Procedure
Before the first test run (baseline), the participants received a 10-min Tutorial in which
they were introduced to the handling of the devices. Directly afterwards, they had to
complete the first Trial Session (TS). This was followed by a total of three Practice Sessions
(PS), each of them being followed by another TS, making the Tutorial, four TS and three PS
in total. The study concluded with a questionnaire at the end of the third study session after
completion of the fourth and very last TS. An exemplary timetable of the study procedure
can be found in the Supplementary Materials S4.
3.5.1. Tutorial
In the 10-min Tutorial, participants were introduced to the functional design of the
device, its components and its basic usage such as body posture and movements while
interacting with the device. At the end, the participants had the opportunity to experience
one of the obstacles with the aid and to walk through a gap between two of these obstacles.
3.5.2. Trial Sessions
Each TS consisted of seven consecutive runs in the aid condition cane and seven runs
in the condition SSD, with a flip of a coin in each TS deciding which condition to start with.
The task given verbally after the description of the obstacle course read:
“You are one meter from the start line. You are not centered, but start from an
unknown position. Your task is to cross the finish line seven meters behind the
start line by using the aid. There are eight obstacles on the way which you should
not touch. The time required for the run is measured and your contacts with
the objects are counted. Contacts caused by the hand controlling the aid are not
counted. Time and contacts are equally weighted—do not solely focus on one.
You are welcome to think out loud and comment on your decisions, but you
won’t get assistance with finishing the task.”
Contacts with the cane were not included in the statistics, as an essential aspect of
its operation is the deliberate induction of contact with obstacles. In addition, for both
aids, contacts caused by the hand guiding the aid were likewise excluded from the statistics in
order to encourage the subjects to interact freely with the aids. There was a clicking sound
positioned at the end of the course (centred and 2 m behind the finish line) to roughly
guide the direction. There was no help or other type of interference while participants
were performing the courses. Only when they accidentally turned more than 90 degrees
away from the finish line were they reminded to pay attention to the origin of the clicking
sound. Both task completion time and obstacle contacts (including a rating in mild/severe
contacts) were entered into a macro-assisted Excel spreadsheet on a hand-held tablet by the
experimenter, who was following the subjects at a non-distracting distance. The data of all
runs can be found in the study data in Supplementary Materials S3.
A total of 14 different course layouts were used (Figure 3), seven of which were
longitudinal axis mirror images of the other seven. The layout order within one aid
condition (SSD/cane) over all TS was the same for all participants and predetermined by drawing all 14 possible variations for each TS without replacement. This means that all participants went through the 14 layouts four times each, but in a different order for each TS and with varying aids, so that a memory effect can be excluded.
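The drawing procedure just described can be illustrated with a short sketch (a minimal illustration under the assumptions stated in the comments, not the original randomisation procedure):

```python
import random

def draw_layout_orders(n_layouts=14, n_sessions=4, seed=None):
    """Draw a fresh random order of all layouts for each trial session
    ("drawing without replacement"), so that every participant encounters
    each layout exactly once per session, but in varying order.
    Hypothetical sketch; layout IDs are assumed to be 1..n_layouts."""
    rng = random.Random(seed)
    return [rng.sample(range(1, n_layouts + 1), n_layouts)
            for _ in range(n_sessions)]

orders = draw_layout_orders(seed=42)
# Each session order is a permutation of all 14 layouts.
assert all(sorted(order) == list(range(1, 15)) for order in orders)
```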
The layouts were created in advance using an algorithm that distributed the obstacles
over the 50 cm grid. A sequence of 20 of these layouts was then evaluated in self-tests and with pilot subjects, leaving the final seven equally difficult layouts (Figure 3).
The study design and the experimental setup were inspired by a proposal of a standardised obstacle course for assessment of “visual function in ultra low vision and artificial vision” [63] but have been adapted due to spatial constraints and selected study objectives (e.g., testing with two groups and limited task scope only). There are two further studies suggesting a very similar setup for testing sensory substitution devices [64,65] that were not considered for the choice of this study design.
Figure 3. The seven obstacle course layouts used in the study (in non-mirrored variant).
3.5.3. Practice Session
The practice sessions were limited to 15 min and followed a fixed sequence of topics
and interaction patterns to be learned with the two aids (Figure 4A–C). In the training
sessions obstacles were arranged in varying patterns by the experimenter. Subjects received
support as needed from the experimenter and were not only allowed to touch objects in
their surroundings, but were even encouraged to do so in order to compare the stimuli
perceived by the aid with reality.
In the case of the SSD training, after initially learning the body posture and movement,
the main objective was to understand exactly this relationship between stimuli and real
object. For this purpose, the subjects went through, for example, increasingly narrow
passages with the aim of maintaining a safe distance to the obstacles on the left and right.
Later, the tasks increasingly focused on strategies for finding ways through course layouts similar to the training layouts.
Figure 4. (A) A subject practising with the SSD in the tutorial at the beginning of the study. (B) A typical task in the practice sessions: passing a gap between two obstacles. (C) A sighted subject training with the cane during a practice session.
While the training with the white cane (in the sighted group) took place in comparable spatial settings, here the subjects learned exercises from the cane programme of an O&M (orientation and mobility) training (posture, swing of the cane, gait, etc.). The experimenter himself received a
basic cane training from an O&M trainer in order to be able to carry it out in the study. The
sighted subjects were therefore not trained by an experienced trainer, but all by the same
person. At the same time, the SSD was not trained “professionally” either, as there are no
standardised training methods specifically for the device yet.
3.5.4. Qualitative Approaches
In addition to the quantitative measurements, the subjects were asked to think aloud,
to comment on their actions and to describe why they made certain decisions, both during
training and during breaks between trials. These statements were written down by hand
by the experimenter.
After completion of the last trial, the subjects were asked to fill out the final questionnaire. It consisted of three parts: firstly, the 10 statements of the System Usability Scale (SUS) test [66] on a 0–4 Likert agreement scale; secondly, 10 further custom statements on handling and usability on the same 0–4 Likert scale; and finally, seven questions on different scales and in free text about perception, suggestions for improvement and the possibility to leave a comment on the study. The questions of parts one and two were always
asked twice: once for the SSD and once for the cane. The subjects could complete this part
of the questionnaire either by handwriting or with the help of an audio survey with haptic
keys on a computer. This allowed both sighted and blind subjects to answer the questions
without being influenced by the presence of the investigator. The third part, on the other
hand, was read out to the blind subjects by the investigator, who noted down the answers.
Due to the small number of participants, the results of the questionnaire are not suitable
for drawing statistically significant conclusions, but should rather serve the qualitative
comparison of the SSD with the cane and support further developments on this or similar
SSDs. In the study data in Supplementary Materials S3 there is a list with all questionnaire
items and the Likert scale answers of items 1–20. In the Results section and in Appendix H
relevant statements made in the free text questions are included.
3.6. Analysis and Statistical Methods
A total of 784 trials in 14 different obstacle course layouts were performed over all subjects and sessions. The dependent variables were task completion time (in short: time) and number of contacts (in short: contacts).
Fixed effects were:
- group: between-subject, binary (blind/sighted)
- aid: within-subject, binary (SSD/cane)
- TS: within-subject, numerical and discrete (the four levels of training)
Variables with random effects were: layout, as well as the subjects themselves, nested within their corresponding group.
The quantitative data of the dependent variable task completion time was analysed by means of parametric statistics using a linear mixed model (LMM). In order to check whether the chosen model corresponds to the established assumptions for parametric tests, the data were analysed according to the recommendation of Zuur et al. [67]. The time variable itself was normalised in advance using a logarithmic function to meet those assumptions (referred to in the following as log time). With the assumptions met, all variables were then tested for their significance to the model and their interactions with each other. See Appendix G for details on the model, its fitting procedure, the assumption and interaction tests and corresponding plots.
Most statistical methods only test for the presence of differences between two treat-
ments and not for their degree of similarity. To test the sub-hypotheses of H1, a non-
inferiority test (H1a) and an equivalence test (H1b) were thus carried out. These check
whether the least squares (LS) means and corresponding confidence intervals (CI) of a
Sensors 2022,22, 1859 11 of 33
selected contrast exceed a given range (here 25 percentage points in both sub-hypotheses)
either in the lower or in the upper direction. In order to confirm the latter, equivalence, both directions must be significant; for non-inferiority, only the “worse” (in this case the slower) side has to be significant (since it would not falsify the sub-hypothesis if the SSD were unequally faster) [68].
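The decision logic of these range-based tests can be sketched as follows (a minimal sketch with hypothetical confidence limits on the log scale, not study estimates; the actual tests use the emmeans implementations):

```python
def range_test(ci_low, ci_high, margin=0.25):
    """Classify a contrast (here assumed: SSD minus cane on the log
    scale, positive = SSD slower) against a symmetric tolerance range
    of +/- margin. Hypothetical sketch of the test logic only."""
    if -margin < ci_low and ci_high < margin:
        return "equivalent"      # both directions significant
    if ci_high < margin:
        return "non-inferior"    # only the "worse" (slower) side matters
    return "inconclusive"

# Hypothetical confidence intervals:
assert range_test(-0.10, 0.20) == "equivalent"
assert range_test(-0.40, 0.20) == "non-inferior"  # SSD unequally faster
assert range_test(-0.10, 0.40) == "inconclusive"
```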
No statistical tests were performed on the contacts data. Since the data structure is zero-inflated and Poisson distributed, non-parametric tests such as a generalised linear mixed model would be required, resulting in low statistical power given the sample size.
Nevertheless, descriptive statistical plots of these data alongside the analysis of the log
time statistics are to be included in the next section.
All analyses were executed using the statistical computing environment R and the graphical user interface RStudio. The lme4 package was used to run LMMs. To calculate LS means, their CI and the non-inferiority/equivalence tests, the emmeans and emtrends packages were used. In this paper, averages are shown as arithmetic means with the corresponding standard deviation. For all statistical tests, an alpha level of 0.05 was chosen.
4. Results
4.1. Overview
To give an impression of the study procedure, a series of videos was made available in
high resolution at https://vimeo.com/channels/unfoldingspace (accessed on 26 January
2022). A selection of lower resolution clips is also attached to Supplementary Materials
S5. The corresponding subject identifier and the trial number can be found in the opening
credits and the descriptive text. All test persons shown here have explicitly agreed to the
publication of these recordings.
To get an overview of the gathered data, Figure 5 shows log time (the normalised time) for both aid conditions over the four TS, horizontally split by group. The plot contains all individual samples as dots in the background, summarised in boxplots including the mean (marked with the “×” sign) in addition to the common median line. The solid line represents the linear regression of the fitted LMM. In this and in following plots the variables aid and group are shown in combination with the levels sighted & SSD (S&S), sighted & cane (S&C), blind & SSD (B&S) and blind & cane (B&C).
Figure 5.
Trial session (TS) vs. normalised task completion time (log time) of all samples as boxplots,
means and regression lines from the linear mixed model. Split horizontally by group (blind/sighted)
and subdivided by aid in colour (SSD/cane).
Contacts show a similar picture (Figure 6): in the last TS, sighted subjects touched an average of 0.38 ± 0.73 objects per run with the SSD and only 0.12 ± 0.33 with the cane. Blind subjects also showed a comparable response in the last TS, touching an average of 0.45 ± 0.89 objects per run with the SSD, while touching only an average of 0.4 ± 0.63 objects with the cane. As mentioned, these differences cannot be reasonably tested.
Figure 6. Trial sessions (TS) vs. average number of contacts with obstacles per run. The upper half shows the means per TS; the lower one shows a running average (running over all samples) calculated to better visualise trends. Note the wide window of 14 samples per data point. The plot is split horizontally by group (blind/sighted) and subdivided by aid in colour (SSD/cane).
4.2. Learning Effect
As a basic assumption for the subsequent hypothesis tests, the learning effect on
performance has to be investigated. A significant effect of TS on log time was expected
in all combinations except B&C, in which the subjects were already familiar with the aid.
Still, with habituation to the task and familiarity with the conditions of the course (size,
type of obstacles, etc.), a negligible effect could be expected in all four conditions, i.e., also
in the case of B&C. The test was carried out by adjusting the base level of the LMM to
the four different combinations. TS shows a statistically significant effect on log time in condition S&S (intercept = 4.37, slope = −0.13, SE = 0.04, p = 0.007), S&C (intercept = 4.03, slope = −0.15, SE = 0.04, p = 0.002) and in B&S (intercept = 3.91, slope = −0.16, SE = 0.04, p = 0.001), but not in B&C (intercept = 3.08, slope = −0.06, SE = 0.04, p = 0.137). The expected learning progress was thus confirmed by the tests; the general habituation slope over the course of the study for all subjects and aids was around −0.06 s on log scale.
4.3. Hypothesis H1|Performance
Given the knowledge of the significant effects of group, aid and TS and their interactions, the two sub-hypotheses H1a and H1b could be tested.
4.3.1. H1a|Non-Inferiority of SSD Performance
In order to accept H1a, two separate tests were carried out: firstly, a pairwise comparison using Tukey’s HSD test between conditions S&S and S&C—both under the condition of TS being 4 (after the last training): using the Kenward–Roger approximation, a significant difference (p < 0.001) was found, with the log time LS means predicted to be 54.43% slower with the SSD (3.87 s, SE = 0.13) than with the cane (3.26 s, SE = 0.13). Back transformed to the response scale, this gives a predicted time of 47.9 s (95% CI [35.8 s, 64.2 s]) for the SSD and 31.0 s (95% CI [23.2 s, 41.6 s]) for the cane to complete one obstacle course run. Secondly, the test for non-inferiority between these two conditions (using the Kenward–Roger approximation and the Šidák correction) was performed and found to be non-significant (p value = 1). This means that the SSD is significantly different from the cane and has to be considered inferior beyond the predefined tolerance range of 25%. H0 of H1a thus could not
be rejected. The difference between SSD and cane under the condition investigated can also
be observed in Figure 7A.
Sensors 2022,22, 1859 13 of 33
For contacts, as mentioned, a statistical analysis is not feasible. Still, the results can be compared descriptively in the previous plot, Figure 6. As already reported, the measured mean contacts per run in the sighted group were 0.38 ± 0.73 objects with the SSD but only 0.12 ± 0.33 with the cane.
4.3.2. H1b|Equivalence of Learning Progress
To accept H1b, the learning progress of the SSD had to be compared between the two
groups, again by using two tests: firstly, the estimated effect of TS on log time differed by only 0.04 s (SE = 0.06) between the S&S (−0.12 s, SE = 0.04) and B&S (−0.16 s, SE = 0.04) conditions, while not being significant (p value = 0.89). This means that there is no proof at this point that the learning progress between the groups is different. Secondly, to examine the degree of similarity of the given contrast, an equivalence test was carried out (again using the Kenward–Roger approximation and Šidák correction): a significant p value of 0.016 indicated the presence of equivalence of learning progress in both groups with the SSD (within the predefined tolerance range of 25%). H0 of H1b thus could be rejected. The
learning progress of the SSD across the groups can also be observed in Figure 7B.
Again, for contacts, a statistical analysis is not feasible. Nevertheless, it appears useful to compare the progress of the sighted and blind curves with the SSD in the previous plot, Figure 6. In particular, the running average described in this figure suggests a quite similar progress between those two.
Figure 7. Graphical representations of the two main hypothesis tests. (A) Sub-hypothesis H1a: performance with the two devices (SSD/cane) after the last training (only Trial Session 4). Least square means and the corresponding confidence interval extracted from the linear mixed model are shown to demonstrate the test. (B) Sub-hypothesis H1b: the learning curves with the SSD in both groups (sighted/blind) over the course of the training (all Trial Sessions) are shown. The test only compares the linear regression slopes, but means and the underlying data points are plotted here as well.
4.4. Hypothesis H2|Usability & Acceptance
In Figure 8, one can find a tabular evaluation and a graphical representation of all
Likert scale questions of the first and second part of the questionnaire (including the
SUS). In general, one can see that the degree of agreement between the SSD and cane ratings was comparatively high. The discussion section therefore looks at the questions that show the greatest average deviations and discusses and contextualises them. There is no graphical representation of questions 21–27, as they were in free text or on other scales.
4.4.1. System Usability Score
The System Usability Score, which was queried in the first 10 questionnaire items,
results from the addition of all scores multiplied by 2.5 and thus ranges from 0 to a
maximum of 100 possible points. The SSD achieved an average SUS of 50 in this study,
while the cane scored quite similarly at 53. As expected, the cane performed slightly better
in the blind group (54) than in the sighted (52), while the SSD performed better in the
Sensors 2022,22, 1859 14 of 33
sighted (51) than in the blind (49). The differences are rather negligible due to the sample
size but can be seen as an indicator of a quite comparable assessment of both systems.
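The scoring rule described above can be written out as a short sketch, assuming the standard SUS convention of alternating item polarity (odd items positively, even items negatively worded) adapted to the 0–4 scale used in this study:

```python
def sus_score(responses):
    """System Usability Scale score from ten responses on a 0-4 Likert
    scale (0 = highly disagree, 4 = highly agree). Odd-numbered items
    (positively worded) count as given; even-numbered items (negatively
    worded) are inverted. Sketch based on the standard SUS convention,
    assumed to carry over to the 0-4 variant used here."""
    assert len(responses) == 10
    total = sum(r if i % 2 == 0 else 4 - r   # i is 0-based
                for i, r in enumerate(responses))
    return total * 2.5                       # resulting range: 0-100

# An all-neutral answer sheet lands exactly in the middle of the scale:
assert sus_score([2] * 10) == 50.0
```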
Figure 8. A summary of all Likert scale questionnaire items (diverging bar charts of the responses from “Highly Disagree” to “Highly Agree”). The results are summarised for both groups (sighted/blind) and split per question according to SSD and cane. Q1–Q10 are the System Usability Score questions, Q11–Q20 the custom questions:
Q1: I think that I would like to use this system frequently.
Q2: I found the system unnecessarily complex.
Q3: I thought the system was easy to use.
Q4: I think that I would need the support of a technical person to be able to use this system.
Q5: I found the various functions in this system were well integrated.
Q6: I thought there was too much inconsistency in this system.
Q7: I would imagine that most people would learn to use this system very quickly.
Q8: I found the system very cumbersome to use.
Q9: I felt very confident using the system.
Q10: I needed to learn a lot of things before I could get going with this system.
Q11: I had a lot of fun using the aid.
Q12: I felt very comfortable while using the aid.
Q13: Posture and movements during use seemed very natural to me.
Q14: I think I could identify myself with the aid very well.
Q15: I think my fellow human beings can easily recognise my visual impairment because of the aid.
Q16: The aid was very disturbing for me.
Q17: In the first hour of use, the aid required a lot of concentration.
Q18: Now at the end of the study, the aid still requires a lot of concentration.
Q19: In the first hour of use, I paid very conscious attention to changes in the stimuli that I could perceive through the aid.
Q20: Now at the end of the study, I still pay very conscious attention to changes in the stimuli that I can perceive through the aid.
4.4.2. Clustered Topics in Subjects’ Statements
Statements expressed in the free interviews after each session, during the thinking aloud in the training sessions and in the free text questions of the questionnaires were grouped into five main topics (with the number of subjects mentioning them in brackets):
Cognitive Processing of the Stimuli (9).
From the statements it is quite clear that the
use of the SSD at the beginning of the study required a considerably higher cognitive
effort than the use of the cane (Appendix H, Table A3, ID 1–4). Towards the end of
the study, the subjects still reported a noticeable cognitive effort. However, they often
also noted that the experience felt different than it did at the beginning of the training
and that they could imagine that further training would reduce the effort even more
(Appendix H, Table A3, ID 5 & 6). The subjects’ reports towards the end of the study
also suggested that deeper and more far-reaching experiences might be possible with
the SSD than with the cane (Appendix H, Table A3, ID 7–9).
Perception of Space and Materiality (6).
The topic of how subjects perceived space
and its materiality is undoubtedly related to the previously cited statements about the
processing of stimuli: it is noteworthy how often spatial or sometimes visual accounts
were assigned to the experiences with the SSD, while the cane was rather described as
a tool for warning of objects ahead (Appendix H, Table A4).
Wayfinding Processes (5).
It was mentioned as an advantage of the glove that, in
contrast to the cane, an obstacle can already be detected before contact is made—
i.e., earlier and from a greater distance; a different path can then be taken in advance in order to avoid a collision with this object. In addition, some described an unpleasant feeling of actively bumping into obstacles with the cane just to get information.
However, these were mainly sighted people who were not yet used to handling the
cane (Appendix H, Table A5).
Enjoyment of Use (3).
The cane is described by some as easy to learn but therefore
less challenging and less fun (Appendix H, Table A6).
Feeling Safe and Comfortable (3).
On the other hand, subjects also report that they
feel safer and more comfortable with the cane (Appendix H, Table A7).
4.4.3. Advantages of the SSD
In question 25 (Q25) the subjects could name pros and cons of both devices. These were summarised into topics, described from the perspective of the advantages of the Unfolding Space Glove. An advantage of the cane, for example, was thus recorded as a disadvantage of the SSD. The most frequently mentioned (by three or four subjects)
advantages of the SSD were the following: more spatial awareness is possible; one can
survey a wider distance; the handling is more subtle and quiet. Frequent disadvantages
were: the higher learning effort for the SSD and the fact that one can obtain less information
about the type of objects due to missing acoustic feedback.
4.4.4. Suggestions for Improvement from Subjects
In Q26, the subjects were encouraged to list their suggestions for improvement for
a future version of the same SSD—even if these may not be technically feasible. The
two biggest wishes addressed two well-known problems of the prototype: detection of
objects close to the ground (e.g., steps, thresholds, unevenness, etc.) was requested by
five subjects and the detection of objects closer than 10 cm (where the prototype currently
cannot measure and display anything) was requested by four of them. Both would probably
have been mentioned even more frequently if they had not already been pointed out as
well-known problems at the beginning of the study. Additionally, subjects (number in
brackets) wished that they did not have to wear a battery on their arm (3) and wished
that the device was generally more comfortable (2). Some individuals mentioned that they
would like to customise the configuration (e.g., adjust the range). Some wished for the
detection of certain objects (e.g., stairs) or characteristics of the room (brightness/darkness)
to be communicated to them via vibration patterns or voice output.
4.5. Hypothesis H3|Distal Attribution
H3 could be rejected, as there were no specific indications of distal attribution of
perceptions in the subjects’ statements. However, some of the statements strongly suggest
that such patterns were already developing in some subjects (Appendix H, Table A8), which
is why this topic will be addressed in the discussion.
5. Discussion
The results presented above demonstrate not only the perceptibility and processability of 3D images by means of vibrotactile interfaces for the purpose of navigation, but also the feasibility, learnability and usefulness of the novel Unfolding Space Glove—a haptic spatio-visual sensory substitution system.
Before discussing the results, it has to be made explicitly clear that the study design
and the experimental set-up do not yet allow generalisations to be made about real-life
navigational tasks for blind people. In order to be able to define the objective of the
study precisely, many typical, everyday hazards and problems were deliberately excluded.
These include objects close to the ground (thresholds, tripping hazards and steps) or the
recognition of approaching staircases. Furthermore, auditory feedback from the cane, which allows conclusions to be drawn about the material and condition of the objects in question, was omitted. In addition, there is the risk of a technical failure or error, the
limit of a single battery charge and other smaller everyday drawbacks (waterproofness,
robustness, etc.) that the prototype currently still suffers from. Of course, many of the
points listed here could be solved technically and could be integrated into the SSD at a
later stage. However, they would require development time, would have to be evaluated
separately and can therefore not simply be taken for granted in the present state.
With that being said, it is possible to draw a number of conclusions from the data
presented. First of all, some technical aspects: the prototype withstood the entire course
of the study with no technical problems and was able to meet the requirements placed on
it that allow the sensory experience itself to be assessed as independently of the device
as possible. These include, for example, intuitive operation of the available functions,
sufficient wearing comfort, easy and quick donning and doffing, sufficient battery life and
good heat management.
The experimental design is also worth pointing out: components such as data collection via tablet, the labelled grid system for placing the obstacles plus corresponding set-up index cards and the interface for real-time monitoring of the prototype enabled the sessions to be carried out smoothly with only one experimenter. An assistant helped to set up and dismantle the room, provided additional support (e.g., by reconfiguring the courses) and documented the study in photos and videos, but neither had to be, nor was, present at every session. Observations, ratings and participant communication were carried out
exclusively by the experimenter.
Turning now to the sensory experience under study, it can be deduced that the 3D information of the environment is conveyed very directly and is easy to learn. Not only were the subjects able to successfully complete all 392 trials with the SSD, but they also showed good results as early as the first session and thus after only a few minutes of wearing the device. This is, to
the best of our knowledge, in contrast to many other SSDs in the literature, which require
several hours of training before the new sensory information can be used meaningfully (in
return, usually offering higher information density).
Nevertheless, the cane outperforms the SSD in time and contacts, in both groups in
the first TS and at every other stage of the study (also see Figures 5 and 6). Apart from the
measurements, the fact that the cane seems to be even easier to access than the glove is
also shown in the results of the questionnaire among the sighted subjects, for whom both
aids were new: while many answers between the SSD and the cane only differed slightly on average (Δ < 0.5), the deviations are greatest in questions about the learning progress. Sighted subjects thought the cane was easier to use (Q3, Δ = 1.0), could imagine that “most people would learn to use this system” more quickly (Q7, Δ = 0.8) and stated that they had to learn fewer things before they could use the system (Q10, Δ = 0.7) compared to the SSD.
5.1. Hypothesis 1 and Further Learning Progress
At the end of the study and after about 2 h of wearing time, the SSD was still about 54% slower than the cane. Even though this difference in walking speed could be acceptable if (and only if) other factors gave the SSD an advantage, H1a had to be rejected: the deviation exceeds the predefined 25% tolerance range and thus can no longer be understood as a “non-inferior performance”. H1b, however, can be accepted, indicating that, due to an “equivalent learning progress” between the groups, these results would also apply to blind people who have not yet learned either device.
Left unanswered and holding potential for further research is the question of what the
further progression of learning would look like. The fact that the cane (in comparison to
the SSD) already reached a certain saturation in the given task spectrum at the end of the
study is indicated by several aspects: looking at the performance of B&C, one can roughly
estimate the time and average contacts that blind people need to complete the course with
their well-trained aid. At the same time, the measurements of S&C are already quite close
to those of B&C at the end of the study, so that it can be assumed that only a few more hours
of training would be necessary for the sighted to align with them (within the mentioned
limited task spectrum of the experimental setup). The assumption that the learning curve
of the SSD is less saturated than that of the cane at the end of the study is supported by
sighted subjects stating that the cane required less concentration at the end of the study
(Q18, 1.8) than the SSD (Q18, 2.7). Furthermore, they expected less learning progress for
the cane with “another 3 h of training” (Q23, 2.2) in contrast to the SSD (Q23, 3.5) and also
in contrast to the learning progress they already had with the cane during the study (Q22,
3.3). Therefore, a few exciting research questions are whether the learning progress of the SSD would continue in a similar way, at which threshold value it could come to rest and whether this value would eventually be equal to or better than that of the cane. Note that
the training time of 2 h in this study is far below that of many other publications on SS; one
often-cited study e.g., reached 20–40 h of training with most and 150 h with one particular
subject [32].
Another aspect that confounds the interpretation of the data is the presence of a correlation between the two dependent variables time/log time and contacts. The reason
for this is quite simple: faster walking paces lead to a higher number of contacts. A slow
walking pace, on the other hand, allows more time for the interpretation of information
and, when approaching an obstacle, to come to a halt in time or to correct the path and
not collide with the obstacle. The subjects were asked to consider time and contacts as
being equivalent. Yet these variables lack any inherent value that would allow comparing a
potential contact with loss of time, for example. Personal preference may also play a role in
the weighting of the two variables: fear of collisions with objects (possibly overrepresented
in sighted people due to unfamiliarity with the task) may lead to slower speeds. At the
same time, the motivation to complete the task particularly quickly may lead to faster
speeds but higher collision rates. Several subjects reported that towards the end of the
study, they felt that they had learned the device to the point where contacts were completely
avoidable for them given some focus. This attitude may have led to a bias in the data,
which can be observed in the fact that time increased in the sighted group with both aids
towards the end of the study while the collisions continued to fall. It seems to be difficult
to solve this problem only by changing the formulation of the task and without expressing
the concrete value of an obstacle contact in relation to time (e.g., a collision adding 10 s to the time or leading to the exclusion of this trial from the evaluation).
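One way to make such a weighting explicit would be a single penalised score, sketched here (a hypothetical scoring rule; the 10 s penalty is only the example value mentioned above and was not used in the study):

```python
def penalised_time(time_s, contacts, penalty_s=10.0):
    """Collapse the two dependent variables into one score by adding a
    fixed time penalty per obstacle contact (hypothetical rule)."""
    return time_s + penalty_s * contacts

# Under this rule, a clean 45 s run beats a 30 s run with two contacts:
assert penalised_time(45.0, 0) < penalised_time(30.0, 2)
```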
5.2. Usability & Acceptance
While there is an ongoing debate about how to interpret SUS and what value the
scoring system has in the first place, the scores of the SSD (50) and the cane (53) are
comparably low. They can therefore be interpreted as merely “OK” and, in the context of
all systems tested with this instrument, tend to fall in the bottom 20% or even 10% of
percentile ranks [69].
It should be noted, however, that the score is rarely used to evaluate assistive devices at all,
which may partly explain a generally poorer performance for those. The score is, however,
suitable to “compare two versions of an application” [69]: the presented results therefore
indicate that usability of the two tested aids does not fundamentally differ in the somewhat
small experimental group. This equivalence can also be assessed from other questionnaire
items, most of which show very few deviations:
Looking at the Likert scale averages of the entire study population (blind & sighted),
the biggest deviations (Δ ≥ 0.5) can be observed in the expected recognisability of a “visual
impairment because of the aid”, which is stated to be much lower (Q15, Δ = 2.0) with the SSD
(1.8) than with the cane (3.8). Just as in the sighted group, the average of both
groups stated that the cane was easier to use (Q3, Δ = 0.8) and required less concentration
at the beginning of the study (Q17, Δ = 0.8), whereas this difference becomes negligible by
the end of the study (Q18, Δ = 0.3). Last but not least, the two aids differed in Q11 (“I had a
lot of fun using the aid”), in which the SSD scored Δ = 0.7 points better.
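For reference, the SUS values of 50 and 53 reported above follow the standard scoring rule: ten Likert items on a 1–5 scale, where odd items contribute their score minus one, even items contribute five minus their score, and the sum is scaled by 2.5. A short sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale score (0-100) from ten
    Likert responses (1-5). Odd-numbered items contribute (r - 1),
    even-numbered items (5 - r); the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i = 0 is item 1 (odd)
    return total * 2.5

# All-neutral answers (3 everywhere) give the midpoint score of 50,
# i.e. the "OK" region in which both aids landed.
assert sus_score([3] * 10) == 50.0
```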
Exemplary statements have already been presented in the results section, summarised
and classified into topics. They support the thesis that the Unfolding Space Glove achieves
its goal of being easier and quicker to learn than many other SSDs while providing users
with a positive user experience. However, the sample size and the survey methods are not
sufficient for more in-depth analyses. The presentation should rather serve the purpose of
completeness and provide insights into how the learning process was perceived by the test
persons in the course of the study.
5.3. Distal Attribution and Cognitive Processing
The phenomenon of distal attribution (sometimes called externalisation of the stimulus), in
simplified terms, describes users of an SSD reporting that they no longer consciously perceive
the stimulus at the application site on the body (here, e.g., the hand), but instead refer to
the perceived objects in space (distal/outside the body). This can also be observed, for
example, when sighted people describe visual stimuli and do not describe the perception
of stimuli on their retina, but instead the things they see at their position in space. Distal
attribution was first mentioned in early publications of Paul Bach-y-Rita [31,33], in which
participants received 20–40 h of training (one individual even 150 h) with a TVSS, and
has also been described in other publications, e.g., about AVSS devices [40]. Ever since,
it has been discussed in multifaceted and sometimes even philosophical discourses and
has been the topic of many experimental investigations [70–73]. As already described in the
results section, the statements do not indicate the existence of this specific attribution.
However, they do show a high degree of spatio-motor coupling of the stimuli and suggest
the emergence of distal-like localisation patterns. The wearing time of only 2 h, however,
was comparatively short and studies with longer wearing times would be of great interest
on this topic.
5.4. Compliance with the Criteria Set
In the introduction to this paper, 14 criteria were defined that are important for the
successful development of an SSD; see Appendix A for a description of those. Chebat
et al. [55], who collected most of them, originally did so to show problems of known SSD
proposals. In the design and development process of the Unfolding Space Glove these
criteria played a crucial role from the very start. Now, with the findings of this study in
mind, it is time to examine to what degree it can meet the list of criteria by classifying six of
its key aspects (with numbers of the respective criteria in parentheses):
Open Source and Open Access.
The research, the material, the code and all blueprints
of the device, are open source and open access. In the long run, this can lead to lower
costs (10) of the SSD and a higher dissemination (4), as development expenses are
already eliminated; the device can theoretically even be reproduced by the users
themselves.
Low Complexity of Information.
The complexity, and thus the resolution, of the
information provided by the SSD is kept at a low level in order to offer low entry barriers
when learning (1) the SSD. The user experience and enjoyment (13) is not affected much
by the cognitive load (5). This requirement is of course in tension with the problems of
low resolution (9).
Usage of 3D Input.
The use of spatial depth (7) images from a 3D camera inherently
provides suitable information for locomotion, reduces the cognitive load (5) of extracting
them from conventional two-dimensional images and is independent of lighting
situations and bad contrast (8).
Using Responsive Vibratory Actuators as Output.
Vibration patterns can be felt
(in different qualities) all over the body, are non-invasive, non-critical and have no
medically proven harmful effect. Linear resonant actuators (LRAs) offer low latency
(3), are durable, do not heat up too much and are still cost (10) effective, although they
require special integrated circuits to drive them.
Positioning at the Back of the Hand.
The site of stimulation on the back of the hand,
although disadvantaged by factors such as total available surface or density of
vibrotactile receptors [74], proved to be a suitable site of stimulation with regard to
several aspects: a fairly natural posture of the hand when using the device enables a
discreet body posture, does not interfere with the overall aesthetic appearance (14) and
preserves sensory and motor habits (12). The orientation of the Sensor (6) on the back of the
hand is expected to be quite accurate, as we can use our hands for detailed motor actions
and have a high proprioceptive precision in our elbow and shoulder [75]. Last but not
least, the hand has a high motor potential (11) (rotation and movement in three axes),
facilitating the sensorimotor coupling process.
Thorough Product & Interaction Design.
A good design does not consist of the
visible shell alone. Functionality, interaction design and product design must be considered
holistically and profoundly, and in the end they pay off in many aspects apart from
the aesthetic appearance (14) itself, such as almost all of the key aspects discussed in this
section, and in user experience and joy of use (13).
6. Conclusions
The Unfolding Space Glove, a novel wearable haptic spatio-visual sensory substitution
system, has been presented in this paper. The glove transforms three-dimensional depth
images from a time of flight camera into vibrotactile stimuli on the back of the hand. Blind
users can thus haptically explore the depth of the space surrounding them and obstacles
contained therein by moving their hand. The device, in its somewhat limited functional
scope, can already be used and tested without professional support and without the need
for external hardware or specific premises. It is already highly portable and offers a continuous
and very immediate feedback, while its design is unobtrusive and discreet.
In a study with eight blind and six sighted (but blindfolded) subjects, the device was
tested and evaluated in obstacle courses. It could be shown that all subjects were able
to learn the device and successfully complete the courses presented to them. Handling
has low entry barriers and can be learned almost intuitively in a few minutes, with the
learning progress between blind and sighted subjects being fairly comparable. However, at
the end of the study and after about 2 h of wearing the device, the sighted subjects were
significantly slower (by about 54%) in solving the courses with the glove compared to the
white long cane, which they had used and trained with for the same amount of time.
The device meets many basic requirements that a novel SSD has to fulfil in order
to be accepted by the target group. This is also reflected in the fact that the participants
reported a level of user satisfaction and usability that is—despite its different functions and
complexity—quite comparable to that of the white long cane.
The results in the proposed experimental set-up are promising and confirm that depth
information presented to the tactile system can be cognitively processed and used to
strategically solve navigation tasks. It remains open how much improvement could be
achieved in another two or more hours of training with the Unfolding Space Glove. On
the other hand, the results are of limited applicability to real-world navigation for blind
people: too many basic requirements for a navigation aid system (e.g., detection of ground
level objects) are not yet included in the functional spectrum of the device and would have
to be implemented and tested in further research.
Supplementary Materials:
The following supporting information can be downloaded at: https:
//www.mdpi.com/article/10.3390/s22051859/s1, S1: GitHub hardware release 0.2.1; S2: GitHub
software release 0.2.2; S3: Study data; S4: Exemplary timetable; S5: Exemplary videos (low resolution);
S6: GitHub monitor release 1.4. Code, hardware, documentation and building instructions are also
available in the public repositories https://github.com/jakobkilian/unfolding-space (accessed on 26
January 2022). Consider Release v0.2.1 for the stable version used in this study and consider more
recent commits in which the content has been revised for better accessibility. High resolution video
clips of subjects completing trials can also be found at: https://vimeo.com/channels/unfoldingspace
(accessed on 26 January 2022). For more information and updates on the project please also see https:
//www.unfoldingspace.org (accessed on 26 January 2022). All content of the project, including this
paper, is licensed under the Creative Commons Attribution (CC-BY-4.0) licence (https://creativecommons.org/licenses/by/4.0/ (accessed on 26 January 2022)). The source code itself is under the MIT
licence. Please refer to the LICENSE file in the root directory of the GitHub repository for detailed
information.
Author Contributions:
Conceptualization, J.K.; methodology, J.K. and A.N.; software, J.K.; validation,
S.W. and L.S.; formal analysis, J.K. and A.N.; investigation, J.K.; resources, J.K. and L.S.; data curation,
J.K.; writing—original draft preparation, J.K.; writing—review and editing, A.N., S.W. and L.S.;
visualization, J.K.; supervision, A.N., S.W. and L.S.; project administration, J.K.; funding acquisition,
S.W. All authors have read and agreed to the published version of the manuscript.
Funding:
Funding to conduct the study was received from the University of Tübingen (ZUK 63) as part
of the German Excellence Initiative from the Federal Ministry of Education and Research–Germany
(BMBF). This work was done in an industry-on-campus cooperation between the University of
Tübingen and Carl Zeiss Vision International GmbH. The authors received no specific funding
for this work. The funder did not have any role in the study design, data collection and analysis,
decision to publish, or preparation of the manuscript. In addition, the prototype construction was
funded—also by the BMBF—as part of the Kickstart@TH Cologne project of the StartUpLab@TH
Cologne programme (“StartUpLab@TH Cologne”, funding reference 13FH015SU8). We furthermore
acknowledge support by Open Access Publishing Fund of University of Tübingen.
Institutional Review Board Statement:
The study was conducted according to the guidelines of
the Declaration of Helsinki and approved by the Ethics Committee of the Faculty of Medicine at
the University Hospital of the Eberhard Karls University Tübingen (project number 248/2021BO2,
approved on 10 May 2021).
Informed Consent Statement:
Written informed consent has been obtained from the subjects to
publish this paper. Individuals shown in photographs in this paper have explicitly consented to the
publication of the photographs by signing an informed consent form.
Data Availability Statement:
The data presented in this study are available in Supplementary
Materials S3 at: https://www.mdpi.com/article/10.3390/s22051859/s1.
Acknowledgments:
The authors would like to thank Kjell Wistoff for his active support in setting
up, dismantling and rebuilding the study room, organising the documents and documenting the
study photographically; Trainer Regina Beschta for a free introductory O&M course and the loan of
the study long cane; Tim Becker and Matthias Krauß from Press Every Key for their open ear when
giving advice on software and hardware; The Köln International School of Design (TH Köln) and the
responsible parties for making the premises available over this long period of time; pmdtechnologies
ag for providing a Pico Flexx camera; Munitec GmbH, for providing glove samples; Connor Shafran
for the proofreading of this manuscript; All those who provided guidance in the development of the
prototype over the past years and now in the implementation and evaluation of the study. And last
but not least to the participants for volunteering for this purpose.
Conflicts of Interest:
We declare that Siegfried Wahl is a scientist at the University of Tübingen and an
employee of Carl Zeiss Vision International GmbH, as detailed in the affiliations. There were no
conflicts of interest regarding this study. The funders had no role in the design of the study; in the
collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to
publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
AIC Akaike Information Criterion
AVSS Auditory-Vision Substitution Systems
B&C blind & cane
B&S blind & SSD
CI confidence interval
ERM eccentric rotating mass
ETA electronic travel aids
LMM linear mixed model
LRA linear resonant actuator
LS least square (means)
MLE Maximum Likelihood Estimate
H0 null hypothesis
H1...n hypothesis 1 to n (numbered)
O&M orientation & mobility (training)
PS practice sessions
Q1. . .n question 1 to n (numbered)
S&C sighted & cane
S&S sighted & SSD
SMC Sensorimotor Contingencies
SS Sensory Substitution
SSD Sensory Substitution Device
SUS System Usability Score
ToF Time of Flight
TS trial session
TVSS Tactile-Vision Substitution Systems
VIP visually impaired people
Appendix A. Design Requirements for New SSDs
The following 14 aspects are crucial for the development and evaluation of new SSDs.
Points 1 to 10 originate from Chebat et al. [55], while points 11–14 were added by the
authors.
1. Learning:
since mastering an SSD often requires many hours of training, many
prospective users are discouraged. In some cases, the motor functions learned by
Visually Impaired People (VIPs) to orient themselves are contradictory to functions of
SSDs. For example, blind people keep their heads straight in Orientation & Mobility
(O&M) training, while using an SSD with a head-mounted camera requires turning
the head to get the best scan of the scene. The resulting conflict and fear of losing
established and functioning systems through training with SSDs therefore represents
a major obstacle.
2. Training:
most SSD vendors offer no or too little training material for end users.
There is also no standardised test procedure that would allow comparisons between
systems. A standardised obstacle course proposed by Nau et al. [63] to test low vision
or artificial vision is a promising approach but has, to our knowledge, not yet reached
widespread use.
3. Latency:
some systems, especially AVSS, suffer from high latency between changes in
the input image (e.g., by shifting the camera’s perspective) and the generated sensory
feedback. The resulting low immediacy between the user’s motor action and his/her
sensory perception hampers the coupling process with the substituted modality.
4. Dissemination:
information on available SSDs is simply not very widespread yet.
Scientific publications on the topic are often difficult to obtain or not accessible in a
barrier-free way for VIPs.
5. Cognitive load:
with many systems, especially those with high resolution or bandwidth
and many output actuators, the interpretation of the stimuli requires a high
level of attention and concentration, which is then lacking for other areas of processing,
orientation or navigation.
6. Orientation of the Sensor:
VIPs often find it difficult to determine the real position
of objects in the room based on the images perceived with the help of the actuators.
This is because the assignment between input and output is not always clear, or it is
not apparent which part of the scene they are looking at.
7. Spatial depth:
for many locomotion navigation tasks, the most important aspect is
spatial depth. Extracting this feature from the greyscale image of a complex scene
is time-consuming and only possible with good contrast and illumination. If the 3D
information is provided directly, there is—in theory—no need for this extraction.
8. Contrast:
in traditional TVSS systems using conventional passive colour or B/W
cameras, illumination and contrast of the scene matter. Using depth information from
an actively illuminating 3D camera (described later on) eliminates this dependence.
9. Resolution:
by downsampling the resolution of the visual information to fit the
physiological limits of the stimulated modality, a lot of acuity and therefore crucial
information can be lost. A zoom function is one way to counter this loss.
10. Costs:
as the development of SSDs can take many years of research and development,
commercially available devices are often expensive, which can deter potential users.
Costs can be reduced by using widespread electronic components instead of specialised
parts. Another way to reduce costs is to use existing devices like smartphones and
their features like the camera, the battery and the computing power. One example of
this is the AVSS “the vOICe” [40].
11. Motor Potential:
as mentioned in the brain plasticity section, a crucial factor for the
success of sensorimotor coupling in learning SSDs is movement itself. If the potential
for movements with the sensor (hence active motor influence on the sensory input)
is low, this enactive process might be hindered. Positioning the sensor or camera on the
head or trunk, as seen in some examples, influences this potential considerably, as these
sites have only limited rotation and very low potential for movement. If the head is kept
straight, as commonly practised in O&M trainings, this potential might be reduced
even more.
12. Preservation of Sensory and Motor Habits:
there is a wide range of possible restrictions of everyday sensory and motor habits that body-worn devices can cause. The
overall weight of the device, its form factor and positioning on the body need to be
carefully considered and tested, taking into account those habits and needs of VIPs.
Blocking the auditory sensory channel by using headphones can, for example, hinder
danger-recognition, being addressed and orienting by auditory information. As the
hands play a key role in object recognition when feeling objects at close range, their
mobility should also be ensured.
13. User Experience and Joy of Use:
the stimuli should provide a pleasant experience,
should not exceed pain thresholds and should not lead to overstraining, irritating
or disturbing side effects, even with prolonged use. In order to get used to a new
device, not only the purely functional benefit must be convincing. The use of the
device should also be enjoyable and trigger a positive user experience. Otherwise, the
basic acceptance could be reduced and the learning process could be hindered.
14. Aesthetic Appearance:
even if functionality is in the foreground, VIPs in particular
should not be offered devices that fail to meet basic aesthetic standards. A device
with a discreet design, which possibly fits stylistically and colour-wise into the outer
appearance of the VIP, leads to confidence in it and greater pleasure in using it. At the
same time it prevents unnecessary stigmatisation.
Appendix B. Selection of the Input System
ToF cameras have been around since the early 2000s [76], but prices were many times
higher than today due to their exclusive use in the industrial segment [77–79]. A few years
ago, ToF cameras were increasingly integrated as back cameras into smartphones to be
used for AR applications [80,81]; just recently they (next to similar 3D cameras) became
important as front cameras as well, enabling more secure biometric face recognition to
unlock the device [82].
While there certainly has been work addressing three-dimensional (x, y, depth)
input from the environment to aid navigation, the topic is not very prevalent in SS research:
some works only dealt with sub-areas of a 3D input [83–85] or described ETAs that transmit
some kind of simplified depth information while not being SSDs in the proper sense [86–90].
In addition, there is work that actually did propose SSDs that use depth data, yet they
often had problems implementing or testing the systems in practice because the technology
was not yet advanced enough (too slow, heavy and/or expensive) [91–97]. In recent years,
however, papers have been published (most of them using sound as an output) that actually
proposed and tested applied 3D systems [97–107]. This includes the Sound of Vision device,
which today probably ranks among the most advanced and sophisticated systems and has
also been evaluated in several studies. The project started with auditory feedback only, but
now also uses body-worn haptic actuators [101,102,106,108].
Overall, only very few experimented with ToF cameras [89,93] or an array of ToF
sensors [103] at all, and to the best of our knowledge there is no project that uses a low-cost
and comparably fast new-generation ToF camera for this, let alone implementing it in a
tactile/haptic SSD.
Due to its price, size, form factor, frame rate and resolution, the “Pico Flexx” ToF
camera development kit ($389 today [79]) was chosen for the prototype (Figure A1).
Figure A1. The chosen ToF camera development kit Pico Flexx from pmdtechnologies.
Appendix C. Details on the Setup
Both the bracelet—usually used to attach a smartphone for sporting activities—and the
10 Ah power bank are commercially available components. The attachment interface was
glued to the power bank to be compatible with the bracelet. In the current configuration,
the battery lasts about 8 h in ordinary use, with the majority of the power consumed by the
computing unit, which has potential to be made even more power efficient.
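As a back-of-envelope check of the reported figures (assuming the nominal 10 Ah rating applies at the output), the ~8 h runtime implies an average draw of roughly 1.25 A:

```python
def runtime_hours(capacity_ah: float, avg_current_a: float) -> float:
    """First-order estimate: runtime = capacity / average draw.
    Ignores converter losses and capacity derating, so a real pack
    delivers somewhat less."""
    return capacity_ah / avg_current_a

# The reported ~8 h from a 10 Ah pack implies an average draw of ~1.25 A.
implied_draw_a = 10.0 / 8.0
assert implied_draw_a == 1.25
assert runtime_hours(10.0, implied_draw_a) == 8.0
```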
The glove consists of two layers of fabric (modified commercially available gloves,
mainly made of polyamide), between which the motors are glued and the cables are sewn.
Each motor is covered with a protective sleeve made of heat-shrink tubing to prevent
damage to the soldered joints exposed to heavy stress.
Finally there is the Unfolding Space Carrier Board attached to the outside using velcro
(removable and exchangeable): a printed circuit board specifically made for the project
containing the drivers for controlling the motors, the ToF camera attached via USB 3.0
Micro B connector and elastic band, and finally the computing unit—a Raspberry Pi Compute
Module 4—connected via a 100-pin mezzanine connector. The CM4101016 configuration of
the Compute Module (16 GB Flash, 1 GB RAM, Wifi) that is currently in operation puts very
little load on flash and RAM, so a cheaper version could also be used.
Appendix D. Design of the Actuator System
Once the input side was set up, a suitable interface had to be found to pass on the
processed information to a sensory modality using the predefined medium of vibration.
Conventional eccentric rotating mass (ERM) actuators are the first choice for vibratory
output in many projects; they are easy to handle and affordable, but not very responsive (rise
time starting at 50 ms, usually even higher), noisy and not very durable (100–600 h lifetime)
due to the wear of parts [109]. A series of self-tests with different ERM motors, in
different arrangements and on different parts of the body confirmed this and also revealed
that the ERMs quickly become uncomfortably hot in continuous operation.
Linear resonant actuators (LRAs) instead provided a remedy in the following tests:
with only 10 ms rise time [110] and a higher lifetime (833 h tested, “thousands of hours”
possible) [109,110] these are much better suited for this purpose; they furthermore consume
less power at the same amount of acceleration (which is important for mobile devices) and
can apply a higher maximum acceleration [109,110], while in tests remaining cool enough
for direct application to the skin.
To keep the complexity at a low level, a 3 × 3 LRA matrix proved to be a good set-up
in tests. Figure A2 shows this structure on an early prototype.
Figure A2. The matrix of 3 × 3 LRA vibration motors forming the haptic interface of the prototypes—
in this case an early prototype is shown.
At this point, it should be mentioned that there is relatively recent research on the
perceived differences between these two actuator types when attached to the skin [111,112],
which to some extent contradicts these assumptions and instead suggests ERM motors for
haptic interfaces. It is yet to be seen what further research in this area will reveal.
Also note that there is a new generation of brushless direct current (BLDC) ERM
motors, not taken into account in this summary, that outperforms classic ERM
motors while being more expensive but less energy efficient [109].
Appendix E. Details on the Algorithm
Each depth image is first checked for reliability and then divided into 3 × 3 tiles,
each of which is used to create a histogram. Starting from the beginning of the measuring
range (~10 cm), these histograms are scanned for objects at each level of depth (0–255) from
near to far. If the number of pixels within a range of five depth steps (~4 cm)
exceeds a threshold, the algorithm saves the current distance value for this tile. If, however,
the number remains below the threshold, it is assumed that there is no object within this
image tile at this depth level or that there is only image noise. In this case, the algorithm
increments the depth step by one and performs the threshold comparison again until it
finally arrives at the furthest depth step. The nine values of the resulting 3 × 3 vibration
pattern are finally passed on to the vibratory actuators as amplitudes. Each motor thus
represents the object that is closest to the camera within the corresponding tile and hence
to the hand of the person using the device.
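The scan described above can be sketched compactly (the tile loop, the five-step window and the near-to-far threshold search follow the description; the concrete threshold value and the inverse-linear amplitude mapping are illustrative assumptions, not the exact prototype parameters):

```python
def depth_to_vibration(depth, threshold=30, window=5):
    """Sketch of the translation described above: split a depth image
    (values 1-255 from near to far, 0 = unreliable pixel) into 3x3 tiles,
    build a histogram per tile, scan it from near to far in windows of
    five depth steps (~4 cm) and keep the first depth whose pixel count
    exceeds the threshold."""
    rows, cols = len(depth), len(depth[0])
    amplitudes = []
    for ty in range(3):
        for tx in range(3):
            hist = [0] * 256
            for y in range(ty * rows // 3, (ty + 1) * rows // 3):
                for x in range(tx * cols // 3, (tx + 1) * cols // 3):
                    hist[depth[y][x]] += 1
            found = 255  # nothing above threshold -> treat tile as empty
            for d in range(1, 256 - window + 1):
                if sum(hist[d:d + window]) > threshold:
                    found = d  # nearest object in this tile
                    break
            amplitudes.append(255 - found)  # closer -> stronger vibration
    return amplitudes  # nine values, row-major, one per LRA

# A close object filling only the centre tile drives the middle motor hardest:
img = [[200] * 9 for _ in range(9)]      # far background everywhere
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 20                   # close object in the centre tile
out = depth_to_vibration(img, threshold=5)
assert out[4] == max(out) and out[4] > out[0]
```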
Table A1 summarises the translated modalities. For better illustration,
the three-dimensional extension of the field of view of a ToF camera is described as a
frustum (Figure A3).
Table A1. The translation principle of the algorithm.

Visual Information → Translation into Haptic Stimuli
x-axis of the frustum (horizontal extension) → x-axis of the motor matrix in 3 levels
y-axis of the frustum (vertical extension) → y-axis of the motor matrix in 3 levels
z-axis of the frustum (distance to the camera) → amplitude/vibration strength
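The frustum geometry also determines how much of the scene each tile, and hence each motor, covers: the cross-section at distance d has width 2·d·tan(FoV_h/2). A sketch (the field-of-view angles used here are placeholders, not the Pico Flexx datasheet values):

```python
import math

def frustum_cross_section(h_fov_deg: float, v_fov_deg: float, distance_m: float):
    """Width and height of the camera frustum's cross-section at a given
    distance; dividing both by three gives the patch one of the 3x3 tiles
    covers at that depth."""
    w = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
    h = 2 * distance_m * math.tan(math.radians(v_fov_deg) / 2)
    return w, h

# Hypothetical 60 x 45 degree FoV at 2 m distance:
w, h = frustum_cross_section(60.0, 45.0, 2.0)
assert abs(w - 2 * 2.0 * math.tan(math.radians(30.0))) < 1e-9
assert w > h  # the horizontal FoV is the wider one here
```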
To be as platform-independent as possible, a monitoring tool was developed in the
Unity 3D game development environment. It receives data from the glove via the UDP protocol,
as long as both are connected to the same (Wifi) network, and displays them visually. This
includes the depth image and motor values in real time as well as various technical data
such as processing speed, temperature of the Raspberry Pi core and others. It can also be
used to control the glove, switch it off temporarily, or test the motors individually.
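The glove-to-monitor link can be illustrated with a minimal UDP round trip (the JSON payload and its field names are assumptions for this sketch; the actual wire format is defined in the project repository):

```python
import socket
import json

def send_status(sock, addr, motors, core_temp_c):
    """Send one status packet: nine motor amplitudes plus core
    temperature, serialised as JSON (an illustrative format)."""
    packet = {"motors": motors, "temp": core_temp_c}
    sock.sendto(json.dumps(packet).encode(), addr)

def recv_status(sock):
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode())

# Loopback demo standing in for glove -> monitor traffic on a shared network.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                      # pick any free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_status(tx, rx.getsockname(), motors=[0] * 9, core_temp_c=51.2)
status = recv_status(rx)
assert status["temp"] == 51.2 and len(status["motors"]) == 9
tx.close()
rx.close()
```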
Figure A3. The Volume of the Field of View of a 3D Camera Described as a Frustum.
The files for the Raspberry Pi code (C++), the monitoring app (Unity 3D, C#) and the
respective documentation are open source and available on https://github.com/jakobkilian/
unfolding-space (accessed on 26 January 2022). Consider Release v0.2.1 for the stable
version used in this study and consider more recent commits in which the content has been
revised for better accessibility.
Appendix F. Data on Subjects
The following table shows a summary of the subject data from the Supplementary
Materials S3. Beyond the information given, none of the subjects used smart canes, none
used a guide dog and none had experience with another SSD.
Table A2.
Summary of the subject data. Column name abbreviations: ID = subject identifier; VI Level
= level of visual impairment according to ICD-10 H54.9 definition; Cane Exp. = years of experience
with the cane; O&M = Orientation & Mobility training completed; VA = measured decimal visual
acuity (NLP = no light perception, LP = light perception); D. Age = age at diagnosis of full blindness.
ID Age Gender VI Level Cane Exp. O&M VA D. Age Description
v 25 m 4 blind > 10 years yes LP 1 Cone-rod dystrophy
o 30 f 4 blind > 10 years yes NLP 1
Retinal detachment due to premature birth; treatment with oxygen
z 34 f 4 blind >10 years yes LP 1 Retinal detachment
j 37 m 3 blind 1–10 years yes 0.18 10
Retinitis pigmentosa; cataract (treated); macular edema; visual
acuity about 0.2; 10° field of view (ICD Level 3)
h 51 m 5 blind >10 years yes NLP 1 Blindness due to retinal tumour; glass eyes
c 56 m 4 blind 1–10 years yes LP 53
Premature birth; incubator; retinopathy affected the eyes; first
blindness in the right eye in the 6th year; left eye still had VA of 0.1–0.15 for a
long time; blindness in both eyes for 2 years.
y 64 m 5 blind >10 years yes NLP 4 Blindness due to a vaccination
f 65 f 5 blind >10 years yes NLP 0 Blind from birth; toxoplasmosis during the mother’s pregnancy
u 26 o 0 no VI - no 1.25 -
t 29 f 0 no VI - no 1.25 -
i 32 m 0 no VI - no 1.25 -
e 45 f 0 no VI - no 1.25 -
q 64 m 0 no VI - no 1.25 -
b 72 f 0 no VI - no 0.37 -
Appendix G. Testing Assumptions for Parametric Tests
The model was fitted according to the top-down procedure by Zuur et al. [113],
starting from the full model below, including all variables that could be of reasonable
relevance, and piecewise removing those found to be non-significant:

logTime ~ Group × Aid × TS + Order + (TS | Group:Subject) + (1 | Layout)

Before the model was reduced, the following assumptions (Figures A4–A6) were
tested in order to be able to use a linear mixed effects model in the first place and later
apply parametric tests to it [67]:
[Figure A4 shows three panels: (A) a histogram of the LMM residuals, (B) a QQ-plot of sample vs. theoretical quantiles, and (C) residuals plotted against observation order.]
Figure A4. Normal distribution of residuals shown in (A) histogram and (B) QQ-Plot. Additionally,
the scatterplot in (C) shows that there is no major correlation between the residuals.
[Figure A5 shows three QQ-plots with Shapiro–Wilk statistics: (A) subject intercept residuals (W = 0.96, p = 0.73), (B) subject slope residuals (W = 0.98, p = 0.97), (C) layout intercept residuals (W = 0.96, p = 0.74).]
Figure A5. QQ-Plots showing the distribution of the residuals of random effects: (A) intercept of
subjects, (B) slope of subjects (on TS), (C) intercept of layout.
[Figure A6 shows residuals plotted against (A) group and aid, (B) subjects, (C) layouts, and (D) fitted values.]
Figure A6. Graphical representations of variance between different variables to show homoscedasticity/homogeneity of variance: (A) group and aid, (B) subjects and (C) layouts. Furthermore, the residuals vs. fitted scatterplot in (D) allows checking for non-linearity, unequal error variances and outliers. Even though the variances of the error terms seem to be slightly left-slanted, the data can be seen as reasonably homoscedastic and homogeneous.
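The visual homoscedasticity check of Figure A6 can be complemented by a formal test of equal variances across the group-and-aid cells, e.g. Levene's test. The sketch below uses simulated residuals as stand-ins (cell names and data are illustrative, not values from the study).

```python
import numpy as np
from scipy import stats

# Hypothetical residuals per group-and-aid cell (stand-ins, not study data)
rng = np.random.default_rng(0)
cells = {
    "Sighted & SSD":  rng.normal(0, 0.30, 150),
    "Sighted & Cane": rng.normal(0, 0.28, 150),
    "Blind & SSD":    rng.normal(0, 0.32, 150),
    "Blind & Cane":   rng.normal(0, 0.29, 150),
}

# Levene's test: H0 = equal residual variance across the four cells;
# p > 0.05 means no evidence against homoscedasticity
lev_stat, lev_p = stats.levene(*cells.values())
print(f"Levene W = {lev_stat:.2f}, p = {lev_p:.3f}")
```

The same pattern applies per subject or per layout (panels B and C of Figure A6).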
With these assumptions met, all variables were next tested individually for their effect on the model [113]. The Akaike Information Criterion (AIC), computed from the Maximum Likelihood Estimates (MLE), was used to identify models that exclude certain variables and therefore show a better fit. Those candidates were then checked in direct comparisons with a corresponding null model using the chi-square p-value. First, the order effect (which aid was tested first in a TS) could be excluded from the full model
(p = 0.886). Layout was found to be non-significant as well (p = 0.144); however, it remains in the model because of reasonable concern about differences in layout difficulty. Subjects, being hierarchically nested within group (due to the unique assignment to one of the two groups), have a significant effect on the model (p < 0.001), as do group itself (p < 0.001), aid (p < 0.001) and TS (p < 0.001).
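The nested-model comparisons described above amount to likelihood-ratio tests: twice the difference in maximum log-likelihood is compared against a chi-square distribution whose degrees of freedom equal the number of parameters dropped, and AIC = 2k − 2 log L ranks the candidates. The numbers below are made up for illustration, not values from the study.

```python
from scipy import stats

# Hypothetical maximum log-likelihoods of the full model and of a reduced
# model with one term (e.g. the order effect) dropped
loglik_full, k_full = -210.0, 10
loglik_reduced, k_reduced = -211.5, 9

# Likelihood-ratio test: 2*(logL_full - logL_reduced) ~ chi-square(df)
lr_stat = 2 * (loglik_full - loglik_reduced)
p_value = stats.chi2.sf(lr_stat, df=k_full - k_reduced)

# AIC = 2k - 2*logL; the model with the lower AIC is preferred
aic_full = 2 * k_full - 2 * loglik_full
aic_reduced = 2 * k_reduced - 2 * loglik_reduced

print(f"LR = {lr_stat:.2f}, p = {p_value:.3f}")  # LR = 3.00
print(f"AIC: full = {aic_full:.1f}, reduced = {aic_reduced:.1f}")
```

A non-significant p-value (as for the order effect, p = 0.886) justifies keeping the smaller model.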
Lastly, several interactions can be observed in the data (Figure A7), e.g., the blind group already having experience with the cane. The interaction between all three fixed factors (group, aid and TS) has to be included (p < 0.001), as do the two-way interactions between TS and group (p < 0.001), between group and aid (p < 0.001), and between aid and TS (p = 0.007).
[Figure A7 shows four interaction plots of logTime over TS 1–4 and over group/aid (panels: Group & Aid vs. TS, Group vs. TS, Aid vs. TS, Aid vs. Group), with lines for the combination, group and aid means.]
Figure A7. Interaction plots for several variable pairs show interactions in (A) group & aid vs. TS, (B) group vs. TS, (C) a small interaction in aid vs. TS, as well as one in (D) aid vs. group.
With order being dropped and the other variables as well as the interactions being kept, the final model was:
logTime ~ Group * Aid * TS + (TS | Group:Subject) + (1 | Layout)
Appendix H. Qualitative Statements
In the following appendix, the statements made by the subjects (verbally or in the
final questionnaire) are listed in the form of tables, classified by general topics on which
the statements are based. All these topics are referred to in the results section as well.
Table A3. Abbreviations used in this and the following tables: ID = statement identifier; Subj = subject; TS = trial session.
ID Statements on Topic 1: Cognitive Processing of the Stimuli Subj TS
1 “The glove requires more concentration.” b 1
2 “I’m more occupied with the glove.” i 1
3 “The glove needs a training phase.” q 1
4 “With the glove I need even more time. The cane is easy to handle, but the glove takes longer.” u 1
5 “You could not yet talk, think or do anything else while using the SSD.” c 3
6 “Talking at the same time [when using the SSD] would still be too exhausting.” h 4
7 “You experience the size and distance of objects a bit like touching them. With the cane, on the contrary, I don’t imagine myself touching the object.” h 4
8 “I could imagine that with more training it feels like a fabric that gets thicker and thicker the deeper you go. Then you just take the path of least resistance. I even made involuntary movements with my hand at the end of the study.” v 4
9 “Sometimes I just reacted too late [when colliding with an obstacle] and I asked myself: yes, he [the SSD] did warn me, but why didn’t I react? With experience, it should become easier.” o 4
Table A4.
ID Statements on Topic 2: Perception of Space and Materiality Subj TS
1 “Structures are now being formed with the glove: you walk towards something, vibration becomes stronger, an imaginary wall is created. With the cane, on the other hand, it’s really there. [But how you can walk around it] can only be determined by three more strokes [with the cane]. With the glove, on the other hand, you directly have the whole picture. If you were to draw it, with the stick it would be points, with the glove it would be lines (with distance). So by structure I mean material or resistance.” v 2
2 “[With the SSD I had] more the feeling of actually seeing. Feeling for distances and gaps was better. More detailed and differentiated perception of the objects.” j 3
3 “[With the SSD] I was able to estimate well how far away the objects were. Partially, a spatial perception of the space in front of me was also possible. [With the SSD] I was able to perceive only what was directly in front of me, not preemptively.” j 4
4 “With the glove I can just look at what’s back there on the right.” e 3
5 “Almost »spatial« vision is possible.” e 4
6 “I imagine the object and run my hand along it with the SSD to feel its corners, edges, shape and to know where it ends. [With the glove] I imagine less the object itself, more that there is an obstacle.” t 4
Table A5.
ID Statements on Topic 3: Wayfinding Processes Subj TS
1 “I use the glove strategically: I can look further to the left or right and use it to find my way.” v 1
2 “With the glove, you can orientate yourself earlier and make decisions sooner.” i 3
3 “It is more a matter of seeking and finding the obstacles with the cane.” y 4
4 “With the glove you can anticipate better. With the cane you are dependent on the collision of the obstacle with the cane (bang bang) and then have to run somewhere else without knowing what is coming. With the glove you run directly towards the spot that is free: you first check the surroundings and then think about where you are going and don’t just start running and see what is coming.” j 1
5 “With the cane, you only notice it when you hit it. With the glove you can react before that.” o 4
6 “The cane is more like hand-to-hand combat.” c 1
7 “The cane makes you feel clumsy. It’s annoying that you make a loud noise with it.” u 1
Table A6.
ID Statements on Topic 4: Enjoyment of Use Subj TS
1 “With the stick it’s not so much fun because you can already master it safely.” t 3
2 [Talking about the glove:] “That was fun!” i 3
3 “I could imagine running only with the glove in certain situations. The glove is fun. You can really immerse yourself in it.” v 4
Table A7.
ID Statements on Topic 5: Feeling Safe and Comfortable Subj TS
1 “With the cane I’m not nervous, I walked fast, I’m very confident. In the last two runs with the glove I also felt more confident.” t 3
2 “Cane was always safe, glove got better towards the end and I was less afraid.” u 1
3 “With the cane I already felt comfortable the last time. Can run faster with stick without anything happening. More unsafe with glove.” i 3
Table A8.
ID Statements on Topic 6: Distal Attribution Subj TS
1 “I imagine the object and run my hand along it with the SSD to feel its corners, edges, shape and to know where it ends.” t 4
2 “Got more of a feeling of really seeing [With the SSD].” j 4
3 “[With the SSD] I was able to estimate well how far away the objects were. Partially, a spatial idea of the space in front of me was also possible.” j 4
4 “This time I became aware of the change of space through my movement.” i 3
5 “Structures are forming [...]. You start to decode the information and build a space.” v 2
6 “[With the SSD] you instinctively get the feeling »there’s something ahead« and you want to get out of the way.” t 4
References
1.
Proulx, M.J.; Gwinnutt, J.; Dell’Erba, S.; Levy-Tzedek, S.; de Sousa, A.A.; Brown, D.J. Other Ways of Seeing: From Behavior to
Neural Mechanisms in the Online “Visual” Control of Action with Sensory Substitution. Restor. Neurol. Neurosci. 2016, 34, 29–44. https://doi.org/10.3233/RNN-150541.
2.
Jacobson, H. The Informational Capacity of the Human Eye. Science 1951, 113, 292–293. https://doi.org/10.1126/science.113.2933.292.
3.
Montarzino, A.; Robertson, B.; Aspinall, P.; Ambrecht, A.; Findlay, C.; Hine, J.; Dhillon, B. The Impact of Mobility and Public Transport
on the Independence of Visually Impaired People. Vis. Impair. Res. 2007,9, 67–82. https://doi.org/10.1080/13882350701673266.
4.
Taipale, J.; Mikhailova, A.; Ojamo, M.; Nättinen, J.; Väätäinen, S.; Gissler, M.; Koskinen, S.; Rissanen, H.; Sainio, P.; Uusitalo, H.
Low Vision Status and Declining Vision Decrease Health-Related Quality of Life: Results from a Nationwide 11-Year Follow-up
Study. Qual. Life Res. 2019,28, 3225–3236. https://doi.org/10.1007/s11136-019-02260-3.
5.
Brown, R.L.; Barrett, A.E. Visual Impairment and Quality of Life Among Older Adults: An Examination of Explanations for the
Relationship. J. Gerontol. Ser. B 2011,66B, 364–373. https://doi.org/10.1093/geronb/gbr015.
6.
Hersh, M.A.; Johnson, M.A. Disability and Assistive Technology Systems. In Assistive Technology for Visually Impaired and Blind
People; Hersh, M.A., Johnson, M.A., Eds.; Springer: London, UK, 2008; pp. 1–50. https://doi.org/10.1007/978-1-84628-867-8_1.
7. Hersh, M.A.; Johnson, M.A. Assistive Technology for Visually Impaired and Blind People; Springer: London, UK, 2008.
8.
Pissaloux, E.; Velazquez, R. (Eds.) Mobility of Visually Impaired People: Fundamentals and ICT Assistive Technologies; Springer
International Publishing: Cham, Switzerland, 2018.
9.
Montello, D.R.; Sas, C. Human Factors of Wayfinding in Navigation. In International Encyclopedia of Ergonomics and Human
Factors; CRC Press/Taylor & Francis, Ltd.: London, UK, 2006; pp. 2003–2008.
10.
Goldschmidt, M. Orientation and Mobility Training to People with Visual Impairments. In Mobility of Visually Impaired
People: Fundamentals and ICT Assistive Technologies; Pissaloux, E., Velazquez, R., Eds.; Springer International Publishing: Cham,
Switzerland, 2018; pp. 237–261. https://doi.org/10.1007/978-3-319-54446-5_8.
11.
Loomis, J.M.; Klatzky, R.L.; Golledge, R.G.; Cicinelli, J.G.; Pellegrino, J.W.; Fry, P.A. Nonvisual Navigation by Blind and Sighted:
Assessment of Path Integration Ability. J. Exp. Psychol. Gen. 1993,122, 73–91. https://doi.org/10.1037//0096-3445.122.1.73.
12.
Kusnyerik, A.; Resch, M.; Kiss, H.; Nemeth, J. Vision Restoration with Implants. In Mobility of Visually Impaired People: Fundamentals
and ICT Assistive Technologies; Pissaloux, E., Velazquez, R., Eds.; Springer International Publishing: Cham, Switzerland, 2018;
pp. 617–630. https://doi.org/10.1007/978-3-319-54446-5_20.
13.
Edwards, T.L.; Cottriall, C.L.; Xue, K.; Simunovic, M.P.; Ramsden, J.D.; Zrenner, E.; MacLaren, R.E. Assessment of the Electronic
Retinal Implant Alpha AMS in Restoring Vision to Blind Patients with End-Stage Retinitis Pigmentosa. Ophthalmology 2018, 125, 432–443. https://doi.org/10.1016/j.ophtha.2017.09.019.
14.
Schinazi, V.R.; Thrash, T.; Chebat, D.R. Spatial Navigation by Congenitally Blind Individuals. Wiley Interdiscip. Rev. Cogn. Sci.
2016,7, 37–58. https://doi.org/10.1002/wcs.1375.
15.
Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition. In Handbook of Behavioral and Cognitive Geography;
Edward Elgar Publishing: Northampton, MA, USA, 2018; pp. 260–288. https://doi.org/10.4337/9781784717544.00024.
16.
Roentgen, U.R.; Gelderblom, G.J.; Soede, M.; de Witte, L.P. Inventory of Electronic Mobility Aids for Persons with Visual
Impairments: A Literature Review. J. Vis. Impair. Blind. 2008,102, 702–724. https://doi.org/10.1177/0145482X0810201105.
17.
Gold, D.; Simson, H. Identifying the Needs of People in Canada Who Are Blind or Visually Impaired: Preliminary Results of a
Nation-Wide Study. Int. Congr. Ser. 2005,1282, 139–142. https://doi.org/10.1016/j.ics.2005.05.055.
18.
Leonard, R.M.; Gordon, A.R. Statistics on Vision Impairment: A Resource Manual; Lighthouse International: New York, NY,
USA, 2001.
19.
Manduchi, R. Mobility-Related Accidents Experienced by People with Visual Impairment. Insight Res. Pract. Vis. Impair. Blind.
2011,4, 44–54.
20.
Kim, S.Y.; Cho, K. Usability and Design Guidelines of Smart Canes for Users with Visual Impairments. Int. J. Des. 2013, 7, 99–110.
21.
Farmer, L.W. Mobility Devices. In Foundations of Orientation and Mobility; Welsh, R.L., Blasch, B.B., Eds.; American Foundation for
the Blind: New York, NY, USA, 1980; pp. 357–412.
22.
Buchs, G.; Simon, N.; Maidenbaum, S.; Amedi, A. Waist-up Protection for Blind Individuals Using the EyeCane as a Primary and
Secondary Mobility Aid. Restor. Neurol. Neurosci. 2017,35, 225–235. https://doi.org/10.3233/RNN-160686.
23.
dos Santos, A.D.P.; Medola, F.O.; Cinelli, M.J.; Garcia Ramirez, A.R.; Sandnes, F.E. Are Electronic White Canes Better than
Traditional Canes? A Comparative Study with Blind and Blindfolded Participants. Univ. Access Inf. Soc. 2021, 20, 93–103. https://doi.org/10.1007/s10209-020-00712-z.
24.
Dakopoulos, D.; Bourbakis, N.G. Wearable Obstacle Avoidance Electronic Travel Aids for Blind: A Survey. IEEE Trans. Syst. Man
Cybern. Part C Appl. Rev. 2010,40, 25–35. https://doi.org/10.1109/TSMCC.2009.2021255.
25.
Whitstock, R.H. Dog Guides. In Foundations of Orientation and Mobility; Welsh, R.L., Blasch, B.B., Eds.; American Foundation
for the Blind: New York, NY, USA, 1980; pp. 565–580.
26.
Whitmarsh, L. The Benefits of Guide Dog Ownership. Vis. Impair. Res. 2005, 7, 27–42. https://doi.org/10.1080/13882350590956439.
27.
Wiggett-Barnard, C.; Steel, H. The Experience of Owning a Guide Dog. Disabil. Rehabil. 2008, 30, 1014–1026. https://doi.org/10.1080/09638280701466517.
28.
Knol, B.W.; Roozendaal, C.; van den Bogaard, L.; Bouw, J. The Suitability of Dogs as Guide Dogs for the Blind: Criteria and
Testing Procedures. Vet. Q. 1988,10, 198–204. https://doi.org/10.1080/01652176.1988.9694171.
29.
Bourne, R.; Steinmetz, J.D.; Flaxman, S.; Briant, P.S.; Taylor, H.R.; Resnikoff, S.; Casson, R.J.; Abdoli, A.; Abu-Gharbieh, E.; Afshin,
A.; et al. Trends in Prevalence of Blindness and Distance and near Vision Impairment over 30 Years: An Analysis for the Global
Burden of Disease Study. Lancet Glob. Health 2021,9, e130–e143. https://doi.org/10.1016/S2214-109X(20)30425-3.
30.
Bach-y-Rita, P.; Kercel, S.W. Sensory Substitution and the Human-Machine Interface. Trends Cogn. Sci. (Regul. Ed.) 2003, 7, 541–546. https://doi.org/10.1016/j.tics.2003.10.013.
31. Bach-y-Rita, P. Brain Mechanisms in Sensory Substitution; Academic Press Inc: New York, NY, USA, 1972.
32.
Bach-y-Rita, P.; Collins, C.C.; Saunders, F.A.; White, B.; Scadden, L. Vision Substitution by Tactile Image Projection. Nature 1969, 221, 963–964. https://doi.org/10.1038/221963a0.
33.
White, B.W.; Saunders, F.A.; Scadden, L.; Bach-Y-Rita, P.; Collins, C.C. Seeing with the Skin. Percept. Psychophys. 1970, 7, 23–27. https://doi.org/10.3758/BF03210126.
34.
Bach-y-Rita, P.; Tyler, M.E.; Kaczmarek, K.A. Seeing with the Brain. Int. J. Hum.-Comput. Interact. 2003, 15, 285–295. https://doi.org/10.1207/S15327590IJHC1502_6.
35.
McDaniel, T.; Krishna, S.; Balasubramanian, V.; Colbry, D.; Panchanathan, S. Using a Haptic Belt to Convey Non-Verbal
Communication Cues during Social Interactions to Individuals Who Are Blind. In Proceedings of the 2008 IEEE Interna-
tional Workshop on Haptic Audio Visual Environments and Games, Ottawa, ON, Canada, 18–19 October 2008; pp. 13–18.
https://doi.org/10.1109/HAVE.2008.4685291.
36.
Krishna, S.; Bala, S.; McDaniel, T.; McGuire, S.; Panchanathan, S. VibroGlove: An Assistive Technology Aid for Conveying Facial
Expressions. In Proceedings of the CHI ’10 Extended Abstracts on Human Factors in Computing Systems, Atlanta, GA, USA,
10–15 April 2010; ACM Press: New York, NY, USA, 2010; pp. 3637–3642. https://doi.org/10.1145/1753846.1754031.
37.
Hanneton, S.; Auvray, M.; Durette, B. The Vibe: A Versatile Vision-to-Audition Sensory Substitution Device. Appl. Bionics Biomech.
2010,7, 269–276. https://doi.org/10.1080/11762322.2010.512734.
38.
Zelek, J.S.; Bromley, S.; Asmar, D.; Thompson, D. A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding. J. Vis.
Impair. Blind. 2003,97, 621–632. https://doi.org/10.1177/0145482X0309701007.
39.
Loomis, J.M.; Golledge, R.G.; Klatzky, R.L. Navigation System for the Blind: Auditory Display Modes and Guidance. Presence
Teleoperators Virtual Environ. 1998,7, 193–203. https://doi.org/10.1162/105474698565677.
40.
Meijer, P. An Experimental System for Auditory Image Representations. IEEE Trans. Biomed. Eng. 1992, 39, 112–121. https://doi.org/10.1109/10.121642.
41.
Sampaio, E.; Maris, S.; Bach-y-Rita, P. Brain Plasticity: ‘Visual’ Acuity of Blind Persons via the Tongue. Brain Res. 2001, 908, 204–207. https://doi.org/10.1016/S0006-8993(01)02667-1.
42.
Haigh, A.; Brown, D.J.; Meijer, P.; Proulx, M.J. How Well Do You See What You Hear? The Acuity of Visual-to-Auditory Sensory
Substitution. Front. Psychol. 2013,4, 330:1–330:13. https://doi.org/10.3389/fpsyg.2013.00330.
43.
Abboud, S.; Hanassy, S.; Levy-Tzedek, S.; Maidenbaum, S.; Amedi, A. EyeMusic: Introducing a “Visual” Colorful Experience for
the Blind Using Auditory Sensory Substitution. Restor. Neurol. Neurosci. 2014, 32, 247–257. https://doi.org/10.3233/RNN-130338.
44.
Lenay, C.; Gapenne, O.; Hanneton, S.; Marque, C.; Genouëlle, C. Chapter 16. Sensory Substitution: Limits and Perspectives. In
Advances in Consciousness Research; Hatwell, Y., Streri, A., Gentaz, E., Eds.; John Benjamins Publishing Company: Amsterdam, The
Netherlands, 2003; Volume 53, pp. 275–292. https://doi.org/10.1075/aicr.53.22len.
45.
Velázquez, R. Wearable Assistive Devices for the Blind. In Wearable and Autonomous Biomedical Devices and Systems for
Smart Environment; Lecture Notes in Electrical Engineering, Springer: Berlin/Heidelberg, Germany, 2010; pp. 331–349.
https://doi.org/10.1007/978-3-642-15687-8_17.
46.
Dunai, L.D.; Lengua, I.L.; Tortajada, I.; Simon, F.B. Obstacle Detectors for Visually Impaired People. In Proceedings of the 2014
International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), Bran, Romania, 22–24 May 2014;
pp. 809–816. https://doi.org/10.1109/OPTIM.2014.6850903.
47. Radar Glasses. Available online: https://www.instructables.com/Radar-Glasses (accessed on 26 January 2022).
48.
Stuff Made Here. See in Complete Darkness with Touch. 2020. Available online: https://www.youtube.com/watch?v=8Au4
7gnXs0w (accessed on 26 January 2022).
49. HaptiVision. Available online: https://hackaday.io/project/27112-haptivision (accessed on 26 January 2022).
50.
Low Cost Tongue Vision. Available online: https://hackaday.io/project/13446-low-cost-tongue-vision (accessed on 26 January
2022).
51.
Bach-y-Rita, P. Recovery from Brain Damage. J. Neurol. Rehabil. 1992, 6, 191–199. https://doi.org/10.1177/136140969200600404.
52.
Bach-y-Rita, P. Nonsynaptic Diffusion Neurotransmission and Brain Plasticity. Neuroscientist 1996, 2, 260–261. https://doi.org/10.1177/107385849600200511.
53.
Held, R.; Hein, A. Movement-Produced Stimulation in the Development of Visually Guided Behavior. J. Comp. Physiol. Psychol.
1963,56, 872–876. https://doi.org/10.1037/h0040546.
54.
O’Regan, J.K.; Noë, A. A Sensorimotor Account of Vision and Visual Consciousness. Behav. Brain Sci. 2001, 24, 939–973. https://doi.org/10.1017/S0140525X01000115.
55.
Chebat, D.R.; Harrar, V.; Kupers, R.; Maidenbaum, S.; Amedi, A.; Ptito, M. Sensory Substitution and the Neural Correlates of
Navigation in Blindness. In Mobility of Visually Impaired People; Pissaloux, E., Velazquez, R., Eds.; Springer: Cham, Switzerland,
2018; pp. 167–200. https://doi.org/10.1007/978-3-319-54446-5_6.
56.
Elli, G.V.; Benetti, S.; Collignon, O. Is There a Future for Sensory Substitution Outside Academic Laboratories? Multisens. Res.
2014,27, 271–291. https://doi.org/10.1163/22134808-00002460.
57.
Bhowmick, A.; Hazarika, S.M. An Insight into Assistive Technology for the Visually Impaired and Blind People: State-of-the-Art
and Future Trends. J. Multimodal User Interfaces 2017,11, 149–172. https://doi.org/10.1007/s12193-016-0235-6.
58.
Kilian, J. Sensorische Plastizität—Die Körperliche Konstitution von Kognition Als Erklärungsmodell Sensorischer Sub-
stitution Und Augmentation. Bachelor’s Thesis, TH Köln—Köln International School of Design, Köln, Germany, 2018.
https://doi.org/10.13140/RG.2.2.22547.73763.
59.
Kilian, J. Unfolding Space–Haptische Wahrnehmung Räumlicher Bilder Durch Sensorische Substitution. Bachelor’s Thesis, TH
Köln—Köln International School of Design, Köln, Germany, 2018. https://doi.org/10.13140/RG.2.2.22967.16805.
60.
pmdtechnologies ag. Development Kit Brief–CamBoard Pico Flexx. 2017. Available online: https://pmdtec.com/picofamily/
wp-content/uploads/2018/03/PMD_DevKit_Brief_CB_pico_flexx_CE_V0218-1.pdf (accessed on 26 January 2022).
61. Li, L. Time-of-Flight Camera—An Introduction; Technical White Paper; Texas Instruments: Dallas, TX, USA, 2014.
62.
Bach, M. The Freiburg Visual Acuity Test—Automatic Measurement of Visual Acuity. Optom. Vis. Sci. 1996, 73, 49–53. https://doi.org/10.1097/00006324-199601000-00008.
63.
Nau, A.C.; Pintar, C.; Fisher, C.; Jeong, J.H.; Jeong, K. A Standardized Obstacle Course for Assessment of Visual Function in Ultra
Low Vision and Artificial Vision. J. Vis. Exp. 2014,84, e51205. https://doi.org/10.3791/51205.
64.
Due, B.L.; Kupers, R.; Lange, S.B.; Ptito, M. Technology Enhanced Vision in Blind and Visually Impaired Individuals. Circd Work.
Pap. Soc. Interact. 2017,3, 1–31.
65.
Paré, S.; Bleau, M.; Djerourou, I.; Malotaux, V.; Kupers, R.; Ptito, M. Spatial Navigation with Horizontally Spatialized Sounds in
Early and Late Blind Individuals. PLoS ONE 2021,16, e0247448. https://doi.org/10.1371/journal.pone.0247448.
66.
Brooke, J. SUS: A ’Quick and Dirty’ Usability Scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester,
B.A., McClelland, A.L., Eds.; Taylor and Francis: London, UK, 1996; Volume 189, pp. 189–194.
67.
Zuur, A.F.; Ieno, E.N.; Elphick, C.S. A Protocol for Data Exploration to Avoid Common Statistical Problems. Methods Ecol. Evol.
2010,1, 3–14. https://doi.org/10.1111/j.2041-210X.2009.00001.x.
68.
Walker, E.; Nowacki, A.S. Understanding Equivalence and Noninferiority Testing. J. Gen. Intern. Med. 2011, 26, 192–196. https://doi.org/10.1007/s11606-010-1513-8.
69. Brooke, J. SUS: A Retrospective. J. Usability Stud. 2013,8, 29–40.
70.
Epstein, W.; Hughes, B.; Schneider, S.; Bach-y-Rita, P. Is There Anything out There? A Study of Distal Attribution in Response to
Vibrotactile Stimulation. Perception 1986,15, 275–284. https://doi.org/10.1068/p150275.
71.
Loomis, J.M. Distal Attribution and Presence. Presence Teleoperators Virtual Environ. 1992, 1, 113–119. https://doi.org/10.1162/pres.1992.1.1.113.
72.
Auvray, M.; Hanneton, S.; Lenay, C.; O’regan, K. There Is Something out There: Distal Attribution in Sensory Substitution,
Twenty Years Later. J. Integr. Neurosci. 2005,04, 505–521. https://doi.org/10.1142/S0219635205001002.
73.
Siegle, J.H.; Warren, W.H. Distal Attribution and Distance Perception in Sensory Substitution. Perception 2010, 39, 208–223. https://doi.org/10.1068/p6366.
74.
Sato, T.; Okada, Y.; Miyamoto, T.; Fujiyama, R. Distributions of Sensory Spots in the Hand and Two-Point Discrimination
Thresholds in the Hand, Face and Mouth in Dental Students. J. Physiol.-Paris 1999, 93, 245–250. https://doi.org/10.1016/S0928-4257(99)80158-2.
75.
van Beers, R.J.; Sittig, A.C.; Denier van der Gon, J.J. The Precision of Proprioceptive Position Sense. Exp. Brain Res. 1998, 122, 367–377. https://doi.org/10.1007/s002210050525.
76.
Oggier, T.; Lehmann, M.; Kaufmann, R.; Schweizer, M.; Richter, M.; Metzler, P.; Lang, G.; Lustenberger, F.; Blanc, N. An
All-Solid-State Optical Range Camera for 3D Real-Time Imaging with Sub-Centimeter Depth Resolution (SwissRanger). Soc.
Photogr. Instrum. Eng. 2004,5249, 534–545. https://doi.org/10.1117/12.513307.
77.
Kuhlmann, U. 3D-Bilder mit Webcams. 2004. Available online: https://www.heise.de/newsticker/meldung/3D-Bilder-mit-
Webcams-121098.html (accessed on 26 January 2022).
78.
pmdtechnologies ag. Der 3D-Tiefensensor CamBoard pico flexx ist für vielfältige Anwendungsfälle und Zielgruppen Optimiert.
2017. Available online: https://www.pressebox.de/pressemitteilung/pmdtechnologies-gmbh/Der-3D-Tiefensensor-CamBoard-
pico-flexx-ist-fuer-vielfaeltige-Anwendungsfaelle-und-Zielgruppen-optimiert/boxid/867050 (accessed on 26 January 2022).
79.
Development Kit Pmd[Vision]® CamBoard Pico Flexx. Available online: https://www.automation24.com/development-kit-pmd-vision-r-camboard-pico-flexx-700-000-094 (accessed on 26 January 2022).
80.
pmdtechnologies ag. Sensing the World the Way We Do—The World’s First Project Tango Smartphone. 2016. Available
online: https://www.pressebox.com/pressrelease/pmdtechnologies-gmbh/Sensing-the-world-the-way-we-do-the-worlds-
first-Project-Tango-Smartphone/boxid/800705 (accessed on 26 January 2022).
81.
pmdtechnologies ag. Pmdtechnologies Coming Strong. 2017. Available online: https://pmdtec.com/fileadmin/user_upload/
PR_20170501_pmdinsideASUSphone.pdf (accessed on 26 January 2022).
82.
Das, A.; Galdi, C.; Han, H.; Ramachandra, R.; Dugelay, J.L.; Dantcheva, A. Recent Advances in Biometric Technology for Mobile
Devices. In Proceedings of the 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS),
Redondo Beach, CA, USA, 22–25 October 2018; pp. 1–11. https://doi.org/10.1109/BTAS.2018.8698587.
83.
Kawai, Y.; Kobayashi, M.; Minagawa, H.; Miyakawa, M.; Tomita, F. A Support System for Visually Impaired Persons Using Three-
Dimensional Virtual Sound. IEEJ Trans. Electron. Inf. Syst. 2000,120, 648–655. https://doi.org/10.1541/ieejeiss1987.120.5_648.
84.
Lu, X.; Manduchi, R. Detection and Localization of Curbs and Stairways Using Stereo Vision. In Proceedings of the
2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 4648–4654.
https://doi.org/10.1109/ROBOT.2005.1570837.
85.
Lee, D.J.; Anderson, J.D.; Archibald, J.K. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo
Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired. EURASIP J. Adv. Signal Process. 2008, 2008, 1–10. https://doi.org/10.1155/2008/385827.
86.
Bourbakis, N. Sensing Surrounding 3-D Space for Navigation of the Blind. IEEE Eng. Med. Biol. Mag. 2008, 27, 49–55. https://doi.org/10.1109/MEMB.2007.901780.
87.
Zöllner, M.; Huber, S.; Jetter, H.C.; Reiterer, H. NAVI—A Proof-of-Concept of a Mobile Navigational Aid for Visually Impaired
Based on the Microsoft Kinect. In Proceedings of the Human-Computer Interaction—INTERACT 2011, Lisbon, Portugal, 5–9
September 2011; Lecture Notes in Computer Science; Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque, P., Winckler, M.,
Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 584–587. https://doi.org/10.1007/978-3-642-23768-3_88.
88.
Dunai, L.; Peris-Fajarnés, G.; Lluna, E.; Defez, B. Sensory Navigation Device for Blind People. J. Navig. 2013, 66, 349–362. https://doi.org/10.1017/S0373463312000574.
89.
Lengua, I.; Dunai, L.; Peris-Fajarnés, G.; Defez, B. Navigation Device for Blind People Based on Time-of-Flight Technology.
Dyna-Colombia 2013,80, 33–41.
90.
Mattoccia, S.; Macrı’, P. 3D Glasses as Mobility Aid for Visually Impaired People. In Proceedings of the Computer Vision—ECCV
2014 Workshops, Zurich, Switzerland, 6–12 September 2014; Lecture Notes in Computer Science; Agapito, L., Bronstein, M.M.,
Rother, C., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 539–554. https://doi.org/10.1007/978-3-319-
16199-0_38.
91.
Balakrishnan, G.; Sainarayanan, G.; Nagarajan, R.; Yaacob, S. Wearable Real-Time Stereo Vision for the Visually Impaired. Eng.
Lett. 2007, 14, 6–14.
92.
Bujacz, M. Representing 3D Scenes through Spatial Audio in an Electronic Travel Aid for the Blind. Ph.D. Thesis, Technical
University of Lodz, Faculty of Electrical, Electronic, Computer and Control Engineering, Łód´z, Poland, 2010.
93.
Callaghan, K.J.O.; Mahony, M.J.O. An Obstacle Segmentation and Classification System for the Visually Impaired. In Proceedings
of the 2010 International Symposium on Optomechatronic Technologies, Toronto, ON, Canada, 24–27 October 2010; pp. 1–6.
https://doi.org/10.1109/ISOT.2010.5687345.
94.
Akhter, S.; Mirsalahuddin, J.; Marquina, F.; Islam, S.; Sareen, S. A Smartphone-based Haptic Vision Substitution System for the
Blind. In Proceedings of the 2011 IEEE 37th Annual Northeast Bioengineering Conference (NEBEC), Troy, NY, USA, 1–3 April
2011; IEEE: Troy, NY, USA, 2011; pp. 1–2. https://doi.org/10.1109/NEBC.2011.5778565.
95.
Cancar, L.; Díaz, A.; Barrientos, A.; Travieso, D.; Jacobs, D.M. Tactile-Sight: A Sensory Substitution Device Based on Distance-
Related Vibrotactile Flow. Int. J. Adv. Robot. Syst. 2013,10, 272. https://doi.org/10.5772/56235.
96.
Maidenbaum, S.; Chebat, D.R.; Levy-Tzedek, S.; Amedi, A. Depth-To-Audio Sensory Substitution for Increasing the Accessibility
of Virtual Environments. In Universal Access in Human-Computer Interaction. Design and Development Methods for Universal Access;
Lecture Notes in Computer Science; Stephanidis, C., Antona, M., Eds.; Springer International Publishing: Cham, Switzerland,
2014; pp. 398–406. https://doi.org/10.1007/978-3-319-07437-5_38.
97.
Stoll, C.; Palluel-Germain, R.; Fristot, V.; Pellerin, D.; Alleysson, D.; Graff, C. Navigating from a Depth Image Converted into
Sound. Appl. Bionics Biomech. 2015,2015, 543492. https://doi.org/10.1155/2015/543492.
98.
Filipe, V.; Fernandes, F.; Fernandes, H.; Sousa, A.; Paredes, H.; Barroso, J. Blind Navigation Support System Based on Microsoft
Kinect. Procedia Comput. Sci. 2012,14, 94–101. https://doi.org/10.1016/j.procs.2012.10.011.
99.
Gholamalizadeh, T.; Pourghaemi, H.; Mhaish, A.; Ince, G.; Duff, D.J. Sonification of 3D Object Shape for Sensory Substitution: An
Empirical Exploration. In Proceedings of the ACHI 2017, The Tenth International Conference on Advances in Computer-Human
Interactions, Nice, France, 19–23 March 2017; pp. 18–24.
100. Morar, A.; Moldoveanu, F.; Petrescu, L.; Moldoveanu, A. Real Time Indoor 3D Pipeline for an Advanced Sensory Substitution
Device. In Proceedings of the Image Analysis and Processing—ICIAP 2017, Catania, Italy, 11–15 September 2017; Lecture Notes
in Computer Science; Springer: Cham, Switzerland, 2017; pp. 685–695. https://doi.org/10.1007/978-3-319-68548-9_62.
101. Caraiman, S.; Morar, A.; Owczarek, M.; Burlacu, A.; Rzeszotarski, D.; Botezatu, N.; Herghelegiu, P.; Moldoveanu, F.; Strumillo,
P.; Moldoveanu, A. Computer Vision for the Visually Impaired: The Sound of Vision System. In Proceedings of the 2017
IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 1480–1489.
https://doi.org/10.1109/ICCVW.2017.175.
102. Caraiman, S.; Zvorișteanu, O.; Burlacu, A.; Herghelegiu, P. Stereo Vision Based Sensory Substitution for the Visually Impaired.
Sensors 2019,19, 2771. https://doi.org/10.3390/s19122771.
103. Katzschmann, R.K.; Araki, B.; Rus, D. Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic
Feedback Device. IEEE Trans. Neural Syst. Rehabil. Eng. 2018,26, 583–593. https://doi.org/10.1109/TNSRE.2018.2800665.
104. Massiceti, D.; Hicks, S.L.; van Rheede, J.J. Stereosonic Vision: Exploring Visual-to-Auditory Sensory Substitution Mappings in an
Immersive Virtual Reality Navigation Paradigm. PLoS ONE 2018, 13, e0199389. https://doi.org/10.1371/journal.pone.0199389.
105. Neugebauer, A.; Rifai, K.; Getzlaff, M.; Wahl, S. Navigation Aid for Blind Persons by Visual-to-Auditory Sensory Substitution: A
Pilot Study. PLoS ONE 2020,15, e0237344. https://doi.org/10.1371/journal.pone.0237344.
106. Zvorișteanu, O.; Caraiman, S.; Lupu, R.G.; Botezatu, N.A.; Burlacu, A. Sensory Substitution for the Visually Impaired:
A Study on the Usability of the Sound of Vision System in Outdoor Environments. Electronics 2021, 10, 1619.
https://doi.org/10.3390/electronics10141619.
107. Hamilton-Fletcher, G.; Alvarez, J.; Obrist, M.; Ward, J. SoundSight: A Mobile Sensory Substitution Device That Sonifies Colour,
Distance, and Temperature. J. Multimodal User Interfaces 2021,16, 107–123. https://doi.org/10.1007/s12193-021-00376-w.
108. Poryzala, P. EEG Measures of Auditory and Tactile Stimulations in Computer Vision Based Sensory Substitution System for
Visually Impaired Users. In Proceedings of the 2018 11th International Conference on Human System Interaction (HSI), Gdańsk,
Poland, 4–6 July 2018; pp. 200–206. https://doi.org/10.1109/HSI.2018.8431353.
109. Precision Microdrives. AB-028: Vibration Motor Comparison Guide. 2021. Available online: https://www.precisionmicrodrives.
com/ab-028 (accessed on 26 January 2022).
110. Electronics, J.M. G1040003D—Product Specification. 2019. Available online: https://vibration-motor.com/coin-vibration-
motors/coin-linear-resonant-actuators-lra/g1040003d (accessed on 26 January 2022).
111. Seim, C.; Hallam, J.; Raghu, S.; Le, T.A.; Bishop, G.; Starner, T. Perception in Hand-Worn Haptics: Placement, Simultaneous Stimuli,
and Vibration Motor Comparisons; Technical Report; Georgia Institute of Technology: Atlanta, GA, USA, 2015.
112. Huang, H.; Li, T.; Antfolk, C.; Enz, C.; Justiz, J.; Koch, V.M. Experiment and Investigation of Two Types of Vibrotactile Devices. In
Proceedings of the 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Singapore,
26–29 June 2016; pp. 1266–1271. https://doi.org/10.1109/BIOROB.2016.7523805.
113. Zuur, A.F.; Ieno, E.N.; Walker, N.; Saveliev, A.A.; Smith, G.M. Mixed Effects Models and Extensions in Ecology with R; Statistics for
Biology and Health; Springer: New York, NY, USA, 2009. https://doi.org/10.1007/978-0-387-87458-6.