Designing AR Visualizations to Facilitate Stair Navigation
for People with Low Vision
Yuhang Zhao1, Elizabeth Kupferstein1, Brenda Veronica Castro1,
Steven Feiner2, Shiri Azenkot1
1Jacobs Technion-Cornell Institute, Cornell Tech,
Cornell University, New York, NY, USA
{yz769, ek544, bvc5, shiri.azenkot}@cornell.edu
2Department of Computer Science, Columbia
University, New York, NY, USA
feiner@cs.columbia.edu
ABSTRACT
Navigating stairs is a dangerous mobility challenge for peo-
ple with low vision, who have a visual impairment that falls
short of blindness. Prior research contributed systems for
stair navigation that provide audio or tactile feedback, but
people with low vision have usable vision and don’t typically
use nonvisual aids. We conducted the first exploration of
augmented reality (AR) visualizations to facilitate stair nav-
igation for people with low vision. We designed visualiza-
tions for a projection-based AR platform and smartglasses,
considering the different characteristics of these platforms.
For projection-based AR, we designed visual highlights that
are projected directly on the stairs. In contrast, for smart-
glasses that have a limited vertical field of view, we designed
visualizations that indicate the user’s position on the stairs,
without directly augmenting the stairs themselves. We eval-
uated our visualizations on each platform with 12 people
with low vision, finding that the visualizations for projec-
tion-based AR increased participants’ walking speed. Our
designs on both platforms largely increased participants’
self-reported psychological security.
Author Keywords
Accessibility; augmented reality; low vision; visualization.
ACM Classification Keywords
Human-centered computing~Mixed / augmented real-
ity; Accessibility technologies.
INTRODUCTION
As many as 1.2 billion people worldwide have low vision, a
visual impairment that cannot be corrected with eyeglasses
or contact lenses [11, 72]. Unlike people who are blind, peo-
ple with low vision (PLV) have functional vision that they
use extensively in daily activities [73, 74]. Low vision can
be attributed to a variety of diseases (e.g., glaucoma, diabetic
retinopathy) and affects many visual functions including vis-
ual acuity, contrast sensitivity, and peripheral vision [21].
Stair navigation is one of the most dangerous mobility chal-
lenges for PLV [5]. With reduced depth perception and pe-
ripheral vision [45, 56], PLV have difficulty detecting stairs
or perceiving the exact location of stair edges [86]. As a re-
sult, PLV experience higher rates of falls and injuries than
their typically-sighted counterparts [5, 13].
Despite the difficulty they experience, PLV use their residual
vision extensively when navigating stairs [73]. Zhao et al.
[86] found that they looked at contrast stripes (i.e., con-
trasting marking stripes on stair treads) to perceive the exact
location of stair edges; some also observed the trend of the
railing to understand the overall structure of a staircase.
However, sometimes stairs do not have contrast stripes, and
even when they do, their stripes are often not accessibly de-
signed; for example, stripes may have low contrast with the
stairs or be too thin to detect [86]. Today, the only known
tool to assist with stair navigation is the white cane, which
many PLV prefer not to use [86]. Thus, there is a gap in tools
that support PLV in the basic task of stair navigation.
Advances in augmented reality (AR) present a unique oppor-
tunity to address this problem. By automatically recognizing
the environment with computer vision, AR technology has
the potential to generate corresponding visual and auditory
feedback to help people better perceive and navigate the en-
vironment more safely and quickly.
Our research explores AR visualization designs to facilitate
stair navigation by leveraging PLV’s residual vision. Design-
ing visualizations for PLV is challenging [84, 85], especially
for stair navigation, a dangerous mobility task. On one hand,
the visualizations should be easily perceivable by PLV. A
visualization that a sighted person can easily see (e.g., a small
arrow) may not be noticeable by PLV: it may be too small
for them to see or outside their visual field [87]. On the other
hand, the visualizations should not be distracting. An extremely large, bright, or animated visualization can distract PLV and hinder their ability to see. This could be dangerous in the context of stair navigation. We sought to design effective visualizations for PLV, which balance visibility and distraction, while providing alternative choices to support a wide range of visual abilities.

Figure 1: Our visualizations for (a) projection-based AR and (b) smartglasses to facilitate stair navigation for PLV.
We designed visualizations on two AR platforms that can
generate immersive virtual content in the physical environ-
ment: projection-based AR and smartglasses. Our designs
considered the different characteristics of the two platforms:
(1) For projection, which can augment a large physical space,
we designed visual highlights with different patterns that are
directly projected onto the stairs to enhance their visibility
(Figure 1a). (2) For smartglasses that have a limited vertical
field of view (FOV), we designed visualizations in the user’s
central FOV to indicate the user’s exact position on the stairs
(Figure 1b).
We evaluated our visualizations on each platform with 12
PLV. We found that the visualizations on both platforms in-
creased participants’ self-reported psychological security.
Our visualizations also changed participants’ behaviors.
Many participants didn’t stare down at the stairs when walk-
ing with our visualizations; some stopped holding the railing.
Moreover, the visualizations on the projection-based AR platform showed a trend toward significantly reducing participants’ walking time.
In summary, we contribute the first exploration of AR visu-
alizations to facilitate stair navigation for PLV. Our evalua-
tions demonstrated the effectiveness of our visualizations
and provide insights for the design of AR visualizations for
PLV that support other tasks as well.
RELATED WORK
Stair Navigation Experiences of PLV
Mobility is critical but challenging for PLV. Many studies
have shown that reduced visual functions hinder mobility [6,
10, 19, 44, 80] and increase the risk of mobility-related acci-
dents [5, 13, 22, 23, 31, 32]. For example, Leat and Lovie-
Kitchin [45] found that visual field loss reduced walking
speed, while reduced visual acuity and contrast sensitivity
impacted distance and depth perception.
Stair navigation is one of the most dangerous mobility chal-
lenges for PLV [5]. Legge et al. [47] found that failing to
detect descending stairs was more dangerous and had a
higher correlation with falls than failing to see obstacles or
ascending stairs. West et al. [80] measured 782 older adults’
visual abilities and collected self-reported mobility limita-
tions. They found that people with low visual acuity and low
contrast sensitivity reported difficulty walking up and down
stairs without help. Bibby et al. [8] also surveyed 30 PLV
about their mobility performance, finding that PLV reported
greater difficulty navigating curbs and descending stairs.
In the human-computer interaction field, researchers also ex-
plored the challenges that PLV face during navigation, in-
cluding navigating stairs. Szpiro et al. [73] observed 11
PLV’s behaviors as they navigated to a nearby pharmacy.
They found that PLV struggled but used their vision exten-
sively, and lighting conditions affected their ability to notice
obstacles and uneven pavement on the ground. Zhao et al.
[86] conducted a more in-depth study observing 14 PLV
walking on different sets of stairs indoors and outdoors. They
found that most participants relied on their vision (e.g., look-
ing at contrast stripes) to navigate stairs. Besides the white
cane, which only four participants used, no technology was
used to assist with this task. Zhao et al.’s study emphasized
the need for tools that facilitate stair navigation for PLV.
Safe Navigation for Blind and PLV
Mobility problems for people who are blind and PLV can be
divided into two categories: wayfinding (i.e., the global
problem of planning and following routes from place to
place) and safe navigation (i.e., the local problem of taking
the next step safely without bumping into things or tripping)
[75]. Most prior research in this general area has focused on
wayfinding, both indoors [3, 27, 35, 38, 46, 62] and outdoors
[4, 12, 15, 29, 50]. Yet walking up and down stairs falls into
the latter category, which has received less attention.
Safe Navigation for Blind People
To facilitate safe navigation, researchers designed obstacle
avoidance systems for people who are blind (e.g., [1, 24, 48,
77]). By detecting obstacles with cameras or range finders,
these systems generated auditory [2, 39, 40, 53, 70, 78] or
tactile feedback [14, 52, 54, 71, 76] to notify blind users of
obstacles and their distance.
Since perceiving stairs is essential for safe navigation, many
obstacle avoidance systems also detected stairs [7, 17, 28,
34]. For example, Bhowmick et al. [7] designed IntelliNavi,
a wearable navigation system that combined a Kinect and an
earphone. With SURF descriptors and an SVM classifier, the
system recognized walls, stairs, and other obstacles and gen-
erated audio messages to safely guide a blind user through
and around these features. Capi and Toda [17] embedded
depth sensors and a PC into a wheeled walker. With the depth
sensors recognizing the environment, the system informed
blind users of the existence and position of obstacles, stairs,
and curbs using verbal directions or beeps. Moreover, Hub et
al. [36] presented an (unimplemented) concept for an indoor
navigation system that provided more specific information
about stairs, such as the number of stairs and the position of
the railing.
In addition to navigation systems, researchers have also pro-
posed stair detection algorithms [20, 30, 57–60, 68, 79]. For
example, Murakami et al. [58] proposed a method that uses
an RGB-D camera to detect stairs. Cloix et al. [20] designed
an algorithm that detected descending stairs with a passive
stereo camera, achieving a 91% recognition rate in real-time.
Perez-Yus et al. [60] proposed a real-time recognition
method that detected, located, and parametrized stairs with a
wearable RGB-D camera, and could even work when the
stairs were partially occluded.
This prior research addressed only auditory feedback for
people who are blind, overlooking PLV’s preference to use
their remaining vision. In contrast, our work addresses this
gap by designing AR visualizations to assist PLV in navi-
gating stairs.
Safe Navigation for PLV
There has been little research on navigation systems for low
vision. No work has specifically focused on stairs.
In terms of low-tech tools, some PLV use optical devices to
enhance their visual abilities. Bioptics, monoculars, tele-
scopes, and binoculars are used for recognizing signs and ob-
stacles at a distance [81]. Some PLV occasionally use prisms
that are ground into glasses to expand their FOV. However,
these specialized tools often stigmatize users in social set-
tings [69]; thus, people avoid using them or abandon them
altogether [25]. Some PLV also use a white cane, especially
at night and in unfamiliar places, but many prefer not using
it because it exposes their disability [86].
Some research has contributed obstacle avoidance systems
for PLV [26, 33, 41, 64]. Everingham et al. [26] designed a
neural-network classification algorithm for a head-worn de-
vice that segmented scenes rendered in front of users’ eyes
and recolored objects to make obstacles more visible. Simi-
larly, Kinateder et al. [41] developed a HoloLens application
that recolored the scene with high contrast colors for PLV
based on the spatial information from the HoloLens. Besides
recoloring the scenes, Hicks et al. [28] and Rheede et al. [64]
built a real-time head-worn LED display with a depth camera
to aid navigation by detecting the distance to nearby objects
and changing the brightness of the objects to indicate their
distances. To our knowledge, our research is the first attempt
to facilitate stair navigation for PLV.
INITIAL EXPLORATION
We sought to facilitate stair navigation by augmenting the
stairs with AR visualizations. In general, there are three types
of AR displays: video see-through, optical see-through, and
projection [88]. For each display type, devices exist (either
commercially or as research prototypes) with different form
factors and device characteristics. For example, a mobile de-
vice can be used as a video see-through AR platform. It is
hand-held with a limited FOV. Considering the different vis-
ual abilities of PLV and our new use case for AR, we did not
know a-priori what AR platform would be most appropriate
for the stair navigation task.
To determine what platforms would be appropriate, we be-
gan by conducting a formative study with 11 PLV (7 female,
4 male; age: 28–70, mean = 40) to evaluate prototype visu-
alizations for a smartphone. A smartphone is a widely used
AR device, so it would be a practical choice with potential
for high immediate impact. We presented the real-time cap-
tured image of the stairs on the phone screen and enhanced
the stair edges with yellow highlights. However, participants
had difficulty perceiving the visualizations on the hand-held
phone screen. They switched their gaze between the phone
and the real stairs, hindering their safety during motion. All
participants said they would prefer an immersive experience
where visualizations are seamlessly incorporated into the
physical environment.
Based on the formative study, we narrowed down our target
platforms to immersive AR platforms, specifically (1) hand-
held projection-based AR, and (2) optical see-through smart-
glasses. These platforms would not require the user to switch
their gaze or hinder their ability to perceive motion [88]. We
designed and evaluated visualizations for both platforms,
given that each platform has its own strength: projection-
based AR can augment large physical surfaces but projects
content publicly, which may be better suited to private places
with few people (e.g., home, workspace); meanwhile, smart-
glasses present information only to the user, which may be
better for crowded public places (e.g., subway stations).
VISUALIZATIONS FOR PROJECTION-BASED AR
We first explored the design space of hand-held projection-
based AR, which combines a camera that recognizes the en-
vironment and a projector that projects visual contents into
that environment [61]. This platform has potential to facili-
tate mobility because it can project over a relatively large
area [88] and provide visual augmentations in people’s pe-
ripheral vision, which is shown to be important for stair nav-
igation [56].
Although there are no popular commercial devices in the
market, researchers have prototyped different hand-held pro-
jection-based AR platforms [16, 18, 63, 82]. With a growing
number of smartphones that have embedded depth sensors
(e.g., iPhone XR, Samsung Galaxy S10) and projectors (e.g.,
Samsung Galaxy Beam [67]), smartphones may support pro-
jection-based AR with depth-sensing capabilities in the near
future. Thus, we designed visualizations for such a projec-
tion-based AR smartphone to augment the stairs for PLV.
Visualization (and Sonification) Design
From an interaction perspective, we aimed to simulate use of
a flashlight, which is commonly used by PLV in dark places
[79]: when a user points the projection-based AR phone at
the stairs, it recognizes several stairs in front of her and pro-
jects visualizations on those stairs in real time (Figure 1a).
Inspired by the contrast stripes that many PLV used to dis-
tinguish stair edges [79], we project highlights on the stair
edges to increase their visibility.
According to Zhao et al. [86], PLV had difficulty detecting
stairs and recognizing the stair edges, especially at a distance.
As a result, they walked slowly, stared down to better see the
current and next stair, and shuffled their feet to feel the stair
edges. We therefore designed our visualizations to help them
perceive the stairs from a greater distance, so they can better
plan and prepare their steps.
To alert users of the presence of stairs as they approach, we
first generate auditory feedback to provide an overview of
the stairs, including the stair direction and number of stairs.
Zhao et al. [86] found that PLV sought this kind of infor-
mation, which at times was difficult to perceive. We provide
three different auditory feedback choices: (1) Sonification
that indicates stair direction: one “ding” sound for going up
and two “ding” sounds for going down, adapted from the
sonic alerts for some elevators; (2) a human voice that ver-
bally reported stair direction and number of stairs: “Ap-
proaching upstairs, 14 stairs going up;” and (3) a combined
sonification and human voice: “ding, approaching upstairs,
14 stairs going up.”
Since locating the first and last stairs was most important but
challenging for PLV [86], we distinguish the first and last
stairs from the rest by projecting thick highlights on them
(Figure 2a), while projecting thin highlights on the middle
stairs (Figure 3a). We call the highlights on the first and last
stairs End Highlights, and we call those on the middle stairs
Middle Highlights. We needed a visible color for these high-
lights that would not be confused with natural light, so we
used yellow.
Beyond these highlights, we sought ways to further empha-
size the first and last stairs so that a user will notice them and
perceive their exact location from a distance. We designed
five animations to achieve this:
(1) Flash: Since a flash can attract people’s attention [83, 84],
we added this feature to the end highlights. The highlights
appear and disappear with a frequency of 1Hz.
(2) Flashing Edge: When the end highlight flashes, the user
may lose track of the edge position when the highlight dis-
appears. So in this design, we kept a stable line at the stair
edge while flashing the rest of the highlighted strip (Figure
2b). The flash occurs at a frequency of 1Hz.
(3) Moving Edge: Movement also attracts attention [51].
With a stable line at the stair edge, we added another line
moving towards the edge to generate movement (Figure 2c).
(4) Moving Horizontal Zebra: Since movement can be dis-
tracting [84], we designed a more subtle movement effect with
a yellow and black zebra pattern moving back and forth at a
frequency of 1Hz (Figure 2d).
(5) Moving Vertical Zebra: Moving the highlight over the
edge of the stair may distort the perceived location of the
edge, so we also designed a zebra pattern that is perpendicu-
lar to the edge (Figure 2e).
Since a staircase typically has stairs of uniform size, the mid-
dle stairs usually do not require much of the user’s attention.
We designed two middle highlights to support the user in a
minimally obtrusive way.
(1) Dull Yellow Highlights: We reduced the lightness of the
original highlights on the middle stairs to 60% to make them
less obtrusive than the end highlights (Figure 3b).
(2) Blue Highlights: We set the middle highlights to blue
since it has a lower contrast with the stairs but still enhances
their visibility [87] (Figure 3c).
To support a range of visual abilities, the design alternatives
can be selected and combined by a user to optimize her ex-
perience for a particular environment.
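To make the composition of these alternatives concrete, the sketch below shows one way the per-stair highlight style and the 1 Hz flash timing could be expressed. The names and parameters are illustrative, not taken from the authors' prototype.

```python
from dataclasses import dataclass

@dataclass
class HighlightStyle:
    color: str       # "bright_yellow", "dull_yellow", or "blue"
    thickness: str   # "thick" for End Highlights, "thin" for Middle Highlights
    animation: str   # "none", "flash", "flashing_edge", "moving_edge", "h_zebra", "v_zebra"

def style_for_stair(index, num_stairs, end_animation="none", middle_color="bright_yellow"):
    """Map each stair to an End or Middle Highlight per the user's chosen alternatives."""
    if index in (0, num_stairs - 1):
        return HighlightStyle("bright_yellow", "thick", end_animation)
    return HighlightStyle(middle_color, "thin", "none")

def flash_on(elapsed_s, frequency_hz=1.0):
    """1 Hz flash: the highlight is visible during the first half of each cycle."""
    return (elapsed_s * frequency_hz) % 1.0 < 0.5

# Example: an 8-stair staircase with Flashing Edge on the ends and blue middle highlights.
styles = [style_for_stair(i, 8, end_animation="flashing_edge", middle_color="blue")
          for i in range(8)]
```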
Evaluation of Projection-Based AR Visualizations
We evaluated the visualizations for projection-based AR,
aiming to answer three questions: (1) How do PLV perceive
the different visualization designs? (2) How useful are the
visualizations for stair navigation? (3) How secure do people
feel when using our visualizations?
Method
Participants. We recruited 12 PLV (6 female, 6 male; mean
age=53.9) with different low-vision conditions, as shown in
Table 1 (P1–P12). Eleven participants (all except P3) were
registered as legally blind, meaning that either (1) their best-
corrected visual acuity in their better eye was 20/200 or
worse, or (2) their visual field was 20° or less. We conducted a
phone screen to ensure participants were eligible.
Apparatus. The study was conducted at an emergency exit
staircase with eight stairs. To minimize the confounding ef-
fect of computer vision accuracy, we prototyped our design
with a Wizard of Oz protocol [65]. This involved mounting
a stationary projector on a tripod at the top of the set of stairs.
The projector was connected to a laptop that generated the
visualizations. We created all visualizations with Power-
Point. A researcher sat in front of the laptop to control the
visualizations manually, based on the participant’s position
and orientation (facing upstairs or downstairs). To simulate
the limited projection area of a handheld projector, we pro-
jected visualizations only on the three stairs in front of the
participant (Figure 1a).
Figure 3. Middle highlights: (a) Initial thin highlights with
bright yellow; (b) Dull Yellow Highlights; (c) Blue Highlights.
Figure 2: End highlights for first and last stairs. (a) Initial thick highlight with bright yellow; (b) Flashing Edge: the highlight
switches between thick (b1) and thin (b2); (c) Moving Edge; (d) Moving Horizontal Zebra; (e) Moving Vertical Zebra.
We asked the participant to hold a regular phone with the
back camera facing the stairs, assuming the projected visual-
izations were from the smartphone. We also implemented the
auditory feedback on the smartphone. One researcher con-
trolled the audio feedback with another smartphone via TCP.
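The paper does not describe the wizard's control software; below is a minimal sketch of how one phone could trigger audio playback on another over TCP. The port number, message strings, and play_audio callback are hypothetical.

```python
import socket

PORT = 5005  # hypothetical port

def send_cue(host, cue="approach_up_14"):
    """Wizard side: send a short text command to the participant's phone."""
    with socket.create_connection((host, PORT)) as conn:
        conn.sendall(cue.encode("utf-8"))

def serve_cues(play_audio):
    """Participant side: wait for commands and play the matching sound or voice message."""
    with socket.create_server(("", PORT)) as server:
        while True:
            conn, _ = server.accept()
            with conn:
                cue = conn.recv(1024).decode("utf-8")
                play_audio(cue)  # e.g., map "approach_up_14" to a ding and/or spoken message
```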
Procedure. The study consisted of a single session that lasted
1.5 hours. We started the session with an interview, asking
each participant about their demographics, visual condition,
and technology use when navigating stairs. A licensed op-
tometrist conducted a confrontation visual field test and a
visual acuity test using a Snellen chart (Table 1). After the
interview, we walked the participant to the staircase and con-
tinued the study with a visualization experience session and
a stair navigation session.
During the visualization experience, we gave the participant
our prototype smartphone and explained how to use it. The
participant experienced our design in three phases: (1) Audi-
tory feedback when approaching the stairs, with three alter-
natives: sound, human voice, and the combination of them;
(2) End highlights on the first and last stairs with six design
alternatives (Figure 2); and (3) Middle highlights on the mid-
dle stairs with three design alternatives (Figure 3).
In each phase, we presented all design options to the partici-
pant and asked about their experiences, including whether or
not they liked the design, whether the design distracted them
from seeing the environment, and how they wanted to im-
prove it. For each design option, participants were encour-
aged to walk up and down the stairs. To avoid order effects,
we randomized the order of the design alternatives.
After the participant experienced all design alternatives in all
three phases, we asked them to select one alternative from
each phase to create a preferred combination. Participants
used this combination for the stair navigation portion.
During the stair navigation portion of the study, participants
conducted two stair navigation tasks: walking upstairs and
walking downstairs. They conducted each task in two condi-
tions: (1) walking in their original way (participants could
use a cane if desired, but nobody chose to use it); (2) walking
using our prototype with their preferred combinations. They
repeated each task in each condition five times.
We indicated the start points with yellow stickers on the
landings, three feet away from the top and bottom stairs. For
each task, participants stood at the starting point and started
the walking task when the researcher said, “Start.” The task
ended when both their feet first touched the landing. Partici-
pants were asked to walk as quickly and safely as possible.
We recorded the time for each task.
To reduce order effects, we used a simultaneous within-sub-
jects design, switching the task condition after each walking
up and down task. We counterbalanced the starting task
(up/down) and condition (with/without the prototype).
We ended the study with an exit interview, asking about the
participant’s general experience with the prototype. They
also gave Likert-scale scores for the usefulness and comfort
level of the prototype, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
ID | Gender | Legally Blind | Diagnosis | Visual acuity (Left Eye) | Visual acuity (Right Eye) | Visual acuity (Both Eyes) | Visual field (Left Eye) | Visual field (Right Eye)
P1¹ | F | Yes | Retinopathy of Prematurity; Glaucoma | 20/400 | 20/1333 | 20/400 | Inferior constriction | All fields constriction
P2¹ | F | Yes | Retinitis Pigmentosa | 20/200 | 20/140 | 20/140 | Full | Full
P3¹ | F | No | Doyne Macular Dystrophy; Glaucoma | 20/40 | 20/140 | 20/40 | Full | Full
P4¹ | M | Yes | Glaucoma | 20/500 | 20/800 | 20/400 | Inferior nasal and superior temporal fields constriction | Constricted in superior fields
P5¹ | M | Yes | Achromatopsia | 20/400 | 20/400 | 20/200 | Full | Full
P6¹,² | F | Yes | Posterior Uveitis | 20/400 | 20/400 | 20/400 | All fields constriction | Temporal field constriction
P7¹,² | M | Yes | Flecked Retina Syndrome | 20/400 | 20/400 | 20/400 | Inferior nasal field constriction | Full
P8¹,² | F | Yes | Stargardt’s | 20/140 | 20/140 | 20/140 | Full | Full
P9¹,² | M | Yes | Albinism with nystagmus | 20/200 | 20/200 | 20/200 | Full | Full
P10¹,² | M | Yes | Steven Johnson's Disease | 20/700 | 20/200 | 20/140+ | Inferior temporal field constriction | Full
P11¹,² | F | Yes | Stargardt’s | 20/500 | 20/800 | 20/500 | All fields constriction | All fields constriction
P12¹,² | M | Yes | Macular Degeneration (Juvenile) | 20/500 | 20/400 | 20/200 | Full | Full
P13² | M | Yes | Brain Tumor Removal age 2 | 20/200 | 20/200 | 20/200 | Inferior nasal fields constriction | Inferior nasal fields constriction
P14² | M | Yes | Achromatopsia (cone monochromatism) | 20/140 | 20/140 | 20/140 | Full | Full
P15² | M | Yes | Stargardt’s | 20/400 | 20/200 | 20/200 | Full | Full
P16² | F | Yes | Glaucoma | 20/50 | 20/400 | 20/40 | All fields constriction | All fields constriction
P17² | F | Yes | Diabetic Retinopathy | 20/40 | 20/25 | 20/25 | Inferior and superior nasal fields constriction | Inferior and superior nasal fields constriction

Table 1. Participant demographic information. Participants labeled with superscript ‘1’ were in the study for projection-based AR, while those labeled with superscript ‘2’ were in the study for smartglasses.

Analysis. We analyzed the effect of our visualizations on participants’ walking time when navigating stairs. Our experiment had one within-subject factor, Condition (Visualizations, No Visualizations), and one measure, Time. We defined a Trial (1–5) as one walking task. To validate counterbalancing, we added another between-subject factor, Order (two levels: With–Without, Without–With), into our model. An ANOVA found no significant effect of Order on walking time (downstairs: F(1,10)=0.108, p=0.749; upstairs: F(1,10)=0.007, p=0.937) at α = 0.05.
We analyzed the participants’ qualitative feedback by coding
the interview transcripts based on grounded theory [66].
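As a sketch of the walking-time analysis (assuming each participant's mean time per condition has already been computed; the paper does not state which software or which variant of Cohen's d was used, so this is only one plausible reading):

```python
import numpy as np
from scipy import stats

def condition_effect(time_with, time_without):
    """Paired t-test of Condition on walking time, plus Cohen's d on the paired differences."""
    time_with = np.asarray(time_with, dtype=float)
    time_without = np.asarray(time_without, dtype=float)
    t, p = stats.ttest_rel(time_with, time_without)
    diff = time_with - time_without
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# Usage: one mean time (s) per participant and condition, e.g., for the 12 participants:
# t, p, d = condition_effect(with_vis_means, without_vis_means)
```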
Results
Effectiveness of the Visualizations (and Sonification). All
participants felt our design was helpful and “[would make]
life easier” (P4), especially in relatively dark environments,
such as subway stations. They liked the idea of projecting
highlights on the stair edges to simulate the physical contrast
stripes. P9 said, “Having [the highlights] this bright is really good. Because usually [the contrast stripes] are painted, and they’re about to fade out, and they’re not as vibrant and bright as this is. This is great here because you can see it.”
Participants gave high scores to the usefulness and comfort
level of the visualizations, as shown in Figure 4.
Next, we report participants’ responses on all design alterna-
tives in the three design phases.
(1) Auditory feedback when approaching stairs. Four partic-
ipants chose the human voice since they felt it was friendlier
and more informative, reporting the number of stairs. Mean-
while, three participants (P2, P8, P7) chose the nonverbal
sound because they had relatively good vision and felt the
human voice was unnecessary. The other participants pre-
ferred the combination, feeling that the sound and human
voice complemented each other: the “ding” sound was an
alert in noisy environments and the human voice reported
more concrete information.
(2) End highlights. All participants felt that the end high-
lights were an important aspect of the design. “This is the
part where I probably trip the most, on that last step. The light
[end highlights] is really important because it defines the end
of the step, so you’re not gonna miss a step” (P5).
Although we provided different visualizations (flash or
movement) to further enhance the end highlights, most par-
ticipants (seven out of 12) liked the original design. They felt
the thickness and brightness of the highlights sufficiently at-
tracted their attention and flashes and movements distracted
them. As P7 explained: “I guess because I don’t see details,
when I see things moving, I kind of get the sense of not see-
ing it correctly. I prefer just still… You’ve got the thick [end
highlights] to distinguish from the thin [middle highlights].
This is nice.”
Three participants (P6, P4, P11) felt the flash effect grabbed
their attention more and alerted them. P6 and P4 preferred
the Flashing Edge since it helped them better track the stair
edges than the Flash. However, P11 preferred the Flash since
the thin stable highlight of the Flashing Edge gave her an
illusion of “another small step” (P11).
Two participants (P2, P3) liked the Moving Vertical Zebra
the most. They felt that the movement attracted their atten-
tion and the vertical zebra pattern also labeled the stair edges.
However, none of the participants liked the Moving Horizon-
tal Zebra since the parallel movement to the stair edge dis-
torted its appearance.
Although no participants chose the Moving Edge in the
study, P6 felt it could be helpful since it indicated direction.
She explained that “at least it shows you where to go.” How-
ever, most participants found it overwhelming; it made them
feel like “the ground is going to move” (P9).
(3) Middle highlights. Eleven out of 12 participants found the
middle highlights useful. Projecting highlights onto the next
several steps gave participants a preview of the stairs and
helped them better prepare their steps, especially when there
were abnormal stairs. As P5 said,
“So you don’t have to guess what’s coming [with the middle
highlights]. Sometimes you can have a broken step, you can
have no step, or you can have a step that was not installed
properly. Sometimes staircases were defective and the dis-
tance between some of them is not even… With the [high-
lights], you can see the definition of the steps.” (P5)
Even on a typical set of stairs, participants wanted the middle
highlights to confirm that they are still on the stairs, which
made them feel safe. “It’s better with [the] lines. So I know
that this won’t be my final step” (P10).
In terms of color, most participants preferred the bright yel-
low (seven out of 12), wanting to be alert on each step. “The
yellow gives me more alert and the blue gives me a little bit
more of a relaxed mode. But when I go up and down the
steps, I wanna be alert” (P5).
Figure 4: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7)
for usefulness, comfort level, and psychological security when using visualizations on projection-based AR. We label the
mean and SD under each category.
Meanwhile, four participants felt that the middle highlights
should be a different color from the end highlights. Three
participants liked the blue color since “it’s not as attracting
as yellow but still sticks out” (P9). No one liked the dull yel-
low since it was too subtle. One participant wanted red.
P6 was the only one who did not want the middle highlights.
She felt it unnecessary since she could walk on stairs
knowing the position of the first stair and the number of stairs
(she counted stairs). The middle highlights distracted her
from seeing her surroundings.
Walking Time. Our visualizations reduced the time participants spent during stair navigation. For descending stairs, participants’ navigation time was 6.42% shorter when using their preferred visualizations (mean=6.17s, SD=1.93s) than when not using them (mean=6.59s, SD=2.03s). With a paired t-test, we found a considerable trend towards significance when evaluating the effect of Condition on the time walking downstairs (t(11)=-2.131, p=0.0565), with an effect size of 0.615 (Cohen’s d). P11 remarked on the increase in her speed: “This is the fastest I’ve used stairs ever! You don’t understand, this is like I’m back to being me!”

For ascending stairs, participants’ navigation time was 5.78% shorter when using their preferred visualizations (mean=5.84s, SD=1.59s) than when not using them (mean=6.20s, SD=1.81s). With a paired t-test, we also found a trend towards a significant effect of Condition on the time walking upstairs (t(11)=1.9894, p=0.0721).
Behavior Change. Based on our observations of the walking
tasks, some participants (e.g., P9, P4) looked down less when
using our design since they could use their lower peripheral
vision to notice the highlights. As P9 mentioned, “I know mentally I’m looking in the bottom field of vision, even though I’m looking straight ahead… The [highlight] stands out very bright and my peripheral catches it, it catches blue, it catches the yellow… Without the system, I have to stare a lot more at the stairs and I have to look a little bit extra to make sure that that is really the last step.”
Some participants (e.g., P6, P3, P11) hesitated at the first and
last stairs and felt the stairs with their feet when walking
without our visualizations (especially in the first two trials of
the walking tasks). When using our visualizations, they
stopped feeling the stairs with their feet. Some participants
(e.g., P7, P11) walked without holding the railing when using
our visualizations. P10 also changed how he balanced his
body when using our prototype: without our design, he
walked down leaning his left shoulder forward instead of fac-
ing forward. He explained:
“I noticed when [I walked] without the [highlights], I’m walking more down on my side when descending the stairs. In case if I fall, then I fall at least more on my side as opposed to falling forward. With the [highlights] on, I was walking more straight down. I feel a lot more confident” (P10).
Psychological Security. Our visualizations improved participants’ psychological security when walking on stairs. Par-
ticipants all gave high scores to their psychological security
when using our prototype (mean=6.6, SD=0.67), as shown in
Figure 4. They all felt more confident and safer when navi-
gating stairs with the projected visualizations. P6 and P8 also
said that the visualizations reduced their visual effort, so that
they could look at the surroundings (e.g., other people and
obstacles on the stairs), which also helped them feel safe.
Social Acceptance. Most participants were not concerned
about projecting highlights on stairs. They felt this technol-
ogy was “cool” and could even be beneficial for people who
are sighted, for example, in dark environments. P11 regarded
the prototype as an identity tool (similar to the identity
cane), which could indicate her disability to others, so that
other people won’t bump into her on stairs. Only P6 and P9
were concerned that this technology might “scare others” and
draw too much attention to themselves. They preferred de-
vices, such as smartglasses, that would show the visualiza-
tions only to them.
VISUALIZATIONS FOR SMARTGLASSES
The second platform we explored was optical see-through
smartglasses. They present information only to the user and
do not need to project onto a physical surface [88]. Today,
this platform is more readily available than projection-based
AR. Beyond smartglasses prototypes developed by research-
ers [9, 42, 43], many early versions of products, such as Mi-
crosoft HoloLens [55] and Magic Leap One [49], mark a
trend towards mainstream smartglasses devices.
However, current optical see-through smartglasses have a
very limited FOV [88] (e.g., ca. 30° wide × 17° high for Ho-
loLens v1), largely limiting the area for presenting AR visu-
alizations. While the recently announced HoloLens v2 is es-
timated to have a 29° vertical FOV, it is still much smaller
than that of a typically-sighted human (120° vertical FOV).
With the limited vertical FOV, the highlight design on pro-
jection-based AR would not work well for the smartglasses.
To see the highlight on the current stair (Figure 5a), a user
would have to look nearly straight down to her feet (Figure
5b), hindering her ability to see her surroundings. This can
be potentially dangerous and is physically strenuous. As
such, our visualizations aim to facilitate a comfortable head
pose by indicating the user’s exact location on the stairs with-
out augmenting the stairs directly.
Figure 5. (a) The visual effect of adding highlights to stairs
with HoloLens. (b) A user stares down to see the highlights.
Visualization (and Sonification) Design
Similar to projection-based AR, when the user stands on the
landing, our system verbally notifies the user of the existence
of the stairs with stair direction and the number of stairs.
According to Zhao et al.’s study, knowing when the stairs
start and end can help PLV plan their steps, while the middle
stairs are less important because most stairs are uniform [86].
Thus, to better inform the user of their position on the stairs,
we distinguish a user’s position on a set of stairs based on
how close she is to a change in her step pattern. This change
can involve stepping down for the first time after walking on
a flat surface or stepping on a flat surface after stepping down
repeatedly. We provide feedback to indicate that a change is
approaching, and then that the change is about to occur.
Specifically, the following are the seven stages we used in
our design, described for descending stairs as an example
(Figure 6): (1) Upper landing: the flat surface that is more
than 3' away from the edge of the top stair; (2) Upper prepa-
ration area: 1.5'–3' away from the top stair edge where the
person should prepare to step down; (3) Upper alert area:
within 1.5' from the top stair edge where the person’s next
step would be stepping down; (4) Middle stairs: between the
edge of the top stair and the edge of the second-to-last stair,
where the person is stepping down repeatedly; (5) Lower
preparation area: the last stair, where the person is one step
away from the flat surface and should prepare for the immi-
nent flat surface; (6) Lower alert area: within 1.5' from the
last stair edge on the landing where the person’s next step is
on the flat surface (not stepping down); (7) Lower landing: more than 1.5' away from the last stair edge, where the person is walking on flat surface again. Our visualizations inform PLV of the different stair stages via different designs. We designed two visualizations and one sonification.
(1) Glow visualization (Figure 7a–d): We generate a glow
effect at the bottom of the display to simulate the experience
of seeing the edge highlights on the stairs with peripheral vi-
sion. Unlike the highlights that are attached to the stair edges,
the glow is always at the bottom of the vertical FOV, so that
the user can hold their head at a comfortable angle and does
not need to look down to see the glow. We adjust the glow
color and size to inform the user of their current stage on the
stairs:
Landing stages: thin red glow to indicate the flat surface.
Preparation stages: thick cyan glow, telling users to pre-
pare for the first surface level change or the end of surface
level changes.
Alert stages: thick yellow glow, indicating that the next
step is the first surface level change or the end of surface
level changes.
Middle stairs: thin blue glow to indicate the middle stairs.
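A minimal sketch of the stage logic and the Glow mapping, assuming the user's tracked position relative to the staircase is available; the thresholds follow the stage definitions above, and the function and field names are ours, not the authors':

```python
def descent_stage(position):
    """Classify the seven stages for descending stairs from the tracked position.

    `position` holds: on_stairs and on_last_stair (booleans), to_top_edge_ft (distance
    remaining before the top stair edge, or None), and past_bottom_edge_ft (distance
    walked beyond the last stair edge, or None).
    """
    if position["on_stairs"]:
        return "lower_preparation" if position["on_last_stair"] else "middle_stairs"
    if position["past_bottom_edge_ft"] is not None:  # already past the stairs
        return "lower_alert" if position["past_bottom_edge_ft"] <= 1.5 else "lower_landing"
    d = position["to_top_edge_ft"]  # still approaching the stairs
    if d > 3.0:
        return "upper_landing"
    return "upper_preparation" if d > 1.5 else "upper_alert"

# Stage -> (color, thickness) of the glow drawn at the bottom of the display.
GLOW = {
    "upper_landing": ("red", "thin"),        "lower_landing": ("red", "thin"),
    "upper_preparation": ("cyan", "thick"),  "lower_preparation": ("cyan", "thick"),
    "upper_alert": ("yellow", "thick"),      "lower_alert": ("yellow", "thick"),
    "middle_stairs": ("blue", "thin"),
}
```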
(2) Path visualization (Figure 7e–g): Inspired by the railings,
which PLV used as a visual cue to see where the stairs start
and end [86], we designed this visualization to show the trend
of the stairs. The direction of the Path follows the stairs: it
goes straight forward along the landing, turns down (or up)
along the slope of the descending (or ascending) stairs, and
goes straight forward again when arriving at the landing. The
Path is generated at the user’s eye level with a fixed distance
from one side of the head (we adjusted its specific position
based on the user’s visual field and preference), making sure
that they can see it without looking too far down. The user
can thus observe the start and end of the stairs by looking at
the turning points of the Path. To better distinguish the land-
ing and the stairs, we colored the straight part of the visuali-
zation (over the landing) yellow and the slope blue. We
added virtual pillars to connect the Path to each stair to help
users associate the visualization with the physical stairs.
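As a rough geometric sketch of the Path (our own parameterization; the authors do not give exact dimensions), the polyline runs level over the upper landing, slopes with the stairs, and levels out over the lower landing. It would then be offset to the user's eye level and to one side, with pillars dropped to each stair.

```python
def path_polyline(num_stairs, tread_ft=1.0, riser_ft=0.6, landing_ft=4.0):
    """Vertices (x = forward distance in ft, y = height in ft) for a descending staircase."""
    top_y = num_stairs * riser_ft
    run = num_stairs * tread_ft
    return [
        (-landing_ft, top_y),    # straight (yellow) segment over the upper landing
        (0.0, top_y),            # turning point at the top stair edge
        (run, 0.0),              # sloped (blue) segment following the stairs
        (run + landing_ft, 0.0)  # straight (yellow) segment over the lower landing
    ]
```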
(3) Beep sonification: This sonification informs users of their
current position on the stairs. Similar to glow, we adjusted
the sound based on the different stages of the stairs:
Start landing stage: no sound.
Preparation stages: low-frequency beep, indicating users
should prepare for the first surface level change or the end
of surface level changes.
Alert stages: high-frequency beep, indicating that the next
step is the first surface level change or the end of surface
level changes.
Middle stairs: no sound.
End landing stage: audio description that verbally reports
“Stair ends.”
Evaluation of Smartglasses Visualizations
We conducted a user study to evaluate the visualizations we
designed for commercial smartglasses. We aimed to answer: (1) How do PLV perceive the visualizations on smartglasses? (2) How effective are the visualizations for stair navigation? (3) How secure do PLV feel when using our visualizations?

Figure 6: The seven stages of the stairs.

Figure 7: Glow (a–d) and Path (e–g). Glow: (a) thin red glow on the landing; (b) thick cyan glow in the preparation area; (c) thick yellow glow in the alert area; (d) thin blue glow on the middle of the stairs. Path: (e) view of the Path on the landing; (f) view of the Path when getting close to the first stair; (g) view of the Path on the middle of the stairs.
Method
Participants. We recruited 12 PLV (5 female, 7 male; mean
age=51.6) with different low vision conditions (Table 1, P6–P17). All participants were legally blind. Seven participants
had taken part in the evaluation of our projection-based AR
visualizations, but they did not see the stairs used in this
study. We followed the same recruitment procedures as in
the previous study.
Apparatus. We built our prototype on Microsoft HoloLens
v1. We chose HoloLens because of its FOV (~34° diagonal),
binocular displays, and ability to be worn with eyeglasses.
Many lightweight smartglasses have only one display in
front of the right eye (e.g., Google Glass, North Focals), and
are unusable for PLV with vision only in the left eye. Other
options either have a smaller FOV (e.g., Epson Moverio BT-
300, 23° diagonal) or cannot be used with eyeglasses (e.g.,
Magic Leap One).
To minimize the confounding effect of general computer vi-
sion accuracy, we marked the position of the stairs with two
Vuforia image targets [37] (on the side walls at the top and
bottom landing of the stairs) that can be recognized by Ho-
loLens. This provided an anchor in the environment, which
enabled our application to determine the position of the user
on the stairs by tracking the motion of the HoloLens, improv-
ing the accuracy of our visualizations and sonification.
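A simplified sketch of how the image-target anchor could be used to localize the wearer on the staircase; we assume the anchor defines a frame whose origin sits at the top stair edge with x pointing along the walking direction, and all names and dimensions are illustrative. Its output matches the position dictionary used in the stage sketch above.

```python
def locate_on_stairs(head_x_ft, num_stairs, tread_ft=1.0):
    """Estimate the wearer's staircase region from the headset position in the anchor frame."""
    if head_x_ft < 0:  # still on the upper landing, approaching the top edge
        return {"on_stairs": False, "to_top_edge_ft": -head_x_ft, "past_bottom_edge_ft": None}
    run = num_stairs * tread_ft
    if head_x_ft >= run:  # past the last stair edge, on the lower landing
        return {"on_stairs": False, "to_top_edge_ft": None,
                "past_bottom_edge_ft": head_x_ft - run}
    stair_index = int(head_x_ft // tread_ft)
    return {"on_stairs": True, "on_last_stair": stair_index == num_stairs - 1,
            "to_top_edge_ft": None, "past_bottom_edge_ft": None}
```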
Procedure. The study consisted of a single session that lasted
about 1.5–2 hours. An initial interview asked about de-
mographics, visual condition, and use of tools when navi-
gating stairs. Next, a licensed optometrist on the team con-
ducted a confrontation visual field test and a visual acuity
test using a Snellen chart (Table 1). We then gave the Ho-
loLens to the participant and explained how to use it. After
the participant put on the HoloLens, the optometrist tested
her visual field and visual acuity again to measure the effect
of the HoloLens on the participant’s visual ability. We con-
tinued the study with a design exploration session and a stair
navigation session.
We conducted the design exploration session at an emer-
gency staircase with 12 stairs (different stairs than those in
the projection study). Participants wore the HoloLens and
experienced four different designs: Glow, Path, Beep, and
Edge Highlights as a baseline. Participants were allowed to
walk up and down the stairs to experience the design in-situ.
They thought aloud, talking about whether or not they liked
the design, whether the design distracted them, and how they
wanted to improve it. We counterbalanced by randomizing
the presentation order of the four designs. After the partici-
pant experienced all the design alternatives, we asked for
their preferred combination.
The stair navigation session was conducted at another stair-
case with 14 stairs, a wider set of access stairs in a more
brightly lit and open environment. Participants performed
two stair navigation tasks: walking upstairs and walking
downstairs. They conducted each task in three conditions: (1)
walking on the stairs as they typically would (they could use
a cane if desired, but none chose to use it), (2) walking on the
stairs with HoloLens and no visualizations, and (3) walking
on the stairs with HoloLens and their chosen designs. Each
task in each condition was repeated five times.
We indicated the start and end points on the stairs with stick-
ers that were three feet away from the top and bottom steps
on the landings. For each task, the participant stood at the
starting point and started when the researcher said, “Start.”
The task ended when both her feet first arrived at the landing.
Participants were asked to walk as quickly and safely as pos-
sible during the task. We recorded the time for each task.
To reduce the effect of order on the results, we used a simul-
taneous within-subjects design by switching the task condi-
tion after each round of walking up and down. We also coun-
terbalanced the starting task (up/down) and the conditions.
The study ended with a final interview asking about the par-
ticipant’s general experience with the prototype. We asked
them to score the usefulness and comfort level of the proto-
type on a Likert scale, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
Analysis. We analyzed the effect of our visualizations on
participants’ walking time when navigating stairs. Our ex-
periment had one within-subject factor, Condition (No Ho-
loLens; HoloLens w/o visualizations; Visualizations), and
one measure, Time. We defined a Trial (1–5) as one walking
task. We determined Time from the video we recorded dur-
ing the study. When analyzing data, we removed the first trial,
treating it as a practice trial for participants to get used to the
HoloLens.
To validate counterbalancing, we added another between-
subject factor, Order (six levels based on the three condi-
tions), into our model. An ANOVA found no significant ef-
fect of Order on walking time (downstairs: F(5,6)=0.35,
p=0.338; upstairs: F(5,6)=0.445, p=0.804) and no significant
effect of the interaction between Order and Condition on
walking time (downstairs: F(10,12)=1.418, p=0.280, upstairs:
F(10, 12)=0.535, p=0.835).
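As a sketch of this analysis (assuming a long-format table of per-trial times; the authors do not name their statistical software, so pingouin is used here purely for illustration):

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per (pid, order, condition, trial).
df = pd.read_csv("smartglasses_walking_times.csv")  # columns: pid, order, condition, trial, time_s

# Average the five trials per cell before running the repeated-measures models.
agg = df.groupby(["pid", "order", "condition"], as_index=False)["time_s"].mean()

# Within-subject effect of Condition (No HoloLens / HoloLens only / Visualizations).
rm = pg.rm_anova(data=agg, dv="time_s", within="condition", subject="pid")

# Counterbalancing check: mixed ANOVA with Order as a between-subject factor.
mixed = pg.mixed_anova(data=agg, dv="time_s", within="condition",
                       subject="pid", between="order")
print(rm, mixed, sep="\n\n")
```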
We analyzed participants’ qualitative responses with the
same method we used in the previous study.
Results
Experience with the Smartglasses. We first report the effect
of the HoloLens on participants’ visual abilities. Some par-
ticipants appreciated the tinted optics because they blocked
environmental glare. Three participants’ visual acuity im-
proved when wearing the HoloLens (P14: from 20/140 to
20/100, P7: from 20/400 to 20/200, P15: from 20/200 to
20/140). However, P12 experienced a decrease in visual acu-
ity (from 20/200 to 20/400). It is possible that the tint of the
HoloLens made the environment too dark for him to see. In
terms of visual field, no participants experienced a change
while wearing the HoloLens. All participants mentioned the
heaviness of the hardware, which potentially impacted their
experience negatively.
Effectiveness of the visualizations (and sonification). We
report participants’ feedback on each design alternative.
(1) Edge Highlights (Baseline). Most participants found it
difficult to use the Edge Highlights because of the limited
vertical FOV. Participants had to angle their head down a lot
to see the highlight on the current stair. They found it uncom-
fortable and unsafe to maintain that posture on the stairs, es-
pecially when walking down. P9 reported that, “To continue
seeing everything, my head has to be completely [down], my
chin is touching my chest.”
Nevertheless, some participants (e.g., P6, P10, P13) felt this
design was helpful because it provided a preview for future
steps, especially when they looked downstairs from the top
landing. Interestingly, P10 mentioned that he could combine
his own vision (that is not covered by the HoloLens) with the
Edge Highlights. He didn’t feel the need to look down all the
time because he has good peripheral vision to see the stairs,
and he could use the Edge Highlights on the HoloLens to
prepare for future steps and verify the last step.
(2) Glow. Most participants found Glow helpful and easy to
understand. They felt the different colors can effectively in-
form them of their stage on the stairs, and the thicker and
brighter glow colors at the preparation and alert area success-
fully attracted their attention. Moreover, participants enjoyed
the freedom to move their head in any direction while still
being able to see Glow. This enabled them to better explore
their surroundings and still be visually alerted about the stairs
without looking down. P9 described his experience:
“This one is my kind of style. It’s subtle, simple, and I can keep my head wherever I want at the same time. And [the color of the Glow] changes exactly when I need to step. It warns me when I’m about to take my last step… It’s very discreet but not distracting. So I’ll still be able to see people, and things around me without falling over steps. If my real glasses could do this, it would be good.”
However, two participants (P6, P14) had difficulty using
Glow because of difficulty distinguishing colors. P14 doesn’t
have color vision, while P6’s visual condition included auras
of various colors that interfere with the colors of Glow.
Moreover, some participants (e.g., P10, P12, P17) mentioned
that the blue glow on the middle stairs was difficult to notice,
especially in the bright environment for the walking tasks.
Not seeing the glow on the middle stairs distracted the par-
ticipants and made them feel uncertain about the stairs. As
P10 mentioned, “I want more information while I’m going down the stairs. The yellow color was helpful to let me know that I’m at the last step... but I didn’t really see that [blue glow in the middle], I need to be reassured that I’m still going down the stairs.” P17 slowed down as she struggled to see
the blue glow when completing the walking tasks.
(3) Path. Half of the participants indicated that Path could be
helpful. They mentioned that Path gave them a clear over-
view of the stair trends, specifically where the stairs start and
end. P13 described his impression, “This is perfect because
if Im coming to the stairs, looking at the stairs and I wont
have to look down, I immediately know where [the stair] be-
gins and where it ends, as soon as my head turns to the [Path].”
P8 also felt Path could guide her along the stairs: “It’s like a
reinforced railing but it’s also like a guide [showing] where
I’m stepping. It’s like a good reference. I kinda like to have
the guide.” Moreover, three participants (e.g., P6, P9, P13)
interpreted Path as a reminder to look for the physical railing.
Interestingly, we found that participants had different prefer-
ences for Path’s position in their visual field. Many (e.g., P12,
P16) adjusted Path to a position where their vision was best.
Meanwhile, others adjusted it to a position that they felt was
the most intuitive to comprehend. For example, P9 and P15
adjusted Path so that it was in the center of their vision and
that they could use it in a similar fashion to a GPS guide. P14
moved Path lower so he could more easily associate the virtual Path with the real staircase. As he said, “[Path] would be my
favorite if we were able to get it to [get close] to the stairs
instead hanging up in the middle of everything.”
However, half of the participants felt Path was distracting
and hard to understand. P6 even felt it was misleading to
have a virtual railing (Path) in a different place than the real
railing because it changed her perception of the width of the
staircase: “It suggests that there is a railing and then I feel I
have a very narrow staircase” (P6).
(4) Beep. All participants except for P17 felt Beep was help-
ful. P6 thought it could reduce cognitive load and enable her
to see the surroundings. As she said, “It’s really interesting.
The more often I use it, the more I like the [Beep]… I don’t
have to watch out for visual [information] of the stairs. With
the audio, I just look at the [surrounding] or look at people in
front of me and I don’t have to worry about [the stairs]. That’s actually easier.” P14 also felt Beep could be a good
compensation when the visualizations are not visible in
bright environments.
On the other hand, P17 felt that Beep may not be distinguish-
able from environmental sounds: “The world around you is
so full of noise. I mean, if I use this in the city… you have
cars honking and everything like that, I’m not sure if I would react in time.” P8 and P14 voiced the same concern about
environmental noise but explained that along with the visu-
alizations the sound would be recognizable.
Figure 8: Distribution of participants’ preferences for visualizations and sonification on HoloLens.
Preferences for visualizations (and sonification). Partici-
pants combined different visualizations and sonification
based on their preferences, as shown in Figure 8.
We found that most participants (10 out of 12) combined a
visualization with a sonification (Beep). While they all men-
tioned that visualizations were more effective than audio
feedback and used the visualization as a primary guide, par-
ticipants also appreciated the beep and used it as a secondary
complement to the visualizations. As P12 said, “Actually I
liked [Glow] more with the audio [Beep]. They augment
each other. I found it to be more useful together than sepa-
rate.” Only two participants did not combine the visualiza-
tion with the sonification: P7 used audio alone, and P17 used
Glow alone.
The most commonly chosen visualization was Glow, which
was preferred by eight participants. One participant (P14)
chose Path, while two participants (P6 and P10) chose Edge
Highlights. P13 combined all four designs because he used
each design for different purposes: Path as a reminder to look
for a railing, Edge Highlights to get an overview of the stairs,
and Glow when walking on stairs and scanning the environ-
ment for people or obstacles.
In general, participants felt that our prototype was helpful,
especially in unfamiliar places. They gave high scores
(mean=5.8, SD=1.65) for the usefulness of their preferred
visualizations and sonification. They also felt the visualiza-
tions were comfortable to see (mean=5.6, SD=1.73), as
shown in Figure 9.
Walking Time. In the walking tasks, the HoloLens itself had a substantial impact on participants’ walking time when navigating descending stairs. An ANOVA showed that participants’ walking time increased significantly when they walked downstairs wearing the HoloLens, whether or not our visualizations were used (F(2,12)=8.783, p=0.0045). However, when walking upstairs, there was no significant effect of Condition on participants’ walking time (F(2,10)=2.924, p=0.092). Since navigating descending stairs is more challenging, wearing a new device can more easily affect people’s walking speed.
With the condition of wearing the HoloLens without visualizations as the baseline, we analyzed the effect of our visualizations on PLV’s walking time. We found no significant effect of Condition (HoloLens with visualizations vs. HoloLens without visualizations) on participants’ walking time for either ascending (F(1,10)=0.466, p=0.511) or descending stairs (F(1,10)=0.114, p=0.742). Four participants
(P6, P8, P12, P17) slowed down a little on ascending stairs
with the visualizations, while five participants (P6, P13, P12,
P16, P17) slowed down on descending stairs with their pre-
ferred visualizations. Except for P17, who slowed down a lot
when walking downstairs with our visualizations, all other
participants’ times increased by less than 1 second. We in-
vestigated and found that P17 had a hard time seeing the blue
glow on middle stairs in the bright environment. She slowed
down and struggled to see the blue glow during walking tasks.
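As an illustrative aid (not the study’s actual analysis code or data), the following minimal Python sketch shows how such a one-way repeated-measures ANOVA on walking time could be run; the table layout, column names, and values are assumptions for demonstration only.

# Minimal sketch (illustrative only; not the study's analysis code or data):
# a one-way repeated-measures ANOVA on walking time across conditions,
# assuming a long-format table with "participant", "condition", "time_s".
import pandas as pd
from statsmodels.stats.anova import AnovaRM

walking = pd.DataFrame({
    "participant": ["P6", "P6", "P6", "P8", "P8", "P8", "P9", "P9", "P9"],
    "condition": ["no_device", "hololens_only", "hololens_vis"] * 3,
    "time_s": [7.2, 8.9, 8.6, 6.5, 7.8, 7.7, 8.1, 9.4, 9.2],  # placeholder values
})

# Within-subjects factor: condition; dependent variable: walking time.
result = AnovaRM(walking, depvar="time_s", subject="participant",
                 within=["condition"]).fit()
print(result)  # prints the F statistic, degrees of freedom, and p value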
Psychological Security. While there was no significant improvement in walking speed when using the visualizations, participants reported feeling safer and more confident when using our design. P11 described her experience with our prototype: “I love the fact that the [visualizations] are there. Once you understand what they mean, you can actually move more confidently… I would be very safe instead of falling down and kicking things.”
Participants gave scores to their psychological security dur-
ing stair navigation in three conditions (Figure 9): (1) walk-
ing as they typically would (mean=4.8, SD=1.60); (2) with
HoloLens but no visualizations (mean=3.9, SD=1.44); (3)
with preferred visualizations or sonification (mean=6.1,
SD=1.38). Paired Wilcoxon signed-rank tests showed that, while wearing the HoloLens significantly reduced participants’ psychological security (V=8, p=0.031), our visualizations significantly increased participants’ psychological security compared with not wearing the HoloLens (V=21, p=0.050).
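For illustration, a minimal sketch of how these paired comparisons could be computed with SciPy is shown below; the per-participant scores are made-up placeholders, not the study’s data.

# Minimal sketch (placeholder scores, not the study's data): paired Wilcoxon
# signed-rank tests over per-participant security ratings (1-7) per condition.
from scipy.stats import wilcoxon

typical   = [5, 4, 6, 3, 5, 4, 6, 5, 4, 5, 6, 4]  # walking as usual
hololens  = [4, 3, 5, 2, 4, 3, 5, 4, 3, 4, 5, 3]  # HoloLens, no visualizations
augmented = [6, 5, 7, 5, 6, 5, 7, 6, 5, 6, 7, 6]  # preferred visualizations/sonification

print(wilcoxon(typical, hololens))    # does wearing the HoloLens lower security?
print(wilcoxon(typical, augmented))   # do the visualizations raise it above baseline?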
Behavior Change. Our design changed people’s behaviors
when walking on stairs. Two participants (P8, P15) walked
without holding the railing when using their preferred visu-
alizations. Moreover, we tracked participants’ head orienta-
tion with HoloLens during the walking tasks, and found that
some participants’ (e.g., P6, P9) head orientation changed
when using our visualizations. For example, Figure 10 shows
the head forward angle of P9 on each stair stage when walk-
ing downstairs with and without the visualizations. We found that he looked much further down toward the stairs when not using our visualizations, especially at the beginning and the end of the stairs (e.g., the preparation area and alert area).
Figure 9: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7) for the usefulness and comfort level of the visualizations, and their psychological security in three conditions: without HoloLens, with HoloLens but no visualizations, and with visualizations. We label the mean and SD under each category.
Figure 10: P9’s gaze direction when walking downstairs in two conditions: using HoloLens without visualizations and using his preferred visualizations on HoloLens. The x-axis represents each stair, while the y-axis represents the angle between the participant’s gaze direction and the horizontal surface. When the participant looks up (down), the angle is positive (negative).
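To clarify how the angle plotted in Figure 10 relates to the logged head orientation, below is a minimal sketch of one way the pitch angle could be derived from a head “forward” direction vector, assuming a y-up coordinate convention; the function name and example vector are illustrative assumptions.

# Minimal sketch: derive the pitch angle plotted in Figure 10 from a logged
# head "forward" direction vector (x, y, z), assuming a y-up convention as
# used by the HoloLens/Unity coordinate frame. Positive = above the horizon.
import math

def pitch_angle_deg(forward):
    x, y, z = forward
    horizontal = math.hypot(x, z)              # length of the ground-plane projection
    return math.degrees(math.atan2(y, horizontal))

# Example: a head direction tilted down toward the next stair edge.
print(pitch_angle_deg((0.0, -0.45, 0.89)))     # about -26.8 degrees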
DISCUSSION
Our research is the first to explore AR visualizations for peo-
ple with low vision in the context of stair navigation. Our
studies demonstrate the effectiveness of our designs with
both projection-based AR and smartglasses. We found that
our visualizations on both platforms largely increased peo-
ple’s psychological security, making them feel confident and
safe when walking on stairs. Moreover, the visualizations on
projection-based AR showed a trend towards significantly
reducing PLV’s walking time on stairs.
Participants converged on some common visualization choices on each platform. For projection-based AR, the stable, thick yellow highlights on the first and last stairs were the most preferred (7/12). For highlights on the middle stairs, most participants (7/12) preferred the most visible yellow highlights rather than blue or dull yellow ones. For the HoloLens, most participants (6/12) chose the combination of Glow and Beep. Unlike prior research, which showed that PLV had very different preferences for visual augmentations [84, 85], our study revealed some common preferences among PLV across different visual abilities for stair navigation. These findings can potentially lay a foundation for future visualization designs for stair navigation and for navigation systems more generally.
We compared users’ experiences with the visualizations on
both platforms given that seven participated in both studies.
Most PLV (e.g., P10, P12) felt that the visualizations on pro-
jection-based AR were easier to use than those on the smart-
glasses. The highlights in projection-based AR were intuitive to perceive because they directly enhanced the stair edges that
participants were looking for. Meanwhile, the design on
smartglasses, especially Glow and Beep, proposed a new
way to perceive stairs: it divided the stairs into different
stages, providing only immediate information about the cur-
rent stair without a preview of what’s to come. This new stair
perception method increased participants’ cognitive load, be-
cause they had to associate the design with the physical
stairs, making them more cautious. This could be one major
reason why PLV’s walking time did not improve when using
smartglasses. P12 compared his experiences with the two
platforms: “The first experience [projection-based AR] gave me a better sense of direction as to where this was going… But the [glow] was like floating over the steps, and they didn’t stay fixed in place. That was one big difference. I like the light fixed on the step.”
While our study focused on the design and evaluation of the
AR visualizations, we discuss the technical feasibility and
challenges for our AR stair navigation systems. The imple-
mentation of such a system could be challenging. For such a
dangerous task as stair navigation, the navigation system
should be highly accurate and fast since a small error could
lead to severe consequences (e.g., a slight shift of the edge
highlight could make the user fall). The system also needs to
tolerate the user’s body (e.g., hand, head) movements when
walking on stairs, which requires a tradeoff between speed
and stabilization. While many stair detection methods have
been presented in prior research [20, 58], algorithms that lo-
cate the exact position of each stair with high speed and ac-
curacy should be investigated and tested to support the stair
visualization systems we designed for PLV.
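As a rough illustration of what a per-frame stair-edge localizer might involve, the sketch below uses classical computer vision (Canny edge detection followed by a probabilistic Hough transform) to extract near-horizontal line segments; it is a simplified, assumption-laden example rather than an implemented pipeline, and a deployed system would likely fuse it with depth data [20, 58] and temporal filtering for stability.

# Illustrative sketch: find candidate stair edges in an RGB frame with Canny
# edge detection plus a probabilistic Hough transform, keeping near-horizontal
# segments. Thresholds are placeholder assumptions.
import cv2
import numpy as np

def detect_stair_edges(frame_bgr, max_slope=0.1):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=120, maxLineGap=15)
    stair_edges = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Keep segments whose vertical change is small relative to their length.
            if abs(int(y2) - int(y1)) <= max_slope * max(abs(int(x2) - int(x1)), 1):
                stair_edges.append((x1, y1, x2, y2))
    return stair_edges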
The system implementation should also take into account
different real-world situations. Our evaluation was con-
ducted indoors, with no other people around. However, the
real world could be much more complicated, raising all kinds
of challenges. For example, AR visualizations could be less
visible outdoors, crowded stairs could diminish the accuracy
of the stair recognition because the stair edges are blocked,
and the projected highlights may also disturb other people.
In future work, we will consider these real-world challenges
when developing AR stair navigation systems. For example,
besides recognizing stairs with computer vision, we will con-
sider instrumenting the environment (e.g., using RFID) to
foster accurate and fast stair recognition in a complex envi-
ronment. We will also add face detection to avoid projecting light onto bystanders’ faces.
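One possible starting point, sketched below, is to run an off-the-shelf face detector on the camera frame and suppress projector output in the detected regions; this assumes the camera and projector share a calibrated coordinate frame, and the function and parameters are illustrative assumptions only.

# Illustrative sketch: suppress projector output wherever a face is detected,
# assuming the camera and projector share a calibrated coordinate frame.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mask_faces(projection_frame, camera_frame_gray, margin=20):
    faces = face_cascade.detectMultiScale(camera_frame_gray, 1.1, 5)
    for (x, y, w, h) in faces:
        # Enlarge each detection slightly and black out that region of the projection.
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        projection_frame[y0:y + h + margin, x0:x + w + margin] = 0
    return projection_frame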
As with any study, ours had some limitations. First, the Ho-
loLens’s weight strongly diminished PLV’s experiences,
which may have influenced our results. Future studies should
refine and evaluate the design on more lightweight smart-
glasses. Second, because of the extreme head pitch required to view the closest stairs, caused by the small vertical FOV of the HoloLens, we designed visualizations in the users’ central vision instead of adding highlights to the stairs in our
smartglasses prototype. More data could be collected to
quantify the head pitch angle to determine an effective verti-
cal FOV that allows PLV to use the stair highlights with a
comfortable head pose. Third, we asked participants to score
their feeling of psychological security, but these results could
be influenced by a novelty effect. Future research should
consider more objective measurements (e.g., biometrics) to
evaluate psychological security.
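As a back-of-envelope illustration of the geometry behind the second limitation, the sketch below estimates how much vertical FOV below the display’s center would be needed for a user to see the next stair nosing without exceeding a comfortable head pitch; the eye height, distance, and comfort threshold are illustrative assumptions.

# Back-of-envelope sketch: how much vertical FOV below the display's center is
# needed to show the next stair nosing without exceeding a comfortable head
# pitch? Eye height, distance, and comfort threshold are illustrative assumptions.
import math

def required_fov_below_center(eye_height_m, dist_to_edge_m, comfy_pitch_deg):
    look_down_deg = math.degrees(math.atan2(eye_height_m, dist_to_edge_m))
    return max(look_down_deg - comfy_pitch_deg, 0.0)

# E.g., 1.6 m eye height, next nosing 0.9 m ahead, 20 degrees of comfortable pitch:
print(required_fov_below_center(1.6, 0.9, 20))  # about 40.6 degrees below center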
CONCLUSIONS
In this paper, we designed AR visualizations to facilitate stair
navigation for people with low vision. We designed visuali-
zations (and sonification) for both projection-based AR and
smartglasses based on the different characteristics of these
platforms. We evaluated the design on each platform with 12
participants, finding that the visualizations on both platforms increased participants’ psychological security, making them feel safer and
more confident when walking on stairs. Moreover, our de-
sign for projection-based AR showed a trend towards signif-
icantly reducing participants’ walking time on stairs.
ACKNOWLEDGMENTS
This work was supported in part by the National Science
Foundation under grant no. IIS-1657315. Feiner was funded
in part by the National Science Foundation under grant no.
IIS-1514429.
REFERENCES
[1] Abu-Faraj, Z.O. et al. 2012. Design and development
of a prototype rehabilitative shoes and spectacles for
the blind. 2012 5th International Conference on
Biomedical Engineering and Informatics, BMEI 2012
(2012), 795799.
[2] Aguerrevere, D. et al. 2004. Portable 3D Sound / Sonar
Navigation System for Blind Individuals. 2nd LACCEI
Int. Latin Amer. Caribbean Conf. (2004).
[3] Ahmetovic, D. et al. 2017. Achieving Practical and
Accurate Indoor Navigation for People with Visual
Impairments. Proceedings of the 14th Web for All
Conference on The Future of Accessible Work - W4A
’17 (New York, New York, USA, 2017), 110.
[4] Ahmetovic, D. et al. 2016. NavCog: A Navigational
Cognitive Assistant for the Blind. Proceedings of the
18th International Conference on Human-Computer
Interaction with Mobile Devices and Services (2016),
9099.
[5] Archea, J.C. and Clin Geriatr, M. 1985. Environmental
Factors Associated with Stair Accidents by the Elderly.
Clinics in geriatric medicine. 1, 3 (Aug. 1985), 555
569.
[6] Berger, S. and Porell, F. 2008. The Association
Between Low Vision and Function. Journal of Aging
and Health. 20, 5 (Aug. 2008), 504525.
DOI:https://doi.org/10.1177/0898264308317534.
[7] Bhowmick, A. et al. 2014. IntelliNavi: Navigation for
Blind Based on Kinect and Machine Learning.
Springer, Cham. 172183.
[8] Bibby, S.A. et al. 2007. Vision and self-reported
mobility performance in patients with low vision.
Clinical and Experimental Optometry. 90, 2 (Mar.
2007), 115123. DOI:https://doi.org/10.1111/j.1444-
0938.2007.00120.x.
[9] Bimber, O. and Frohlich, B. 2002. Occlusion shadows:
Using projected light to generate realistic occlusion
effects for view-dependent optical see-through
displays. Proceedings - International Symposium on
Mixed and Augmented Reality, ISMAR 2002 (2002),
186198.
[10] Black, A.A. et al. 1997. Mobility Performance with
Retinitis Pigmentosa. Clinical and experimental
optometry. 80, 1 (Jan. 1997), 112.
DOI:https://doi.org/10.1111/j.1444-
0938.1997.tb04841.x.
[11] Blindness and Visual Impairment: 2017.
http://www.who.int/news-room/fact-
sheets/detail/blindness-and-visual-impairment.
Accessed: 2018-09-14.
[12] Blum, J.R. et al. 2011. What’s around me? Spatialized
audio augmented reality for blind users with a
smartphone. International Conference on Mobile and
Ubiquitous Systems: Computing, Networking, and
Services. (2011), 4962.
[13] BOptom, R.Q.I. et al. 1998. Visual Impairment and
Falls in Older Adults: The Blue Mountains Eye Study.
Journal of the American Geriatrics Society. 46, 1 (Jan.
1998), 5864. DOI:https://doi.org/10.1111/j.1532-
5415.1998.tb01014.x.
[14] Bouzit, M. et al. 2004. Tactile feedback navigation
handle for the visually impaired. IMECE2004 (Jan.
2004), 17.
[15] Campbell, M. et al. 2014. Where’s My Bus Stop?
Supporting Independence of Blind Transit Riders with
StopInfo. ASSETS ’14 Proceedings of the 16th
international ACM SIGACCESS conference on
Computers & accessibility. (2014), 1118.
DOI:https://doi.org/10.1145/2661334.2661378.
[16] Cao, X. and Balakrishnan, R. 2006. Interacting with
dynamically defined information spaces using a
handheld projector and a pen. the 19th annual ACM
symposium on User interface software and technology
(2006), 225.
[17] Capi, G. and Toda, H. 2011. A new robotic system to
assist visually impaired people. IEEE International
Workshop on Robot and Human Interactive
Communication (2011), 259263.
[18] Choi, J. and Kim, G.J. 2013. Usability of one-handed
interaction methods for handheld projection-based
augmented reality. Personal and Ubiquitous
Computing. 17, 2 (Feb. 2013), 399409.
DOI:https://doi.org/10.1007/s00779-011-0502-1.
[19] Cimarolli, V.R. et al. 2012. Challenges faced by older
adults with vision loss: a qualitative study with
implications for rehabilitation. Clinical Rehabilitation.
26, 8 (Aug. 2012), 748757.
DOI:https://doi.org/10.1177/0269215511429162.
[20] Cloix, S. et al. 2016. Low-power depth-based
descending stair detection for smart assistive devices.
Eurasip Journal on Image and Video Processing. 2016,
1 (2016). DOI:https://doi.org/10.1186/s13640-016-
0133-6.
[21] Common Types of Low Vision:
http://www.aoa.org/patients-and-public/caring-for-
your-vision/low-vision/common-types-of-low-
vision?sso=y. Accessed: 2015-07-07.
[22] Cox, A. et al. 2005. Visual impairment in elderly
patients with hip fracture: causes and associations. Eye
(London, England). 19, 6 (Jun. 2005), 652656.
DOI:https://doi.org/10.1038/sj.eye.6701610.
[23] Cummings, S.R. et al. 1995. Risk Factors for Hip
Fracture in White Women. New England Journal of
Medicine. 332, 12 (Mar. 1995), 767774.
DOI:https://doi.org/10.1056/NEJM199503233321202.
[24] Dakopoulos, D. and Bourbakis, N.G. 2010. Wearable
Obstacle Avoidance Electronic Travel Aids for Blind:
A Survey. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews). 40, 1
(Jan. 2010), 2535.
DOI:https://doi.org/10.1109/TSMCC.2009.2021255.
[25] Dougherty, B.E. et al. 2011. Abandonment of low-
vision devices in an outpatient population. Optometry
and vision science : official publication of the
American Academy of Optometry. 88, 11 (Nov. 2011),
12837.
DOI:https://doi.org/10.1097/OPX.0b013e31822a61e7.
[26] Everingham, M.R. et al. 1999. Head-mounted mobility
aid for low vision using scene classification techniques.
International Journal of Virtual Reality. 3, (1999), 3
12.
[27] Fiannaca, A. et al. 2014. Headlock: a Wearable
Navigation Aid that Helps Blind Cane Users Traverse
Large Open Spaces. Proceedings of ASSETS ’14
(2014), 1926.
[28] Filipe, V. et al. 2012. Blind Navigation Support System
based on Microsoft Kinect. Procedia Computer
Science. 14, (Jan. 2012), 94101.
DOI:https://doi.org/10.1016/J.PROCS.2012.10.011.
[29] Hara, K. et al. 2015. Improving Public Transit
Accessibility for Blind Riders by Crowdsourcing Bus
Stop Landmark Locations with Google Street View:
An Extended Analysis. ACM Transactions on
Accessible Computing. 6, 2 (2015), 123.
DOI:https://doi.org/10.1145/2717513.
[30] Harms, H. et al. 2015. Detection of ascending stairs
using stereo vision. IEEE International Conference on
Intelligent Robots and Systems. 2015-Decem, (2015),
24962502.
DOI:https://doi.org/10.1109/IROS.2015.7353716.
[31] Harwood, R.H. et al. 2005. Falls and health status in
elderly women following first eye cataract surgery: a
randomised controlled trial. The British journal of
ophthalmology. 89, 1 (Jan. 2005), 539.
DOI:https://doi.org/10.1136/bjo.2004.049478.
[32] Harwood, R.H. 2001. Visual problems and falls. Age
and Ageing. 30, SUPPL. 4 (Nov. 2001), 1318.
DOI:https://doi.org/10.1093/ageing/30.suppl_4.13.
[33] Hicks, S.L. et al. 2013. A Depth-Based Head-Mounted
Visual Display to Aid Navigation in Partially Sighted
Individuals. PLoS ONE. 8, 7 (Jul. 2013), e67695.
DOI:https://doi.org/10.1371/journal.pone.0067695.
[34] Huang, H.-C. et al. 2015. An Indoor Obstacle
Detection System Using Depth Information and Region
Growth. Sensors. 15, 10 (2015), 2711627141.
DOI:https://doi.org/10.3390/s151027116.
[35] Huang, J. et al. 2019. An augmented reality sign-
reading assistant for users with reduced vision. PLOS
ONE. 14, 1 (Jan. 2019), e0210630.
DOI:https://doi.org/10.1371/journal.pone.0210630.
[36] Hub, A. et al. Augmented Indoor Modeling for
Navigation Support for the Blind.
[37] Image Targets:
https://library.vuforia.com/articles/Training/Image-
Target-Guide. Accessed: 2019-07-04.
[38] Ivanov, R. 2010. Indoor navigation system for visually
impaired. The 11th International Conference on
Computer Systems and Technologies and Workshop for
PhD Students in Computing on International
Conference on Computer Systems and Technologies
(2010), 143.
[39] Kanwal, N. et al. 2015. A Navigation System for the
Visually Impaired: A Fusion of Vision and Depth
Sensor. Applied Bionics and Biomechanics. 2015,
(Aug. 2015), 116.
DOI:https://doi.org/10.1155/2015/479857.
[40] Khambadkar, V. and Folmer, E. 2013. GIST: a
Gestural Interface for Remote Nonvisual Spatial
Perception. the 26th annual ACM symposium on User
interface software and technology (2013), 301310.
[41] Kinateder, M. et al. 2018. Using an Augmented Reality
Device as a Distance-based Vision AidPromise and
Limitations. Optometry and Vision Science. 95, 9
(2018), 727.
DOI:https://doi.org/10.1097/OPX.0000000000001232.
[42] Kiyokawa, K. et al. An optical see-through display for
mutual occlusion of real and virtual environments.
Proceedings IEEE and ACM International Symposium
on Augmented Reality (ISAR 2000) 6067.
[43] Kiyoshi Kiyokawa et al. 2003. An Occlusion-Capable
Optical See-through Head Mount Display for
Supporting Co-located Collaboration. Proceedings of
the 2nd IEEE/ACM International Symposium on Mixed
and Augmented Reality (2003), 133.
[44] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
function, visual attention, and mobility performance in
low vision. Optometry and Vision Science. 85, 11
(Nov. 2008), 10491056.
DOI:https://doi.org/10.1097/OPX.0b013e31818b949.
[45] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
Function, Visual Attention, and Mobility Performance
in Low Vision. Optometry and vision science : official
publication of the American Academy of Optometry.
85, 11 (2008), 10491056.
DOI:https://doi.org/10.1097/OPX.0b013e31818b949.
[46] Legge, G.E. et al. 2013. Indoor Navigation by People
with Visual Impairment Using a Digital Sign System.
PLoS ONE. 8, 10 (Oct. 2013), e76783.
DOI:https://doi.org/10.1371/journal.pone.0076783.
[47] Legge, G.E. et al. 2010. Visual accessibility of ramps
and steps. Journal of Vision. 10, 11 (Sep. 2010), 8.
DOI:https://doi.org/10.1167/10.11.8.
[48] Liu, H. et al. 2015. iSee: obstacle detection and
feedback system for the blind. Proceedings of the 2015
ACM International Joint Conference on Pervasive and
Ubiquitous Computing and Proceedings of the 2015
ACM International Symposium on Wearable
Computers - UbiComp ’15. (2015), 197200.
DOI:https://doi.org/10.1145/2800835.2800917.
[49] Magic Leap: https://www.magicleap.com/magic-leap-
one.
[50] Mascetti, S. et al. 2016. ZebraRecognizer: Pedestrian
crossing recognition for people with visual impairment
or blindness. Pattern Recognition. 60, (Dec. 2016),
405419.
DOI:https://doi.org/10.1016/J.PATCOG.2016.05.002.
[51] McLeod, P. et al. 1988. Visual Search for a
Conjunction of Movement and Form is parallel.
Nature. 336, (1988), 403405.
[52] Meers, S. and Ward, K. 2005. A Substitute Vision
System for Providing 3D Perception and GPS
Navigation via Electro-Tactile Stimulation.
International Conference on Sensing Technology.
November (Nov. 2005), 551556.
[53] Meijer, P.B.L. 1992. An experimental system for
auditory image representations. IEEE Transactions on
Biomedical Engineering. 39, 2 (1992), 112121.
DOI:https://doi.org/10.1109/10.121642.
[54] Menikdiwela, M.P. et al. 2013. Haptic based walking
stick for visually impaired people. 2013 International
conference on Circuits, Controls and Communications
(CCUBE) (Dec. 2013), 16.
[55] Microsoft HoloLens | Official Site:
https://www.microsoft.com/microsoft-hololens/en-us.
Accessed: 2015-07-07.
[56] Miyasike-daSilva, V. et al. 2019. A role for the lower
visual field information in stair climbing. Gait &
Posture. 70, (May 2019), 162167.
DOI:https://doi.org/10.1016/J.GAITPOST.2019.02.033
.
[57] Munoz, R. et al. 2016. Depth-aware indoor staircase
detection and recognition for the visually impaired.
2016 IEEE international conference on multimedia &
expo workshops (ICMEW) (2016), 1–6.
[58] Murakami, S. et al. 2014. Study on stairs detection
using RGB-depth images. 2014 Joint 7th International
Conference on Soft Computing and Intelligent Systems,
SCIS 2014 and 15th International Symposium on
Advanced Intelligent Systems, ISIS 2014 (2014), 1186–1191. DOI:https://doi.org/10.1109/SCIS-
ISIS.2014.7044705.
[59] Perez-Yus, A. et al. 2015. Stair Detection and
Modelling from a Wearable Depth Camera. (2015),
2015.
[60] Perez-Yus, A. et al. 2017. Stairs detection with
odometry-aided traversal from a wearable RGB-D
camera. Computer Vision and Image Understanding.
154, (2017), 192205.
DOI:https://doi.org/10.1016/j.cviu.2016.04.007.
[61] Pinhanez, C. 2001. The Everywhere Displays
Projector: A Device to Create Ubiquitous Graphical
Interfaces. International conference on ubiquitous
computing. Springer, Berlin, Heidelberg. 315331.
[62] Priyadarshini, A.R. 2014. Dual Objective Based Navigation Assistance to the Blind and Visually Impaired. International Journal of Innovative Research in Computer and Communication Engineering. 2, 5 (2014), 4335–4342.
[63] Rapp, S. et al. 2004. Spotlight Navigation : Interacton
with a Handheld Projection Device. Advances in
Pervasive Computing (2004), 397400.
[64] van Rheede, J.J. et al. 2015. Improving mobility
performance in low vision with a distance-based
representation of the visual scene. Investigative
Ophthalmology and Visual Science. 56, 8 (2015),
48024809. DOI:https://doi.org/10.1167/iovs.14-
16311.
[65] Salber, D. and Coutaz, J. 1993. Applying the Wizard of
Oz Technique to the Study of Multimodal Systems.
Proceedings of EWHCI. (1993), 219230.
DOI:https://doi.org/10.1007/3-540-57433-6_51.
[66] Saldana, J. 2010. The Coding Manual for Qualitative
Researchers. The qualitative report. 15, 3 (2010), 754
760.
DOI:https://doi.org/10.1017/CBO9781107415324.004.
[67] Samsung I8530 Galaxy Beam:
https://www.gsmarena.com/samsung_i8530_galaxy_be
am-4566.php. Accessed: 2019-03-26.
[68] Shahrabadi, S. et al. 2013. Detection of indoor and
outdoor stairs. Iberian Conference on Pattern
Recognition and Image Analysis (2013), 847854.
[69] Shinohara, K. and Wobbrock, J.O. 2011. In the shadow
of misperception: assistive technology use and social
interactions. Proceedings of the 2011 annual
conference on Human factors in computing systems
(2011), 705714.
[70] Shoval, S. et al. 1994. Mobile robot obstacle avoidance
in a computerized travel aid for the blind. Proceedings
of the 1994 IEEE International Conference on Robotics
and Automation (1994), 20232028.
[71] Shoval, S. et al. 2003. NavBelt and the GuideCane.
IEEE Robotics and Automation Magazine. 10, 1 (Mar.
2003), 920.
DOI:https://doi.org/10.1109/MRA.2003.1191706.
[72] Summary Health Statistics for the U.S. Population:
National Health Interview Survey, 2004.: 2004.
http://www.cdc.gov/nchs/data/series/sr_10/sr10_229.p
df. Accessed: 2015-05-03.
[73] Szpiro, S. et al. 2016. Finding a store, searching for a
product: a study of daily challenges of low vision
people. Proceedings of the 2016 ACM International
Joint Conference on Pervasive and Ubiquitous
Computing. (2016), 6172.
DOI:https://doi.org/10.1145/2971648.2971723.
[74] Szpiro, S. et al. 2016. How People with Low Vision
Access Computing Devices: Understanding Challenges
and Opportunities. Proceedings of the 18th
International ACM SIGACCESS Conference on
Computers and Accessibility (2016), 171180.
[75] Tjan, B.S. et al. 2005. Digital Sign System for Indoor
Wayfinding for the Visually Impaired. 2005 IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR’05) - Workshops, 3030.
[76] Ulrich, I. and Borenstein, J. 2001. The GuideCane-
applying mobile robot technologies to assist the
visually impaired. IEEE Transactions on Systems,
Man, and Cybernetics - Part A: Systems and Humans.
31, 2 (Mar. 2001), 131136.
DOI:https://doi.org/10.1109/3468.911370.
[77] Vera, P. et al. 2014. A smartphone-based virtual white
cane. Pattern Analysis and Applications. 17, 3 (Aug.
2014), 623632. DOI:https://doi.org/10.1007/s10044-
013-0328-8.
[78] Wahab, M.H.A. et al. 2011. Smart Cane: Assistive
Cane for Visually-impaired People. IJCSI International
Journal of Computer Science Issues. 8, 4 (2011), 21
27. DOI:https://doi.org/1694-0814.
[79] Wang, S. and Tian, Y. 2012. Detecting stairs and
pedestrian crosswalks for the blind by RGBD camera.
2012 IEEE International Conference on Bioinformatics
and Biomedicine Workshops (Oct. 2012), 732739.
[80] West, C.G. et al. 2002. Is Vision Function Related to
Physical Functional Ability in Older Adults? Journal of
the American Geriatrics Society. 50, 1 (Jan. 2002),
136145. DOI:https://doi.org/10.1046/j.1532-
5415.2002.50019.x.
[81] What Are Low Vision Optical Devices?
http://www.visionaware.org/info/your-eye-
condition/eye-health/low-vision/low-vision-optical-
devices/1235. Accessed: 2015-10-11.
[82] Willis, K.D.D. and Poupyrev, I. 2011. MotionBeam: A
Metaphor for Character Interaction with Handheld
Projectors. the SIGCHI Conference on Human Factors
in Computing Systems (2011), 10311040.
[83] Yantis, S. and Jonides, J. 1990. Abrupt visual onsets
and selective attention: Voluntary versus automatic
allocation. Journal of Experimental Psychology:
Human Perception and Performance. 16, 1 (1990),
121134.
[84] Zhao, Y. et al. 2016. CueSee : Exploring Visual Cues
for People with Low Vision to Facilitate a Visual
Search Task. International Joint Conference on
Pervasive and Ubiquitous Computing (2016), 73–84.
[85] Zhao, Y. et al. 2015. ForeSee: A Customizable Head-
Mounted Vision Enhancement System for People with
Low Vision. The 17th International ACM SIGACCESS
Conference on Computers and Accessibility. (2015),
239–249.
DOI:https://doi.org/10.1145/2700648.2809865.
[86] Zhao, Y. et al. 2018. “It Looks Beautiful but Scary:”
How Low Vision People Navigate Stairs and Other
Surface Level Changes. Proceedings of the 20th
International ACM SIGACCESS Conference on
Computers and Accessibility - ASSETS ’18 (New York,
New York, USA, 2018), 307320.
[87] Zhao, Y. et al. 2017. Understanding Low Vision
People’s Visual Perception on Commercial Augmented
Reality Glasses. Proceedings of the 2017 CHI
Conference on Human Factors in Computing Systems.
(2017), 41704181.
DOI:https://doi.org/10.1145/3025453.3025949.
[88] Zhou, F. et al. 2008. Trends in augmented reality
tracking, interaction and display: A review of ten years
of ISMAR. Proceedings of the 7th IEEE International
Symposium on Mixed and Augmented Reality 2008,
ISMAR 2008 (Sep. 2008), 193202.