Designing AR Visualizations to Facilitate Stair Navigation
for People with Low Vision
Yuhang Zhao1, Elizabeth Kupferstein1, Brenda Veronica Castro1,
Steven Feiner2, Shiri Azenkot1
1Jacobs Technion-Cornell Institute, Cornell Tech,
Cornell University, New York, NY, USA
{yz769, ek544, bvc5, shiri.azenkot}@cornell.edu
2Department of Computer Science, Columbia
University, New York, NY, USA
feiner@cs.columbia.edu
ABSTRACT
Navigating stairs is a dangerous mobility challenge for peo-
ple with low vision, who have a visual impairment that falls
short of blindness. Prior research contributed systems for
stair navigation that provide audio or tactile feedback, but
people with low vision have usable vision and don’t typically
use nonvisual aids. We conducted the first exploration of
augmented reality (AR) visualizations to facilitate stair nav-
igation for people with low vision. We designed visualiza-
tions for a projection-based AR platform and smartglasses,
considering the different characteristics of these platforms.
For projection-based AR, we designed visual highlights that
are projected directly on the stairs. In contrast, for smart-
glasses that have a limited vertical field of view, we designed
visualizations that indicate the user’s position on the stairs,
without directly augmenting the stairs themselves. We eval-
uated our visualizations on each platform with 12 people
with low vision, finding that the visualizations for projec-
tion-based AR increased participants’ walking speed. Our
designs on both platforms substantially increased participants’
self-reported psychological security.
Author Keywords
Accessibility; augmented reality; low vision; visualization.
ACM Classification Keywords
• Human-centered computing~Mixed / augmented real-
ity; Accessibility technologies.
INTRODUCTION
As many as 1.2 billion people worldwide have low vision, a
visual impairment that cannot be corrected with eyeglasses
or contact lenses [11, 72]. Unlike people who are blind, peo-
ple with low vision (PLV) have functional vision that they
use extensively in daily activities [73, 74]. Low vision can
be attributed to a variety of diseases (e.g., glaucoma, diabetic
retinopathy) and affects many visual functions including vis-
ual acuity, contrast sensitivity, and peripheral vision [21].
Stair navigation is one of the most dangerous mobility chal-
lenges for PLV [5]. With reduced depth perception and pe-
ripheral vision [45, 56], PLV have difficulty detecting stairs
or perceiving the exact location of stair edges [86]. As a re-
sult, PLV experience higher rates of falls and injuries than
their typically-sighted counterparts [5, 13].
Despite the difficulty they experience, PLV use their residual
vision extensively when navigating stairs [73]. Zhao et al.
[86] found that they looked at contrast stripes (i.e., con-
trasting marking stripes on stair treads) to perceive the exact
location of stair edges; some also observed the trend of the
railing to understand the overall structure of a staircase.
However, sometimes stairs do not have contrast stripes, and
even when they do, their stripes are often not accessibly de-
signed; for example, stripes may have low contrast with the
stairs or be too thin to detect [86]. Today, the only known
tool to assist with stair navigation is the white cane, which
many PLV prefer not to use [86]. Thus, there is a gap in tools
that support PLV in the basic task of stair navigation.
Advances in augmented reality (AR) present a unique oppor-
tunity to address this problem. By automatically recognizing
the environment with computer vision, AR technology has
the potential to generate corresponding visual and auditory
feedback to help people better perceive and navigate the en-
vironment more safely and quickly.
Our research explores AR visualization designs to facilitate
stair navigation by leveraging PLV’s residual vision. Design-
ing visualizations for PLV is challenging [84, 85], especially
for stair navigation, a dangerous mobility task. On one hand,
the visualizations should be easily perceivable by PLV. A
visualization that a sighted person can easily see (e.g., a small
arrow) may not be noticeable by PLV: it may be too small
for them to see or outside their visual field [87]. On the other
hand, the visualizations should not be distracting. An ex-
tremely large, bright, or animated visualization can distract
PLV and hinder their ability to see. This could be dangerous
in the context of stair navigation. We sought to design effec-
tive visualizations for PLV, which balance visibility and dis-
traction, while providing alternative choices to support a
wide range of visual abilities.
Figure 1: Our visualizations for (a) projection-based AR and
(b) smartglasses to facilitate stair navigation for PLV.
We designed visualizations on two AR platforms that can
generate immersive virtual content in the physical environ-
ment: projection-based AR and smartglasses. Our designs
considered the different characteristics of the two platforms:
(1) For projection, which can augment a large physical space,
we designed visual highlights with different patterns that are
directly projected onto the stairs to enhance their visibility
(Figure 1a). (2) For smartglasses that have a limited vertical
field of view (FOV), we designed visualizations in the user’s
central FOV to indicate the user’s exact position on the stairs
(Figure 1b).
We evaluated our visualizations on each platform with 12
PLV. We found that the visualizations on both platforms in-
creased participants’ self-reported psychological security.
Our visualizations also changed participants’ behaviors.
Many participants didn’t stare down at the stairs when walk-
ing with our visualizations; some stopped holding the railing.
Moreover, the visualizations on the projection-based AR
platform showed a trend toward significantly reducing partici-
pants’ walking time.
In summary, we contribute the first exploration of AR visu-
alizations to facilitate stair navigation for PLV. Our evalua-
tions demonstrate the effectiveness of our visualizations
and provide insights for the design of AR visualizations for
PLV that support other tasks as well.
RELATED WORK
Stair Navigation Experiences of PLV
Mobility is critical but challenging for PLV. Many studies
have shown that reduced visual functions hinder mobility [6,
10, 19, 44, 80] and increase the risk of mobility-related acci-
dents [5, 13, 22, 23, 31, 32]. For example, Leat and Lovie-
Kitchin [45] found that visual field loss reduced walking
speed, while reduced visual acuity and contrast sensitivity
impacted distance and depth perception.
Stair navigation is one of the most dangerous mobility chal-
lenges for PLV [5]. Legge et al. [47] found that failing to
detect descending stairs was more dangerous and had a
higher correlation with falls than failing to see obstacles or
ascending stairs. West et al. [80] measured 782 older adults’
visual abilities and collected self-reported mobility limita-
tions. They found that people with low visual acuity and low
contrast sensitivity reported difficulty walking up and down
stairs without help. Bibby et al. [8] also surveyed 30 PLV
about their mobility performance, finding that PLV reported
greater difficulty navigating curbs and descending stairs.
In the human–computer interaction field, researchers also ex-
plored the challenges that PLV face during navigation, in-
cluding navigating stairs. Szpiro et al. [73] observed 11
PLV’s behaviors as they navigated to a nearby pharmacy.
They found that PLV struggled but used their vision exten-
sively, and lighting conditions affected their ability to notice
obstacles and uneven pavement on the ground. Zhao et al.
[86] conducted a more in-depth study observing 14 PLV
walking on different sets of stairs indoors and outdoors. They
found that most participants relied on their vision (e.g., look-
ing at contrast stripes) to navigate stairs. Besides the white
cane, which only four participants used, no technology was
used to assist with this task. Zhao et al.’s study emphasized
the need for tools that facilitate stair navigation for PLV.
Safe Navigation for Blind and PLV
Mobility problems for people who are blind and PLV can be
divided into two categories: wayfinding (i.e., the global
problem of planning and following routes from place to
place) and safe navigation (i.e., the local problem of taking
the next step safely without bumping into things or tripping)
[75]. Most prior research in this general area has focused on
wayfinding, both indoors [3, 27, 35, 38, 46, 62] and outdoors
[4, 12, 15, 29, 50]. Yet walking up and down stairs falls into
the latter category, which has received less attention.
Safe Navigation for Blind People
To facilitate safe navigation, researchers designed obstacle
avoidance systems for people who are blind (e.g., [1, 24, 48,
77]). By detecting obstacles with cameras or range finders,
these systems generated auditory [2, 39, 40, 53, 70, 78] or
tactile feedback [14, 52, 54, 71, 76] to notify blind users of
obstacles and their distance.
Since perceiving stairs is essential for safe navigation, many
obstacle avoidance systems also detected stairs [7, 17, 28,
34]. For example, Bhowmick et al. [7] designed IntelliNavi,
a wearable navigation system that combined a Kinect and an
earphone. With SURF descriptors and an SVM classifier, the
system recognized walls, stairs, and other obstacles and gen-
erated audio messages to safely guide a blind user through
and around these features. Capi and Toda [17] embedded
depth sensors and a PC into a wheeled walker. With the depth
sensors recognizing the environment, the system informed
blind users of the existence and position of obstacles, stairs,
and curbs using verbal directions or beeps. Moreover, Hub et
al. [36] presented an (unimplemented) concept for an indoor
navigation system that provided more specific information
about stairs, such as the number of stairs and the position of
the railing.
In addition to navigation systems, researchers have also pro-
posed stair detection algorithms [20, 30, 57–60, 68, 79]. For
example, Murakami et al. [58] proposed a method that uses
an RGB-D camera to detect stairs. Cloix et al. [20] designed
an algorithm that detected descending stairs with a passive
stereo camera, achieving a 91% recognition rate in real-time.
Perez-Yus et al. [60] proposed a real-time recognition
method that detected, located, and parametrized stairs with a
wearable RGB-D camera, and could even work when the
stairs were partially occluded.
This prior research addressed only auditory feedback for
people who are blind, overlooking PLV’s preference to use
their remaining vision. In contrast, our work addresses this
gap by designing AR visualizations to assist PLV in navi-
gating stairs.
Safe Navigation for PLV
There has been little research on navigation systems for low
vision. No work has specifically focused on stairs.
In terms of low-tech tools, some PLV use optical devices to
enhance their visual abilities. Bioptics, monoculars, tele-
scopes, and binoculars are used for recognizing signs and ob-
stacles at a distance [81]. Some PLV occasionally use prisms
that are ground into glasses to expand their FOV. However,
these specialized tools often stigmatize users in social set-
tings [69]; thus, people avoid using them or abandon them
altogether [25]. Some PLV also use a white cane, especially
at night and in unfamiliar places, but many prefer not using
it because it exposes their disability [86].
Some research has contributed obstacle avoidance systems
for PLV [26, 33, 41, 64]. Everingham et al. [26] designed a
neural-network classification algorithm for a head-worn de-
vice that segmented scenes rendered in front of users’ eyes
and recolored objects to make obstacles more visible. Simi-
larly, Kinateder et al. [41] developed a HoloLens application
that recolored the scene with high contrast colors for PLV
based on the spatial information from the HoloLens. Besides
recoloring the scenes, Hicks et al. [28] and Rheede et al. [64]
built a real-time head-worn LED display with a depth camera
to aid navigation by detecting the distance to nearby objects
and changing the brightness of the objects to indicate their
distances. To our knowledge, our research is the first attempt
to facilitate stair navigation for PLV.
INITIAL EXPLORATION
We sought to facilitate stair navigation by augmenting the
stairs with AR visualizations. In general, there are three types
of AR displays: video see-through, optical see-through, and
projection [88]. For each display type, devices exist (either
commercially or as research prototypes) with different form
factors and device characteristics. For example, a mobile de-
vice can be used as a video see-through AR platform. It is
hand-held with a limited FOV. Considering the different vis-
ual abilities of PLV and our new use case for AR, we did not
know a priori what AR platform would be most appropriate
for the stair navigation task.
To determine what platforms would be appropriate, we be-
gan by conducting a formative study with 11 PLV (7 female,
4 male; age: 28–70, mean = 40) to evaluate prototype visu-
alizations for a smartphone. A smartphone is a widely used
AR device, so it would be a practical choice with potential
for high immediate impact. We presented the real-time cap-
tured image of the stairs on the phone screen and enhanced
the stair edges with yellow highlights. However, participants
had difficulty perceiving the visualizations on the hand-held
phone screen. They switched their gaze between the phone
and the real stairs, hindering their safety during motion. All
participants said they would prefer an immersive experience
where visualizations are seamlessly incorporated into the
physical environment.
Based on the formative study, we narrowed down our target
platforms to immersive AR platforms, specifically (1) hand-
held projection-based AR, and (2) optical see-through smart-
glasses. These platforms would not require the user to switch
their gaze or hinder their ability to perceive motion [88]. We
designed and evaluated visualizations for both platforms,
given that each platform has its own strength: projection-
based AR can augment large physical surfaces but projects
content publicly, which may be better suited to private places
with few people (e.g., home, workspace); meanwhile, smart-
glasses present information only to the user, which may be
better for crowded public places (e.g., subway stations).
VISUALIZATIONS FOR PROJECTION-BASED AR
We first explored the design space of hand-held projection-
based AR, which combines a camera that recognizes the en-
vironment and a projector that projects visual contents into
that environment [61]. This platform has potential to facili-
tate mobility because it can project over a relatively large
area [88] and provide visual augmentations in people’s pe-
ripheral vision, which has been shown to be important for stair nav-
igation [56].
Although there are no popular commercial devices on the
market, researchers have prototyped different hand-held pro-
jection-based AR platforms [16, 18, 63, 82]. With a growing
number of smartphones that have embedded depth sensors
(e.g., iPhone XR, Samsung Galaxy S10) and projectors (e.g.,
Samsung Galaxy Beam [67]), smartphones may support pro-
jection-based AR with depth-sensing capabilities in the near
future. Thus, we designed visualizations for such a projec-
tion-based AR smartphone to augment the stairs for PLV.
Visualization (and Sonification) Design
From an interaction perspective, we aimed to simulate use of
a flashlight, which is commonly used by PLV in dark places
[79]: when a user points the projection-based AR phone at
the stairs, it recognizes several stairs in front of her and pro-
jects visualizations on those stairs in real time (Figure 1a).
Inspired by the contrast stripes that many PLV used to dis-
tinguish stair edges [79], we project highlights on the stair
edges to increase their visibility.
According to Zhao et al. [86], PLV had difficulty detecting
stairs and recognizing the stair edges, especially at a distance.
As a result, they walked slowly, stared down to better see the
current and next stair, and shuffled their feet to feel the stair
edges. We therefore designed our visualizations to help them
perceive the stairs from a greater distance, so they can better
plan and prepare their steps.
To alert users of the presence of stairs as they approach, we
first generate auditory feedback to provide an overview of
the stairs, including the stair direction and number of stairs.
Zhao et al. [86] found that PLV sought this kind of infor-
mation, which at times was difficult to perceive. We provide
three different auditory feedback choices: (1) Sonification
that indicates stair direction: one “ding” sound for going up
and two “ding” sounds for going down, adapted from the
sonic alerts for some elevators; (2) a human voice that ver-
bally reports the stair direction and number of stairs: “Ap-
proaching upstairs, 14 stairs going up;” and (3) a combined
sonification and human voice: “ding, approaching upstairs,
14 stairs going up.”
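To make this feedback logic concrete, the following is a minimal sketch (not the authors' implementation) of how the three auditory feedback choices could be composed; the function and field names are hypothetical, and actual audio playback and text-to-speech are left to the platform.

```python
from enum import Enum


class StairDirection(Enum):
    UP = "up"
    DOWN = "down"


def approach_announcement(direction: StairDirection, num_stairs: int,
                          mode: str = "combined") -> dict:
    """Compose the approach feedback described above.

    mode is one of "sonification", "voice", or "combined".
    Returns the number of 'ding' sounds to play and the phrase (if any)
    to speak; playing the audio is left to the platform.
    """
    # One "ding" for ascending stairs, two for descending (elevator-style cue).
    dings = 1 if direction is StairDirection.UP else 2
    place = "upstairs" if direction is StairDirection.UP else "downstairs"
    phrase = f"Approaching {place}, {num_stairs} stairs going {direction.value}"

    if mode == "sonification":
        return {"dings": dings, "speech": None}
    if mode == "voice":
        return {"dings": 0, "speech": phrase}
    return {"dings": dings, "speech": phrase}  # combined: ding, then speech


# Example: a user approaching a 14-stair ascending staircase.
print(approach_announcement(StairDirection.UP, 14, mode="combined"))
```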
Since locating the first and last stairs was most important but
challenging for PLV [86], we distinguish the first and last
stairs from the rest by projecting thick highlights on them
(Figure 2a), while projecting thin highlights on the middle
stairs (Figure 3a). We call the highlights on the first and last
stairs End Highlights, and we call those on the middle stairs
Middle Highlights. We needed a visible color for these high-
lights that would not be confused with natural light, so we
used yellow.
Beyond these highlights, we sought ways to further empha-
size the first and last stairs so that a user will notice them and
perceive their exact location from a distance. We designed
five animations to achieve this:
(1) Flash: Since a flash can attract people’s attention [83, 84],
we added this feature to the end highlights. The highlights
appear and disappear with a frequency of 1Hz.
(2) Flashing Edge: When the end highlight flashes, the user
may lose track of the edge position when the highlight dis-
appears. So in this design, we kept a stable line at the stair
edge while flashing the rest of the highlighted strip (Figure
2b). The flash occurs at a frequency of 1Hz.
(3) Moving Edge: Movement also attracts attention [51].
With a stable line at the stair edge, we added another line
moving towards the edge to generate movement (Figure 2c).
(4) Moving Horizontal Zebra: Since movement can be dis-
tracting [84], we designed a more subtle movement effect with
a yellow and black zebra pattern moving back and forth at a
frequency of 1Hz (Figure 2d).
(5) Moving Vertical Zebra: Moving the highlight over the
edge of the stair may distort the perceived location of the
edge, so we also designed a zebra pattern that is perpendicu-
lar to the edge (Figure 2e).
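The flash and movement effects above are simple time-based animations. The sketch below illustrates one way the 1 Hz timing could be computed; the parameter names and the sinusoidal back-and-forth motion are illustrative assumptions rather than the study's implementation.

```python
import math


def flash_visible(t_seconds: float, frequency_hz: float = 1.0) -> bool:
    """Flash: the highlight is shown for the first half of each 1 Hz cycle."""
    phase = (t_seconds * frequency_hz) % 1.0
    return phase < 0.5


def zebra_offset(t_seconds: float, stripe_width_px: float,
                 frequency_hz: float = 1.0) -> float:
    """Moving zebra: stripes shift back and forth by one stripe width,
    completing one back-and-forth cycle per second at 1 Hz."""
    phase = (t_seconds * frequency_hz) % 1.0
    # A sine gives a smooth back-and-forth motion between -width and +width.
    return stripe_width_px * math.sin(2 * math.pi * phase)


# Example: sample the animation state at a few points in time.
for t in (0.0, 0.25, 0.5, 0.75):
    print(t, flash_visible(t), round(zebra_offset(t, 20.0), 1))
```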
Since a staircase typically has stairs of uniform size, the mid-
dle stairs usually do not require much of the user’s attention.
We designed two middle highlights to support the user in a
minimally obtrusive way.
(1) Dull Yellow Highlights: We reduced the lightness of the
original highlights on the middle stairs to 60% to make them
less obtrusive than the end highlights (Figure 3b).
(2) Blue Highlights: We set the middle highlights to blue
since it has a lower contrast with the stairs but still enhances
their visibility [87] (Figure 3c).
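As an illustration of these color variants, the dull yellow could be obtained by scaling the lightness of the bright yellow in HLS space; the specific RGB values below are assumptions, not the colors used in the study.

```python
import colorsys


def scale_lightness(rgb, factor):
    """Return an (r, g, b) triple (0-1 floats) with its HLS lightness scaled."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, min(1.0, l * factor), s)


BRIGHT_YELLOW = (1.0, 0.9, 0.0)                    # assumed end/middle highlight color
DULL_YELLOW = scale_lightness(BRIGHT_YELLOW, 0.6)  # lightness reduced to 60%
BLUE = (0.0, 0.4, 1.0)                             # assumed lower-contrast alternative

print(DULL_YELLOW)
```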
To support a range of visual abilities, the design alternatives
can be selected and combined by a user to optimize her ex-
perience for a particular environment.
Evaluation of Projection-Based AR Visualizations
We evaluated the visualizations for projection-based AR,
aiming to answer three questions: (1) How do PLV perceive
the different visualization designs? (2) How useful are the
visualizations for stair navigation? (3) How secure do people
feel when using our visualizations?
Method
Participants. We recruited 12 PLV (6 female, 6 male; mean
age=53.9) with different low-vision conditions, as shown in
Table 1 (P1 – P12). Eleven participants (all except P3) were
registered as legally blind, meaning that either (1) their best-
corrected visual acuity in their better eye was 20/200 or
worse, or (2) their visual field was ≤ 20°. We conducted a
phone screen to ensure participants were eligible.
Apparatus. The study was conducted at an emergency exit
staircase with eight stairs. To minimize the confounding ef-
fect of computer vision accuracy, we prototyped our design
with a Wizard of Oz protocol [65]. This involved mounting
a stationary projector on a tripod at the top of the set of stairs.
The projector was connected to a laptop that generated the
visualizations. We created all visualizations with Power-
Point. A researcher sat in front of the laptop to control the
visualizations manually, based on the participant’s position
and orientation (facing upstairs or downstairs). To simulate
the limited projection area of a handheld projector, we pro-
jected visualizations only on the three stairs in front of the
participant (Figure 1a).
Figure 2: End highlights for first and last stairs. (a) Initial thick highlight with bright yellow; (b) Flashing Edge: the highlight
switches between thick (b1) and thin (b2); (c) Moving Edge; (d) Moving Horizontal Zebra; (e) Moving Vertical Zebra.
Figure 3. Middle highlights: (a) Initial thin highlights with
bright yellow; (b) Dull Yellow Highlights; (c) Blue Highlights.
We asked the participant to hold a regular phone with the
back camera facing the stairs, assuming the projected visual-
izations were from the smartphone. We also implemented the
auditory feedback on the smartphone. One researcher con-
trolled the audio feedback with another smartphone via TCP.
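The paper does not specify the control protocol; a minimal sketch of how one phone could trigger audio cues on another over TCP might look as follows, with the port number and message format being hypothetical.

```python
import socket

PORT = 5005  # hypothetical port


def run_listener(play_cue):
    """On the participant's phone: wait for one-line commands such as
    "approach_up 14" and hand them to a playback callback."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", PORT))
        server.listen(1)
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                play_cue(line.strip())


def send_command(host, command):
    """On the researcher's phone: send a single trigger command."""
    with socket.create_connection((host, PORT)) as client:
        client.sendall((command + "\n").encode())


# Example (run the listener on one device, then from another):
# send_command("192.168.0.12", "approach_up 14")
```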
Procedure. The study consisted of a single session that lasted
1.5 hours. We started the session with an interview, asking
each participant about their demographics, visual condition,
and technology use when navigating stairs. A licensed op-
tometrist conducted a confrontation visual field test and a
visual acuity test using a Snellen chart (Table 1). After the
interview, we walked the participant to the staircase and con-
tinued the study with a visualization experience session and
a stair navigation session.
During the visualization experience, we gave the participant
our prototype smartphone and explained how to use it. The
participant experienced our design in three phases: (1) Audi-
tory feedback when approaching the stairs, with three alter-
natives: sound, human voice, and the combination of them;
(2) End highlights on the first and last stairs with six design
alternatives (Figure 2); and (3) Middle highlights on the mid-
dle stairs with three design alternatives (Figure 3).
In each phase, we presented all design options to the partici-
pant and asked about their experiences, including whether or
not they liked the design, whether the design distracted them
from seeing the environment, and how they wanted to im-
prove it. For each design option, participants were encour-
aged to walk up and down the stairs. To avoid order effects,
we randomized the order of the design alternatives.
After the participant experienced all design alternatives in all
three phases, we asked them to select one alternative from
each phase to create a preferred combination. Participants
used this combination for the stair navigation portion.
During the stair navigation portion of the study, participants
conducted two stair navigation tasks: walking upstairs and
walking downstairs. They conducted each task in two condi-
tions: (1) walking in their original way (participants could
use a cane if desired, but nobody chose to use it); (2) walking
using our prototype with their preferred combinations. They
repeated each task in each condition five times.
We indicated the start points with yellow stickers on the
landings, three feet away from the top and bottom stairs. For
each task, participants stood at the starting point and started
the walking task when the researcher said, “Start.” The task
ended when both their feet first touched the landing. Partici-
pants were asked to walk as quickly and safely as possible.
We recorded the time for each task.
To reduce order effects, we used a simultaneous within-sub-
jects design, switching the task condition after each walking
up and down task. We counterbalanced the starting task
(up/down) and condition (with/without the prototype).
We ended the study with an exit interview, asking about the
participant’s general experience with the prototype. They
also gave Likert-scale scores for the usefulness and comfort
level of the prototype, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
Analysis. We analyzed the effect of our visualizations on
participants’ walking time when navigating stairs. Our ex-
periment had one within-subject factor, Condition (Visuali-
zations, No Visualizations), and one measure, Time. We de-
fined a Trial (1–5) as one walking task. To validate counterbalancing, we added another between-subject factor, Order (two levels: With–Without, Without–With), into our model. An ANOVA found no significant effect of Order on walking time (downstairs: F(1,10)=0.108, p=0.749; upstairs: F(1,10)=0.007, p=0.937) for α = 0.05.

ID | Age | Gender | Legally Blind | Diagnosis | Visual acuity (Left Eye) | Visual acuity (Right Eye) | Visual acuity (Both Eyes) | Visual field (Left Eye) | Visual field (Right Eye)
P1¹ | 65 | F | Yes | Retinopathy of Prematurity; Glaucoma | 20/400 | 20/1333 | 20/400 | Inferior constriction | All fields constriction
P2¹ | 53 | F | Yes | Retinitis Pigmentosa | 20/200 | 20/140 | 20/140 | Full | Full
P3¹ | 67 | F | No | Doyne Macular Dystrophy; Glaucoma | 20/40 | 20/140 | 20/40 | Full | Full
P4¹ | 65 | M | Yes | Glaucoma | 20/500 | 20/800 | 20/400 | Inferior nasal and superior temporal fields constriction | Constricted in superior fields
P5¹ | 58 | M | Yes | Achromatopsia | 20/400 | 20/400 | 20/200 | Full | Full
P6¹,² | 57 | F | Yes | Posterior Uveitis | 20/400 | 20/400 | 20/400 | All fields constriction | Temporal field constriction
P7¹,² | 54 | M | Yes | Flecked Retina Syndrome | 20/400 | 20/400 | 20/400 | Inferior nasal field constriction | Full
P8¹,² | 33 | F | Yes | Stargardts | 20/140 | 20/140 | 20/140 | Full | Full
P9¹,² | 35 | M | Yes | Albinism with nystagmus | 20/200 | 20/200 | 20/200 | Full | Full
P10¹,² | 37 | M | Yes | Steven Johnson's Disease | 20/700 | 20/200 | 20/140+ | Inferior temporal field constriction | Full
P11¹,² | 58 | F | Yes | Stargardts | 20/500 | 20/800 | 20/500 | All fields constriction | All fields constriction
P12¹,² | 65 | M | Yes | Macular Degeneration (Juvenile) | 20/500 | 20/400 | 20/200 | Full | Full
P13² | 48 | M | Yes | Brain Tumor Removal age 2 | 20/200 | 20/200 | 20/200 | Inferior nasal fields constriction | Inferior nasal fields constriction
P14² | 56 | M | Yes | Achromatopsia (cone monochromatism) | 20/140 | 20/140 | 20/140 | Full | Full
P15² | 56 | M | Yes | Stargardts | 20/400 | 20/200 | 20/200 | Full | Full
P16² | 63 | F | Yes | Glaucoma | 20/50 | 20/400 | 20/40 | All fields constriction | All fields constriction
P17² | 52 | F | Yes | Diabetic Retinopathy | 20/40 | 20/25 | 20/25 | Inferior and superior nasal fields constriction | Inferior and superior nasal fields constriction
Table 1. Participant demographic information. Participants labeled with superscript 1 were in the study for projection-based AR, while those labeled with superscript 2 were in the study for smartglasses.
We analyzed the participants’ qualitative feedback by coding
the interview transcripts based on grounded theory [66].
Results
Effectiveness of the Visualizations (and Sonification). All
participants felt our design was helpful and “[would make]
life easier” (P4), especially in relatively dark environments,
such as subway stations. They liked the idea of projecting
highlights on the stair edges to simulate the physical contrast
stripes. P9 said, “Having [the highlights] this bright is really
good. Because usually [the contrast stripes] are painted, and
they’re about to fade out, and they’re not as vibrant and
bright as this is. This is great here because you can see it.”
Participants gave high scores to the usefulness and comfort
level of the visualizations, as shown in Figure 4.
Next, we report participants’ responses on all design alterna-
tives in the three design phases.
(1) Auditory feedback when approaching stairs. Four partic-
ipants chose the human voice since they felt it was friendlier
and more informative, reporting the number of stairs. Mean-
while, three participants (P2, P8, P7) chose the nonverbal
sound because they had relatively good vision and felt the
human voice was unnecessary. The other participants pre-
ferred the combination, feeling that the sound and human
voice complemented each other: the “ding” sound was an
alert in noisy environments and the human voice reported
more concrete information.
(2) End highlights. All participants felt that the end high-
lights were an important aspect of the design. “This is the
part where I probably trip the most, on that last step. The light
[end highlights] is really important because it defines the end
of the step, so you’re not gonna miss a step” (P5).
Although we provided different visualizations (flash or
movement) to further enhance the end highlights, most par-
ticipants (seven out of 12) liked the original design. They felt
the thickness and brightness of the highlights sufficiently at-
tracted their attention and flashes and movements distracted
them. As P7 explained: “I guess because I don’t see details,
when I see things moving, I kind of get the sense of not see-
ing it correctly. I prefer just still… You’ve got the thick [end
highlights] to distinguish from the thin [middle highlights].
This is nice.”
Three participants (P6, P4, P11) felt the flash effect grabbed
their attention more and alerted them. P6 and P4 preferred
the Flashing Edge since it helped them track the stair
edges better than the Flash did. However, P11 preferred the Flash since
the thin stable highlight of the Flashing Edge gave her an
illusion of “another small step” (P11).
Two participants (P2, P3) liked the Moving Vertical Zebra
the most. They felt that the movement attracted their atten-
tion and the vertical zebra pattern also labeled the stair edges.
However, none of the participants liked the Moving Horizon-
tal Zebra since the movement parallel to the stair edge dis-
torted its appearance.
Although no participants chose the Moving Edge in the
study, P6 felt it could be helpful since it indicated direction.
She explained that “at least it shows you where to go.” How-
ever, most participants found it overwhelming; it made them
“feel like the ground is going to move” (P9).
(3) Middle highlights. Eleven out of 12 participants found the
middle highlights useful. Projecting highlights onto the next
several steps gave participants a preview of the stairs and
helped them better prepare their steps, especially when there
were abnormal stairs. As P5 said,
“So you don’t have to guess what’s coming [with the middle
highlights]. Sometimes you can have a broken step, you can
have no step, or you can have a step that was not installed
properly. Sometimes staircases were defective and the dis-
tance between some of them is not even… With the [high-
lights], you can see the definition of the steps.” (P5)
Even on a typical set of stairs, participants wanted the middle
highlights to confirm that they are still on the stairs, which
made them feel safe. “It’s better with [the] lines. So I know
that this won’t be my final step” (P10).
In terms of color, most participants preferred the bright yel-
low (seven out of 12), wanting to be alert on each step. “The
yellow gives me more alert and the blue gives me a little bit
more of a relaxed mode. But when I go up and down the
steps, I wanna be alert” (P5).
Figure 4: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7)
for usefulness, comfort level, and psychological security when using visualizations on projection-based AR. We label the
mean and SD under each category.
Meanwhile, four participants felt that the middle highlights
should be a different color from the end highlights. Three
participants liked the blue color since “it’s not as attracting
as yellow but still sticks out” (P9). No one liked the dull yel-
low since it was too subtle. One participant wanted red.
P6 was the only one who did not want the middle highlights.
She felt they were unnecessary since she could walk on stairs
knowing the position of the first stair and the number of stairs
(she counted stairs). The middle highlights distracted her
from seeing her surroundings.
Walking Time. Our visualizations reduced the time partici-
pants spent during stair navigation. For descending stairs,
participants’ navigation time was 6.42% shorter when using
their preferred visualizations (mean=6.17s, SD=1.93s) than
when not using them (mean=6.59s, SD=2.03s). With a
paired t-test, we found a considerable
trend towards significance when evaluating the effect of
Condition on the time walking downstairs (t11=-2.131,
p=0.0565) with an effect size of 0.615 (Cohen’s d). P11 re-
marked on the increase in her speed: “This is the fastest I’ve
used stairs ever! You don’t understand, this is like I’m back
to being me!”
For ascending stairs, participants’ navigation time was
5.78% shorter when using their preferred visualizations
(mean=5.84s, SD=1.59s) than when not using them
(mean=6.20s, SD=1.81s). With a paired t-test, we also found
a trend towards a significant effect of Condition on the time
walking upstairs (t11=1.9894, p=0.0721).
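For reference, the paired comparisons above can be reproduced from per-participant mean walking times with a paired t-test and Cohen's d computed on the differences; the arrays below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Per-participant mean walking times (seconds); placeholder values only.
with_vis = np.array([5.1, 6.0, 4.8, 7.2, 6.5, 5.9, 6.1, 7.8, 5.5, 6.3, 6.7, 6.1])
without_vis = np.array([5.6, 6.4, 5.0, 7.9, 6.9, 6.2, 6.6, 8.1, 5.8, 6.8, 7.1, 6.7])

# Paired (within-subject) t-test of Condition on walking time.
t_stat, p_value = stats.ttest_rel(with_vis, without_vis)

# Cohen's d for paired samples: mean difference over the SD of the differences.
diff = with_vis - without_vis
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({len(diff) - 1}) = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.3f}")
```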
Behavior Change. Based on our observations of the walking
tasks, some participants (e.g., P9, P4) looked down less when
using our design since they could use their lower peripheral
vision to notice the highlights. As P9 mentioned, “I know
mentally I’m looking in the bottom field of vision, even
though I’m looking straight ahead…The [highlight] stands
out very bright and my peripheral catches it, it catches blue,
it catches the yellow…Without the system, I have to stare a
lot more at the stairs and, I have to look a little bit extra to
make sure that that is really the last step.”
Some participants (e.g., P6, P3, P11) hesitated at the first and
last stairs and felt the stairs with their feet when walking
without our visualizations (especially in the first two trials of
the walking tasks). When using our visualizations, they
stopped feeling the stairs with their feet. Some participants
(e.g., P7, P11) walked without holding the railing when using
our visualizations. P10 also changed how he balanced his
body when using our prototype: without our design, he
walked down leaning his left shoulder forward instead of fac-
ing forward. He explained:
“I noticed when [I walked] without the [highlights], I’m
walking more down on my side when descending the stairs.
In case if I fall, then I fall at least more on my side as opposed
to falling forward. With the [highlights] on, I was walking
more straight down. I feel a lot more confident” (P10).
Psychological Security. Our visualizations improved partic-
ipants’ psychological security when walking on stairs. Par-
ticipants all gave high scores to their psychological security
when using our prototype (mean=6.6, SD=0.67), as shown in
Figure 4. They all felt more confident and safer when navi-
gating stairs with the projected visualizations. P6 and P8 also
said that the visualizations reduced their visual effort, so that
they could look at the surroundings (e.g., other people and
obstacles on the stairs), which also helped them feel safe.
Social Acceptance. Most participants were not concerned
about projecting highlights on stairs. They felt this technol-
ogy was “cool” and could even be beneficial for people who
are sighted, for example, in dark environments. P11 regarded
the prototype as an identity tool (similar to the identity
cane), which could indicate her disability to others, so that
other people would not bump into her on stairs. Only P6 and P9
were concerned that this technology might “scare others” and
draw too much attention to themselves. They preferred de-
vices, such as smartglasses, that would show the visualiza-
tions only to them.
VISUALIZATIONS FOR SMARTGLASSES
The second platform we explored was optical see-through
smartglasses. They present information only to the user and
do not need to project onto a physical surface [88]. Today,
this platform is more readily available than projection-based
AR. Beyond smartglasses prototypes developed by research-
ers [9, 42, 43], many early versions of products, such as Mi-
crosoft HoloLens [55] and Magic Leap One [49], mark a
trend towards mainstream smartglasses devices.
However, current optical see-through smartglasses have a
very limited FOV [88] (e.g., ca. 30° wide × 17° high for Ho-
loLens v1), largely limiting the area for presenting AR visu-
alizations. While the recently announced HoloLens v2 is es-
timated to have a 29° vertical FOV, it is still much smaller
than that of a typically-sighted human (120° vertical FOV).
With the limited vertical FOV, the highlight design on pro-
jection-based AR would not work well for the smartglasses.
To see the highlight on the current stair (Figure 5a), a user
would have to look nearly straight down to her feet (Figure
5b), hindering her ability to see her surroundings. This can
be potentially dangerous and is physically strenuous. As
such, our visualizations aim to facilitate a comfortable head
pose by indicating the user’s exact location on the stairs with-
out augmenting the stairs directly.
Figure 5. (a) The visual effect of adding highlights to stairs
with HoloLens. (b) A user stares down to see the highlights.
Visualization (and Sonification) Design
Similar to projection-based AR, when the user stands on the
landing, our system verbally notifies the user of the existence
of the stairs with stair direction and the number of stairs.
According to Zhao et al.’s study, knowing when the stairs
start and end can help PLV plan their steps, while the middle
stairs are less important because most stairs are uniform [86].
Thus, to better inform the user of their position on the stairs,
we distinguish a user’s position on a set of stairs based on
how close she is to a change in her step pattern. This change
can involve stepping down for the first time after walking on
a flat surface or stepping on a flat surface after stepping down
repeatedly. We provide feedback to indicate that a change is
approaching, and then that the change is about to occur.
Specifically, the following are the seven stages we used in
our design, described for descending stairs as an example
(Figure 6): (1) Upper landing: the flat surface that is more
than 3' away from the edge of the top stair; (2) Upper prepa-
ration area: 1.5'–3' away from the top stair edge where the
person should prepare to step down; (3) Upper alert area:
within 1.5' from the top stair edge where the person’s next
step would be stepping down; (4) Middle stairs: between the
edge of the top stair and the edge of the second-to-last stair,
where the person is stepping down repeatedly; (5) Lower
preparation area: the last stair, where the person is one step
away from the flat surface and should prepare for the immi-
nent flat surface; (6) Lower alert area: within 1.5' from the
last stair edge on the landing where the person’s next step is
on the flat surface (not stepping down); (7) Lower landing:
more than 1.5' from the last stair edge, where the person is walking
on the flat surface again. Our visualizations inform PLV of the
different stair stages through different designs. We designed two vis-
ualizations and one sonification.
(1) Glow visualization (Figure 7a–d): We generate a glow
effect at the bottom of the display to simulate the experience
of seeing the edge highlights on the stairs with peripheral vi-
sion. Unlike the highlights that are attached to the stair edges,
the glow is always at the bottom of the vertical FOV, so that
the user can hold their head at a comfortable angle and does
not need to look down to see the glow. We adjust the glow
color and size to inform the user of their current stage on the
stairs:
• Landing stages: thin red glow to indicate the flat surface.
• Preparation stages: thick cyan glow, telling users to pre-
pare for the first surface level change or the end of surface
level changes.
• Alert stages: thick yellow glow, indicating that the next
step is the first surface level change or the end of surface
level changes.
• Middle stairs: thin blue glow to indicate the middle stairs.
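A minimal sketch of how a system might map the user's tracked position to the seven stages defined earlier and then to the Glow appearance is shown below; the thresholds in feet follow the stage definitions, while the function signature and style encoding are illustrative assumptions, not the prototype's code.

```python
from enum import Enum


class Stage(Enum):
    UPPER_LANDING = 1
    UPPER_PREPARATION = 2
    UPPER_ALERT = 3
    MIDDLE_STAIRS = 4
    LOWER_PREPARATION = 5
    LOWER_ALERT = 6
    LOWER_LANDING = 7


def classify_stage(dist_to_top_edge_ft, stair_index, num_stairs,
                   dist_past_bottom_edge_ft):
    """Map a tracked position to one of the seven stages (descending case).

    dist_to_top_edge_ft: horizontal distance from the top stair edge while the
        user is still on the upper landing (None once on the stairs).
    stair_index: 1-based index of the stair under the user (None on landings).
    dist_past_bottom_edge_ft: distance walked past the last stair edge on the
        lower landing (None before reaching it).
    """
    if dist_to_top_edge_ft is not None:
        if dist_to_top_edge_ft > 3.0:
            return Stage.UPPER_LANDING
        if dist_to_top_edge_ft > 1.5:
            return Stage.UPPER_PREPARATION
        return Stage.UPPER_ALERT
    if stair_index is not None:
        # The last stair is the lower preparation area; all others are middle stairs.
        return Stage.LOWER_PREPARATION if stair_index == num_stairs else Stage.MIDDLE_STAIRS
    if dist_past_bottom_edge_ft <= 1.5:
        return Stage.LOWER_ALERT
    return Stage.LOWER_LANDING


# Glow appearance per stage, following the description above: (color, thickness)
# of the glow at the bottom of the display.
GLOW_STYLE = {
    Stage.UPPER_LANDING: ("red", "thin"),
    Stage.UPPER_PREPARATION: ("cyan", "thick"),
    Stage.UPPER_ALERT: ("yellow", "thick"),
    Stage.MIDDLE_STAIRS: ("blue", "thin"),
    Stage.LOWER_PREPARATION: ("cyan", "thick"),
    Stage.LOWER_ALERT: ("yellow", "thick"),
    Stage.LOWER_LANDING: ("red", "thin"),
}

print(classify_stage(2.0, None, 14, None))  # -> Stage.UPPER_PREPARATION
```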
(2) Path visualization (Figure 7e–g): Inspired by the railings,
which PLV used as a visual cue to see where the stairs start
and end [86], we designed this visualization to show the trend
of the stairs. The direction of the Path follows the stairs: it
goes straight forward along the landing, turns down (or up)
along the slope of the descending (or ascending) stairs, and
goes straight forward again when arriving at the landing. The
Path is generated at the user’s eye level with a fixed distance
from one side of the head (we adjusted its specific position
based on the user’s visual field and preference), making sure
that they can see it without looking too far down. The user
can thus observe the start and end of the stairs by looking at
the turning points of the Path. To better distinguish the land-
ing and the stairs, we colored the straight part of the visuali-
zation (over the landing) yellow and the slope blue. We
added virtual pillars to connect the Path to each stair to help
users associate the visualization with the physical stairs.
(3) Beep sonification: This sonification informs users of their
current position on the stairs. Similar to glow, we adjusted
the sound based on the different stages of the stairs:
• Start landing stage: no sound.
• Preparation stages: low-frequency beep, indicating users
should prepare for the first surface level change or the end
of surface level changes.
• Alert stages: high-frequency beep, indicating that the next
step is the first surface level change or the end of surface
level changes.
• Middle stairs: no sound.
• End landing stage: audio description that verbally reports
“Stair ends.”
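The corresponding stage-to-sonification mapping could be expressed as a simple table; the beep frequencies below are assumptions, since the paper only distinguishes low- and high-frequency beeps.

```python
# Beep sonification per stage, following the description above (descending case).
# The tone frequencies (Hz) are illustrative assumptions.
BEEP_FOR_STAGE = {
    "upper_landing": None,                      # start landing: no sound
    "upper_preparation": ("beep", 440),         # low-frequency beep
    "upper_alert": ("beep", 880),               # high-frequency beep
    "middle_stairs": None,                      # no sound
    "lower_preparation": ("beep", 440),
    "lower_alert": ("beep", 880),
    "lower_landing": ("speech", "Stair ends"),  # verbal report at the end
}
```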
Evaluation of Smartglasses Visualizations
We conducted a user study to evaluate the visualizations we
designed for commercial smartglasses. We aimed to answer:
(1) How do PLV perceive the visualizations on smartglasses?
(2) How effective are the visualizations for stair navigation?
(3) How secure do PLV feel when using our visualizations?
Figure 6: The seven stages of the stairs.
Figure 7: Glow (a–d) and Path (e–g). Glow: (a) thin red glow on the landing; (b) thick cyan glow in the preparation area; (c) thick
yellow glow in the alert area; (d) thin blue glow on the middle of the stairs. Path: (e) view of the Path on the landing; (f) view of the
Path when getting close to the first stair; (g) view of the Path on the middle of the stairs.
Method
Participants. We recruited 12 PLV (5 female, 7 male; mean
age=51.6) with different low vision conditions (Table 1, P6–
P17). All participants were legally blind. Seven participants
had taken part in the evaluation of our projection-based AR
visualizations, but they did not see the stairs used in this
study. We followed the same recruitment procedures as in
the previous study.
Apparatus. We built our prototype on Microsoft HoloLens
v1. We chose HoloLens because of its FOV (~34° diagonal),
binocular displays, and ability to be worn with eyeglasses.
Many lightweight smartglasses have only one display in
front of the right eye (e.g., Google Glass, North Focals), and
are unusable for PLV with vision only in the left eye. Other
options either have a smaller FOV (e.g., Epson Moverio BT-
300, 23° diagonal) or cannot be used with eyeglasses (e.g.,
Magic Leap One).
To minimize the confounding effect of general computer vi-
sion accuracy, we marked the position of the stairs with two
Vuforia image targets [37] (on the side walls at the top and
bottom landing of the stairs) that can be recognized by Ho-
loLens. This provided an anchor in the environment, which
enabled our application to determine the position of the user
on the stairs by tracking the motion of the HoloLens, improv-
ing the accuracy of our visualizations and sonification.
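The prototype itself runs on HoloLens; as a platform-neutral illustration, the core geometric step of estimating the user's progress along the staircase from the tracked headset position and the two image-target anchors might look like this (coordinates, frames, and the normalization are assumptions).

```python
import numpy as np


def progress_along_stairs(head_pos, top_anchor, bottom_anchor):
    """Return the user's normalized progress (0 at the top-landing anchor,
    1 at the bottom-landing anchor) by projecting the tracked head position
    onto the line between the two image-target anchors."""
    head_pos, top_anchor, bottom_anchor = map(
        np.asarray, (head_pos, top_anchor, bottom_anchor))
    axis = bottom_anchor - top_anchor
    t = np.dot(head_pos - top_anchor, axis) / np.dot(axis, axis)
    return float(np.clip(t, 0.0, 1.0))


# Example with made-up coordinates (meters, world frame).
print(progress_along_stairs([1.0, 1.5, 2.0], [0.0, 3.0, 0.0], [4.0, 0.0, 5.0]))
```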
Procedure. The study consisted of a single session that lasted
about 1.5–2 hours. An initial interview asked about de-
mographics, visual condition, and use of tools when navi-
gating stairs. Next, a licensed optometrist on the team con-
ducted a confrontation visual field test and a visual acuity
test using a Snellen chart (Table 1). We then gave the Ho-
loLens to the participant and explained how to use it. After
the participant put on the HoloLens, the optometrist tested
her visual field and visual acuity again to measure the effect
of the HoloLens on the participant’s visual ability. We con-
tinued the study with a design exploration session and a stair
navigation session.
We conducted the design exploration session at an emer-
gency staircase with 12 stairs (different stairs than those in
the projection study). Participants wore the HoloLens and
experienced four different designs: Glow, Path, Beep, and
Edge Highlights as a baseline. Participants were allowed to
walk up and down the stairs to experience the design in-situ.
They thought aloud, talking about whether or not they liked
the design, whether the design distracted them, and how they
wanted to improve it. We counterbalanced by randomizing
the presentation order of the four designs. After the partici-
pant experienced all the design alternatives, we asked for
their preferred combination.
The stair navigation session was conducted at another stair-
case with 14 stairs—a wider set of access stairs in a more
brightly lit and open environment. Participants performed
two stair navigation tasks: walking upstairs and walking
downstairs. They conducted each task in three conditions: (1)
walking on the stairs as they typically would (they could use
a cane if desired, but none chose to use it), (2) walking on the
stairs with HoloLens and no visualizations, and (3) walking
on the stairs with HoloLens and their chosen designs. Each
task in each condition was repeated five times.
We indicated the start and end points on the stairs with stick-
ers that were three feet away from the top and bottom steps
on the landings. For each task, the participant stood at the
starting point and started when the researcher said, “Start.”
The task ended when both her feet first arrived at the landing.
Participants were asked to walk as quickly and safely as pos-
sible during the task. We recorded the time for each task.
To reduce the effect of order on the results, we used a simul-
taneous within-subjects design by switching the task condi-
tion after each round of walking up and down. We also coun-
terbalanced the starting task (up/down) and the conditions.
The study ended with a final interview asking about the par-
ticipant’s general experience with the prototype. We asked
them to score the usefulness and comfort level of the proto-
type on a Likert scale, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
Analysis. We analyzed the effect of our visualizations on
participants’ walking time when navigating stairs. Our ex-
periment had one within-subject factor, Condition (No Ho-
loLens; HoloLens w/o visualizations; Visualizations), and
one measure, Time. We defined a Trial (1–5) as one walking
task. We determined Time from the video we recorded dur-
ing the study. When analyzing data, we removed the first trial,
treating it as a practice trial for participants to get used to the
HoloLens.
To validate counterbalancing, we added another between-
subject factor, Order (six levels based on the three condi-
tions), into our model. An ANOVA found no significant ef-
fect of Order on walking time (downstairs: F(5,6)=0.35,
p=0.338; upstairs: F(5,6)=0.445, p=0.804) and no significant
effect of the interaction between Order and Condition on
walking time (downstairs: F(10,12)=1.418, p=0.280, upstairs:
F(10, 12)=0.535, p=0.835).
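As an illustration of this counterbalancing check, an ordinary-least-squares ANOVA over long-format walking times can test whether Order or its interaction with Condition predicts time; this sketch uses synthetic placeholder data and only approximates the repeated-measures analysis reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Synthetic long-format data standing in for the study's per-trial walking
# times: 12 participants x 3 conditions x 4 analyzed trials.
participants = [f"P{i}" for i in range(1, 13)]
conditions = ["no_hololens", "hololens_no_vis", "visualizations"]
rows = []
for i, pid in enumerate(participants):
    order = f"order_{i % 6}"  # six counterbalanced orders
    for cond in conditions:
        for trial in range(4):
            rows.append({"participant": pid, "order": order,
                         "condition": cond, "trial": trial,
                         "time": 6.0 + rng.normal(0, 0.8)})
df = pd.DataFrame(rows)

# Simplified check that Order (and Order x Condition) does not predict walking
# time; the study used a repeated-measures ANOVA, which this OLS version only
# approximates by treating observations as independent.
model = smf.ols("time ~ C(order) * C(condition)", data=df).fit()
print(anova_lm(model, typ=2))
```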
We analyzed participants’ qualitative responses with the
same method we used in the previous study.
Results
Experience with the Smartglasses. We first report the effect
of the HoloLens on participants’ visual abilities. Some par-
ticipants appreciated the tinted optics because they blocked
environmental glare. Three participants’ visual acuity im-
proved when wearing the HoloLens (P14: from 20/140 to
20/100, P7: from 20/400 to 20/200, P15: from 20/200 to
20/140). However, P12 experienced a decrease in visual acu-
ity (from 20/200 to 20/400). It is possible that the tint of the
HoloLens made the environment too dark for him to see. In
terms of visual field, no participants experienced a change
while wearing the HoloLens. All participants mentioned the
heaviness of the hardware, which potentially impacted their
experience negatively.
Effectiveness of the visualizations (and sonification). We
report participants’ feedback on each design alternative.
(1) Edge Highlights (Baseline). Most participants found it
difficult to use the Edge Highlights because of the limited
vertical FOV. Participants had to angle their head down a lot
to see the highlight on the current stair. They found it uncom-
fortable and unsafe to maintain that posture on the stairs, es-
pecially when walking down. P9 reported that, “To continue
seeing everything, my head has to be completely [down], my
chin is touching my chest.”
Nevertheless, some participants (e.g., P6, P10, P13) felt this
design was helpful because it provided a preview for future
steps, especially when they looked downstairs from the top
landing. Interestingly, P10 mentioned that he could combine
his own vision (that is not covered by the HoloLens) with the
Edge Highlights. He didn’t feel the need to look down all the
time because he has good peripheral vision to see the stairs,
and he could use the Edge Highlights on the HoloLens to
prepare for future steps and verify the last step.
(2) Glow. Most participants found Glow helpful and easy to
understand. They felt the different colors could effectively in-
form them of their stage on the stairs, and the thicker and
brighter glow colors at the preparation and alert area success-
fully attracted their attention. Moreover, participants enjoyed
the freedom to move their head in any direction while still
being able to see Glow. This enabled them to better explore
their surroundings and still be visually alerted about the stairs
without looking down. P9 described his experience:
“This one is my kind of style. It’s subtle, simple, and I can
keep my head wherever I want at the same time. And [the
color of the Glow] changes exactly when I need to step. It
warns me when I’m about to take my last step… It’s very dis-
creet but not distracting. So I’ll still be able to see people,
and things around me without falling over steps. If my real
glasses could do this, it would be good.”
However, two participants (P6, P14) had difficulty using
Glow because they struggled to distinguish its colors. P14 does not
have color vision, while P6’s visual condition included auras
of various colors that interfered with the colors of Glow.
Moreover, some participants (e.g., P10, P12, P17) mentioned
that the blue glow on the middle stairs was difficult to notice,
especially in the bright environment for the walking tasks.
Not seeing the glow on the middle stairs distracted the par-
ticipants and made them feel uncertain about the stairs. As
P10 mentioned, “I want more information while I’m going
down the stairs. The yellow color was helpful to let me know
that I’m at the last step...but I didn’t really see that [blue glow
in the middle], I need to be reassured that I’m still going
down the stairs.” P17 slowed down as she struggled to see
the blue glow when completing the walking tasks.
(3) Path. Half of the participants indicated that Path could be
helpful. They mentioned that Path gave them a clear over-
view of the stair trends, specifically where the stairs start and
end. P13 described his impression, “This is perfect because
if I’m coming to the stairs, looking at the stairs and I won’t
have to look down, I immediately know where [the stair] be-
gins and where it ends, as soon as my head turns to the [Path].”
P8 also felt Path could guide him along the stairs: “It’s like a
reinforced railing but it’s also like a guide [showing] where
I’m stepping. It’s like a good reference. I kinda like to have
the guide.” Moreover, three participants (e.g., P6, P9, P13)
interpreted Path as a reminder to look for the physical railing.
Interestingly, we found that participants had different prefer-
ences for Path’s position in their visual field. Many (e.g., P12,
P16) adjusted Path to a position where their vision was best.
Meanwhile, others adjusted it to a position that they felt was
the most intuitive to comprehend. For example, P9 and P15
adjusted Path so that it was in the center of their vision and
that they could use it in a similar fashion to a GPS guide. P14
moved Path lower so he could more easily associate the virtual
Path with the real staircase. As he said, “[Path] would be my
favorite if we were able to get it to [get close] to the stairs
instead hanging up in the middle of everything.”
However, half of the participants felt Path was distracting
and hard to understand. P6 even felt it was misleading to
have a virtual railing (Path) in a different place than the real
railing because it changed her perception of the width of the
staircase: “It suggests that there is a railing and then I feel I
have a very narrow staircase” (P6).
(4) Beep. All participants except for P17 felt Beep was help-
ful. P6 thought it could reduce cognitive load and enable her
to see the surroundings. As she said, “It’s really interesting.
The more often I use it, the more I like the [Beep]… I don’t
have to watch out for visual [information] of the stairs. With
the audio, I just look at the [surrounding] or look at people in
front of me and I don’t have to worry about [the stairs].
That’s actually easier.” P14 also felt Beep could be a good
fallback when the visualizations are not visible in
bright environments.
On the other hand, P17 felt that Beep may not be distinguish-
able from environmental sounds: “The world around you is
so full of noise. I mean, if I use this in the city… you have
cars honking and everything like that, I’m not sure if I would
react in time.” P8 and P14 voiced the same concern about
environmental noise but explained that along with the visu-
alizations the sound would be recognizable.
Figure 8: Distribution of participants’ preferences for visualizations and sonification on HoloLens.
Preferences for visualizations (and sonification). Partici-
pants combined different visualizations and sonification
based on their preferences, as shown in Figure 8.
We found that most participants (10 out of 12) combined a
visualization with a sonification (Beep). While they all men-
tioned that visualizations were more effective than audio
feedback and used the visualization as a primary guide, par-
ticipants also appreciated the beep and used it as a secondary
complement to the visualizations. As P12 said, “Actually I
liked [Glow] more with the audio [Beep]. They augment
each other. I found it to be more useful together than sepa-
rate.” Only two participants did not combine the visualiza-
tion with the sonification: P7 used audio alone, and P17 used
Glow alone.
The most commonly chosen visualization was Glow, which
was preferred by eight participants. One participant (P14)
chose Path, while two participants (P6 and P10) chose Edge
Highlights. P13 combined all four designs because he used
each design for different purposes: Path as a reminder to look
for a railing, Edge Highlights to get an overview of the stairs,
and Glow when walking on stairs and scanning the environ-
ment for people or obstacles.
In general, participants felt that our prototype was helpful,
especially in unfamiliar places. They gave high scores
(mean=5.8, SD=1.65) for the usefulness of their preferred
visualizations and sonification. They also felt the visualiza-
tions were comfortable to see (mean=5.6, SD=1.73), as
shown in Figure 9.
Walking Time. In the walking tasks, the HoloLens itself had a substantial impact on participants’ walking time when navigating descending stairs. An ANOVA showed that participants’ walking time significantly increased when they walked downstairs wearing the HoloLens, whether or not they used our visualizations (F(2,12)=8.783, p=0.0045). However, when walking upstairs, there was no significant effect of Condition on participants’ walking time (F(2,10)=2.924, p=0.092). Since navigating descending stairs is more challenging, wearing a new device can more easily affect people’s walking speed.
Using the condition of wearing the HoloLens without visualizations as the baseline, we analyzed the effect of our visualizations on PLV’s walking time. We found no significant effect of Condition (HoloLens with visualizations vs. HoloLens without visualizations) on participants’ walking time for either ascending (F(1,10)=0.466, p=0.511) or descending stairs (F(1,10)=0.114, p=0.742). Four participants (P6, P8, P12, P17) slowed down slightly on ascending stairs with the visualizations, while five participants (P6, P12, P13, P16, P17) slowed down on descending stairs with their preferred visualizations.
Except for P17, who slowed down considerably when walking downstairs with our visualizations, all other participants’ times increased by less than one second. We found that P17 had a hard time seeing the blue glow on the middle stairs in the bright environment and slowed down as she struggled to see it during the walking tasks.
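To illustrate how such a within-subjects comparison can be run, below is a minimal sketch of a one-way repeated-measures ANOVA in Python with statsmodels; the participant labels, column names, and times are hypothetical stand-ins, not the study data.

```python
# A minimal sketch (not the study's analysis code) of a one-way repeated-
# measures ANOVA on walking time. All numbers and labels are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "participant": ["P6", "P6", "P6", "P8", "P8", "P8", "P12", "P12", "P12"],
    "condition":   ["usual", "hololens", "hololens_vis"] * 3,
    "time_s":      [9.8, 11.5, 11.2, 8.9, 10.7, 10.9, 10.2, 12.1, 11.8],
})

# One walking time per participant per condition (e.g., downstairs trials).
result = AnovaRM(data, depvar="time_s", subject="participant",
                 within=["condition"]).fit()
print(result)  # reports F, degrees of freedom, and p for the Condition effect
```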
Psychological Security. While there was no significant improvement in walking speed when using the visualizations,
participants reported feeling safer and more confident when
using our design. P11 described her experience when using
our prototype, “I love the fact that the [visualizations] are
there. Once you understand what they mean, you can actually
move more confidently… I would be very safe instead of
falling down and kicking things.”
Participants rated their psychological security during stair navigation in three conditions (Figure 9): (1) walking as they typically would (mean=4.8, SD=1.60); (2) with HoloLens but no visualizations (mean=3.9, SD=1.44); and (3) with their preferred visualizations or sonification (mean=6.1, SD=1.38). Paired Wilcoxon signed-rank tests showed that, while wearing the HoloLens significantly reduced participants’ psychological security (V=8, p=0.031), our visualizations significantly increased participants’ psychological security compared with not wearing the HoloLens at all (V=21, p=0.050).
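For concreteness, the following is a minimal sketch of such paired Wilcoxon signed-rank tests using SciPy; the 7-point ratings below are hypothetical placeholders (in an assumed fixed participant order), not the collected scores.

```python
# A minimal sketch (not the study's script) of paired Wilcoxon signed-rank
# tests on 7-point security ratings; all values are hypothetical placeholders.
from scipy.stats import wilcoxon

usual        = [5, 4, 6, 3, 5, 4, 7, 5, 4, 6, 3, 5]  # walking as usual
hololens     = [4, 3, 5, 2, 4, 3, 6, 4, 3, 5, 2, 4]  # HoloLens, no visuals
with_visuals = [6, 5, 7, 5, 6, 5, 7, 6, 5, 7, 5, 6]  # preferred visualizations

print(wilcoxon(usual, hololens))       # does the device alone lower ratings?
print(wilcoxon(with_visuals, usual))   # do the visualizations raise ratings?
```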
Behavior Change. Our design changed people’s behaviors
when walking on stairs. Two participants (P8, P15) walked
without holding the railing when using their preferred visu-
alizations. Moreover, we tracked participants’ head orientation with the HoloLens during the walking tasks and found that some participants’ (e.g., P6, P9) head orientation changed when using our visualizations. For example, Figure 10 shows P9’s forward head angle at each stair stage when walking downstairs with and without the visualizations. We found that he looked much farther down at the stairs when not using our visualizations, especially at the beginning and the end of the stairs (e.g., the preparation and alert areas).
Figure 9: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7) for the usefulness and comfort level of the visualizations, and their psychological security in three conditions: without HoloLens, with HoloLens but no visualizations, and with visualizations. We label the mean and SD under each category.
Figure 10: P9’s gaze direction when walking downstairs in two conditions: using HoloLens w/o visualizations and using his preferred visualizations on HoloLens. The x-axis represents each stair, while the y-axis represents the angle between the participant’s gaze direction and the horizontal surface. When the participant looks up (down), the angle is positive (negative).
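As an illustration of how the angles plotted in Figure 10 can be derived, here is a minimal sketch (under assumed conventions, not the logging code we used) that converts a head-forward direction vector sampled from the headset’s pose into a pitch angle relative to the horizontal plane.

```python
# A minimal sketch: pitch angle of a forward head/gaze vector relative to the
# horizontal plane, positive when looking up and negative when looking down.
# Assumes a y-up coordinate frame; not the study's actual logging code.
import math

def head_pitch_deg(forward):
    """forward: (x, y, z) head-forward direction with y pointing up."""
    x, y, z = forward
    horizontal = math.hypot(x, z)              # length of the horizontal part
    return math.degrees(math.atan2(y, horizontal))

print(head_pitch_deg((0.0, -0.5, 0.866)))      # roughly -30 degrees (downward)
```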
DISCUSSION
Our research is the first to explore AR visualizations for peo-
ple with low vision in the context of stair navigation. Our
studies demonstrate the effectiveness of our designs with
both projection-based AR and smartglasses. We found that
our visualizations on both platforms substantially increased people’s psychological security, making them feel more confident and safe when walking on stairs. Moreover, the visualizations on
projection-based AR showed a trend towards significantly
reducing PLV’s walking time on stairs.
Participants converged on some common visualization choices on each platform. For projection-based AR, the stable thick yellow highlights on the first and last stairs were the most preferred (7/12). For highlights on the middle stairs, most participants (7/12) preferred the most visible yellow highlights over blue or dull yellow ones. For the HoloLens, most participants (6/12) chose the combination of Glow and Beep. Unlike prior research, which showed that PLV had very different preferences for visual augmentations [84, 85], our study revealed some common preferences for stair navigation among PLV across different visual abilities. These findings can potentially lay a foundation for future visualization designs for stair navigation and for navigation systems more generally.
We compared users’ experiences with the visualizations on both platforms, since seven participants took part in both studies. Most PLV (e.g., P10, P12) felt that the visualizations on projection-based AR were easier to use than those on the smartglasses. The highlights on projection-based AR were intuitive to perceive because they directly enhanced the stair edges that participants were looking for. Meanwhile, the design on
smartglasses, especially Glow and Beep, proposed a new
way to perceive stairs: it divided the stairs into different
stages, providing only immediate information about the cur-
rent stair without a preview of what’s to come. This new stair
perception method increased participants’ cognitive load, be-
cause they had to associate the design with the physical
stairs, making them more cautious. This could be one major
reason why PLV’s walking time did not improve when using
smartglasses. P12 compared his experiences with the two
platforms, “The first experience [projection-based AR] gave
me a better sense of a direction as to where this was go-
ing…But the [glow] was like floating over the steps, and they
didn’t stay fixed in place. That was one big difference. I like
the light fixed on the step.”
While our study focused on the design and evaluation of the AR visualizations, we also discuss the technical feasibility of and challenges for AR stair navigation systems. Implementing such a system could be challenging. For a task as dangerous as stair navigation, the system must be highly accurate and fast, since a small error could
lead to severe consequences (e.g., a slight shift of the edge
highlight could make the user fall). The system also needs to
tolerate the user’s body (e.g., hand, head) movement when
walking on stairs, which requires a tradeoff between speed
and stabilization. While many stair detection methods have
been presented in prior research [20, 58], algorithms that lo-
cate the exact position of each stair with high speed and ac-
curacy should be investigated and tested to support the stair
visualization systems we designed for PLV.
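As one example of the kind of processing such a system might build on, below is a minimal sketch (a toy illustration on our part, not the detection approach of the cited work) that flags candidate stair edges as abrupt depth jumps along the vertical axis of a depth image from a wearable depth camera.

```python
# A minimal sketch: flag rows of a depth image where the column-median depth
# jumps sharply, as candidate stair edges. Thresholds and layout are assumed.
import numpy as np

def candidate_stair_edges(depth, jump_m=0.12):
    """depth: HxW array of distances in meters (NaN where invalid)."""
    profile = np.nanmedian(depth, axis=1)   # one representative depth per row
    jumps = np.abs(np.diff(profile))        # row-to-row change in depth
    return np.where(jumps > jump_m)[0]      # rows with a large discontinuity

# Synthetic example: four treads, each about 0.3 m farther than the previous.
rows = np.repeat([1.0, 1.3, 1.6, 1.9], 60)
depth = np.tile(rows[:, None], (1, 320))
print(candidate_stair_edges(depth))         # -> rows ~59, 119, 179
```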
The system implementation should also take into account
different real-world situations. Our evaluation was con-
ducted indoors, with no other people around. However, the
real world could be much more complicated, raising a variety of challenges. For example, AR visualizations could be less visible outdoors, crowded stairs could reduce the accuracy of stair recognition because stair edges are occluded,
and the projected highlights may also disturb other people.
In future work, we will address these real-world challenges when developing AR stair navigation systems. For example, besides recognizing stairs with computer vision, we will consider instrumenting the environment (e.g., using RFID) to enable accurate and fast stair recognition in complex environments. We will also add face detection to avoid projecting onto bystanders’ faces.
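A minimal sketch of such a face-detection step, assuming OpenCV’s bundled Haar cascade and an already-aligned camera/projector pair (both assumptions of ours, not the planned implementation), might look like this:

```python
# A minimal sketch (our assumption, not the planned implementation): black out
# projector pixels that overlap detected faces before projecting highlights.
# Assumes the camera frame and projector frame are already spatially aligned.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mask_faces(projection_frame, camera_frame):
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        projection_frame[y:y + h, x:x + w] = 0   # do not project onto faces
    return projection_frame
```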
As with any study, ours had some limitations. First, the HoloLens’s weight strongly detracted from PLV’s experience, which may have influenced our results. Future studies should refine and evaluate the design on more lightweight smartglasses. Second, because of the extreme head pitch required to view the closest stairs, caused by the small vertical FOV of the HoloLens, we designed visualizations in the users’ central vision instead of adding highlights to the stairs in our smartglasses prototype. More data could be collected to quantify the head pitch angle and determine an effective vertical FOV that would allow PLV to use stair highlights with a comfortable head pose (a rough geometric sketch follows this paragraph). Third, we asked participants to score
their feeling of psychological security, but these results could
be influenced by a novelty effect. Future research should
consider more objective measurements (e.g., biometrics) to
evaluate psychological security.
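Regarding the second limitation, the required head pitch can be sketched with simple trigonometry: for a stair roughly drop_m below eye level and dist_m ahead, a display with vertical FOV fov_deg centered on the forward gaze forces a downward pitch of at least atan(drop_m/dist_m) − fov_deg/2. The values below are rough assumptions for illustration, not measurements from the study.

```python
# A rough geometric illustration (assumed values, not study measurements) of
# the head pitch needed to keep a stair highlight inside the display's FOV.
import math

def min_head_pitch_deg(drop_m, dist_m, fov_deg):
    depression = math.degrees(math.atan2(drop_m, dist_m))  # angle below horizon
    return max(0.0, depression - fov_deg / 2.0)            # pitch beyond half-FOV

# e.g., a stair ~1.4 m below eye level and ~0.6 m ahead, with a ~17 degree
# vertical FOV (roughly that of the HoloLens we used):
print(min_head_pitch_deg(1.4, 0.6, 17.0))  # about 58 degrees of downward pitch
```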
CONCLUSIONS
In this paper, we designed AR visualizations to facilitate stair
navigation for people with low vision. We designed visuali-
zations (and sonification) for both projection-based AR and
smartglasses based on the different characteristics of these
platforms. We evaluated the design on each platform with 12
participants, finding that the visualizations on both platforms increased participants’ psychological security, making them feel safer and more confident when walking on stairs. Moreover, our design for projection-based AR showed a trend towards significantly reducing participants’ walking time on stairs.
ACKNOWLEDGMENTS
This work was supported in part by the National Science
Foundation under grant no. IIS-1657315. Feiner was funded
in part by the National Science Foundation under grant no.
IIS-1514429.
REFERENCES
[1] Abu-Faraj, Z.O. et al. 2012. Design and development
of a prototype rehabilitative shoes and spectacles for
the blind. 2012 5th International Conference on
Biomedical Engineering and Informatics, BMEI 2012
(2012), 795–799.
[2] Aguerrevere, D. et al. 2004. Portable 3D Sound / Sonar
Navigation System for Blind Individuals. 2nd LACCEI
Int. Latin Amer. Caribbean Conf. (2004).
[3] Ahmetovic, D. et al. 2017. Achieving Practical and
Accurate Indoor Navigation for People with Visual
Impairments. Proceedings of the 14th Web for All
Conference on The Future of Accessible Work - W4A
’17 (New York, New York, USA, 2017), 1–10.
[4] Ahmetovic, D. et al. 2016. NavCog: A Navigational
Cognitive Assistant for the Blind. Proceedings of the
18th International Conference on Human-Computer
Interaction with Mobile Devices and Services (2016),
90–99.
[5] Archea, J.C. and Clin Geriatr, M. 1985. Environmental
Factors Associated with Stair Accidents by the Elderly.
Clinics in geriatric medicine. 1, 3 (Aug. 1985), 555–
569.
[6] Berger, S. and Porell, F. 2008. The Association
Between Low Vision and Function. Journal of Aging
and Health. 20, 5 (Aug. 2008), 504–525.
DOI:https://doi.org/10.1177/0898264308317534.
[7] Bhowmick, A. et al. 2014. IntelliNavi: Navigation for
Blind Based on Kinect and Machine Learning.
Springer, Cham. 172–183.
[8] Bibby, S.A. et al. 2007. Vision and self-reported
mobility performance in patients with low vision.
Clinical and Experimental Optometry. 90, 2 (Mar.
2007), 115–123. DOI:https://doi.org/10.1111/j.1444-
0938.2007.00120.x.
[9] Bimber, O. and Frohlich, B. 2002. Occlusion shadows:
Using projected light to generate realistic occlusion
effects for view-dependent optical see-through
displays. Proceedings - International Symposium on
Mixed and Augmented Reality, ISMAR 2002 (2002),
186–198.
[10] Black, A.A. et al. 1997. Mobility Performance with
Retinitis Pigmentosa. Clinical and experimental
optometry. 80, 1 (Jan. 1997), 1–12.
DOI:https://doi.org/10.1111/j.1444-
0938.1997.tb04841.x.
[11] Blindness and Visual Impairment: 2017.
http://www.who.int/news-room/fact-
sheets/detail/blindness-and-visual-impairment.
Accessed: 2018-09-14.
[12] Blum, J.R. et al. 2011. What’s around me? Spatialized
audio augmented reality for blind users with a
smartphone. International Conference on Mobile and
Ubiquitous Systems: Computing, Networking, and
Services. (2011), 49–62.
[13] Ivers, R.Q. et al. 1998. Visual Impairment and
Falls in Older Adults: The Blue Mountains Eye Study.
Journal of the American Geriatrics Society. 46, 1 (Jan.
1998), 58–64. DOI:https://doi.org/10.1111/j.1532-
5415.1998.tb01014.x.
[14] Bouzit, M. et al. 2004. Tactile feedback navigation
handle for the visually impaired. IMECE2004 (Jan.
2004), 1–7.
[15] Campbell, M. et al. 2014. Where’s My Bus Stop?
Supporting Independence of Blind Transit Riders with
StopInfo. ASSETS ’14 Proceedings of the 16th
international ACM SIGACCESS conference on
Computers & accessibility. (2014), 11–18.
DOI:https://doi.org/10.1145/2661334.2661378.
[16] Cao, X. and Balakrishnan, R. 2006. Interacting with
dynamically defined information spaces using a
handheld projector and a pen. the 19th annual ACM
symposium on User interface software and technology
(2006), 225.
[17] Capi, G. and Toda, H. 2011. A new robotic system to
assist visually impaired people. IEEE International
Workshop on Robot and Human Interactive
Communication (2011), 259–263.
[18] Choi, J. and Kim, G.J. 2013. Usability of one-handed
interaction methods for handheld projection-based
augmented reality. Personal and Ubiquitous
Computing. 17, 2 (Feb. 2013), 399–409.
DOI:https://doi.org/10.1007/s00779-011-0502-1.
[19] Cimarolli, V.R. et al. 2012. Challenges faced by older
adults with vision loss: a qualitative study with
implications for rehabilitation. Clinical Rehabilitation.
26, 8 (Aug. 2012), 748–757.
DOI:https://doi.org/10.1177/0269215511429162.
[20] Cloix, S. et al. 2016. Low-power depth-based
descending stair detection for smart assistive devices.
Eurasip Journal on Image and Video Processing. 2016,
1 (2016). DOI:https://doi.org/10.1186/s13640-016-
0133-6.
[21] Common Types of Low Vision:
http://www.aoa.org/patients-and-public/caring-for-
your-vision/low-vision/common-types-of-low-
vision?sso=y. Accessed: 2015-07-07.
[22] Cox, A. et al. 2005. Visual impairment in elderly
patients with hip fracture: causes and associations. Eye
(London, England). 19, 6 (Jun. 2005), 652–656.
DOI:https://doi.org/10.1038/sj.eye.6701610.
[23] Cummings, S.R. et al. 1995. Risk Factors for Hip
Fracture in White Women. New England Journal of
Medicine. 332, 12 (Mar. 1995), 767–774.
DOI:https://doi.org/10.1056/NEJM199503233321202.
[24] Dakopoulos, D. and Bourbakis, N.G. 2010. Wearable
Obstacle Avoidance Electronic Travel Aids for Blind:
A Survey. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews). 40, 1
(Jan. 2010), 25–35.
DOI:https://doi.org/10.1109/TSMCC.2009.2021255.
[25] Dougherty, B.E. et al. 2011. Abandonment of low-
vision devices in an outpatient population. Optometry
and vision science : official publication of the
American Academy of Optometry. 88, 11 (Nov. 2011),
1283–7.
DOI:https://doi.org/10.1097/OPX.0b013e31822a61e7.
[26] Everingham, M.R. et al. 1999. Head-mounted mobility
aid for low vision using scene classification techniques.
International Journal of Virtual Reality. 3, (1999), 3–
12.
[27] Fiannaca, A. et al. 2014. Headlock: a Wearable
Navigation Aid that Helps Blind Cane Users Traverse
Large Open Spaces. Proceedings of ASSETS ’14
(2014), 19–26.
[28] Filipe, V. et al. 2012. Blind Navigation Support System
based on Microsoft Kinect. Procedia Computer
Science. 14, (Jan. 2012), 94–101.
DOI:https://doi.org/10.1016/J.PROCS.2012.10.011.
[29] Hara, K. et al. 2015. Improving Public Transit
Accessibility for Blind Riders by Crowdsourcing Bus
Stop Landmark Locations with Google Street View:
An Extended Analysis. ACM Transactions on
Accessible Computing. 6, 2 (2015), 1–23.
DOI:https://doi.org/10.1145/2717513.
[30] Harms, H. et al. 2015. Detection of ascending stairs
using stereo vision. IEEE International Conference on
Intelligent Robots and Systems. 2015-Decem, (2015),
2496–2502.
DOI:https://doi.org/10.1109/IROS.2015.7353716.
[31] Harwood, R.H. et al. 2005. Falls and health status in
elderly women following first eye cataract surgery: a
randomised controlled trial. The British journal of
ophthalmology. 89, 1 (Jan. 2005), 53–9.
DOI:https://doi.org/10.1136/bjo.2004.049478.
[32] Harwood, R.H. 2001. Visual problems and falls. Age
and Ageing. 30, SUPPL. 4 (Nov. 2001), 13–18.
DOI:https://doi.org/10.1093/ageing/30.suppl_4.13.
[33] Hicks, S.L. et al. 2013. A Depth-Based Head-Mounted
Visual Display to Aid Navigation in Partially Sighted
Individuals. PLoS ONE. 8, 7 (Jul. 2013), e67695.
DOI:https://doi.org/10.1371/journal.pone.0067695.
[34] Huang, H.-C. et al. 2015. An Indoor Obstacle
Detection System Using Depth Information and Region
Growth. Sensors. 15, 10 (2015), 27116–27141.
DOI:https://doi.org/10.3390/s151027116.
[35] Huang, J. et al. 2019. An augmented reality sign-
reading assistant for users with reduced vision. PLOS
ONE. 14, 1 (Jan. 2019), e0210630.
DOI:https://doi.org/10.1371/journal.pone.0210630.
[36] Hub, A. et al. Augmented Indoor Modeling for
Navigation Support for the Blind.
[37] Image Targets:
https://library.vuforia.com/articles/Training/Image-
Target-Guide. Accessed: 2019-07-04.
[38] Ivanov, R. 2010. Indoor navigation system for visually
impaired. The 11th International Conference on
Computer Systems and Technologies and Workshop for
PhD Students in Computing on International
Conference on Computer Systems and Technologies
(2010), 143.
[39] Kanwal, N. et al. 2015. A Navigation System for the
Visually Impaired: A Fusion of Vision and Depth
Sensor. Applied Bionics and Biomechanics. 2015,
(Aug. 2015), 1–16.
DOI:https://doi.org/10.1155/2015/479857.
[40] Khambadkar, V. and Folmer, E. 2013. GIST: a
Gestural Interface for Remote Nonvisual Spatial
Perception. the 26th annual ACM symposium on User
interface software and technology (2013), 301–310.
[41] Kinateder, M. et al. 2018. Using an Augmented Reality
Device as a Distance-based Vision Aid—Promise and
Limitations. Optometry and Vision Science. 95, 9
(2018), 727.
DOI:https://doi.org/10.1097/OPX.0000000000001232.
[42] Kiyokawa, K. et al. An optical see-through display for
mutual occlusion of real and virtual environments.
Proceedings IEEE and ACM International Symposium
on Augmented Reality (ISAR 2000) 60–67.
[43] Kiyokawa, K. et al. 2003. An Occlusion-Capable
Optical See-through Head Mount Display for
Supporting Co-located Collaboration. Proceedings of
the 2nd IEEE/ACM International Symposium on Mixed
and Augmented Reality (2003), 133.
[44] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
function, visual attention, and mobility performance in
low vision. Optometry and Vision Science. 85, 11
(Nov. 2008), 1049–1056.
DOI:https://doi.org/10.1097/OPX.0b013e31818b949.
[45] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
Function, Visual Attention, and Mobility Performance
in Low Vision. Optometry and vision science : official
publication of the American Academy of Optometry.
85, 11 (2008), 1049–1056.
DOI:https://doi.org/10.1097/OPX.0b013e31818b949.
[46] Legge, G.E. et al. 2013. Indoor Navigation by People
with Visual Impairment Using a Digital Sign System.
PLoS ONE. 8, 10 (Oct. 2013), e76783.
DOI:https://doi.org/10.1371/journal.pone.0076783.
[47] Legge, G.E. et al. 2010. Visual accessibility of ramps
and steps. Journal of Vision. 10, 11 (Sep. 2010), 8–8.
DOI:https://doi.org/10.1167/10.11.8.
[48] Liu, H. et al. 2015. iSee: obstacle detection and
feedback system for the blind. Proceedings of the 2015
ACM International Joint Conference on Pervasive and
Ubiquitous Computing and Proceedings of the 2015
ACM International Symposium on Wearable
Computers - UbiComp ’15. (2015), 197–200.
DOI:https://doi.org/10.1145/2800835.2800917.
[49] Magic Leap: https://www.magicleap.com/magic-leap-
one.
[50] Mascetti, S. et al. 2016. ZebraRecognizer: Pedestrian
crossing recognition for people with visual impairment
or blindness. Pattern Recognition. 60, (Dec. 2016),
405–419.
DOI:https://doi.org/10.1016/J.PATCOG.2016.05.002.
[51] McLeod, P. et al. 1988. Visual Search for a
Conjunction of Movement and Form is parallel.
Nature. 336, (1988), 403–405.
[52] Meers, S. and Ward, K. 2005. A Substitute Vision
System for Providing 3D Perception and GPS
Navigation via Electro-Tactile Stimulation.
International Conference on Sensing Technology.
November (Nov. 2005), 551–556.
[53] Meijer, P.B.L. 1992. An experimental system for
auditory image representations. IEEE Transactions on
Biomedical Engineering. 39, 2 (1992), 112–121.
DOI:https://doi.org/10.1109/10.121642.
[54] Menikdiwela, M.P. et al. 2013. Haptic based walking
stick for visually impaired people. 2013 International
conference on Circuits, Controls and Communications
(CCUBE) (Dec. 2013), 1–6.
[55] Microsoft HoloLens | Official Site:
https://www.microsoft.com/microsoft-hololens/en-us.
Accessed: 2015-07-07.
[56] Miyasike-daSilva, V. et al. 2019. A role for the lower
visual field information in stair climbing. Gait &
Posture. 70, (May 2019), 162–167.
DOI:https://doi.org/10.1016/J.GAITPOST.2019.02.033.
[57] Munoz, R. et al. 2016. Depth-aware indoor staircase
detection and recognition for the visually impaired.
2016 IEEE international conference on multimedia &
expo workshops (ICMEW) (2016), 1–6.
[58] Murakami, S. et al. 2014. Study on stairs detection
using RGB-depth images. 2014 Joint 7th International
Conference on Soft Computing and Intelligent Systems,
SCIS 2014 and 15th International Symposium on
Advanced Intelligent Systems, ISIS 2014. (2014), 1186–
1191. DOI:https://doi.org/10.1109/SCIS-
ISIS.2014.7044705.
[59] Perez-Yus, A. et al. 2015. Stair Detection and
Modelling from a Wearable Depth Camera. (2015),
2015.
[60] Perez-Yus, A. et al. 2017. Stairs detection with
odometry-aided traversal from a wearable RGB-D
camera. Computer Vision and Image Understanding.
154, (2017), 192–205.
DOI:https://doi.org/10.1016/j.cviu.2016.04.007.
[61] Pinhanez, C. 2001. The Everywhere Displays
Projector: A Device to Create Ubiquitous Graphical
Interfaces. International conference on ubiquitous
computing. Springer, Berlin, Heidelberg. 315–331.
[62] Priyadarshini, A.R. 2014. Dual Objective Based
Navigation Assistance to the Blind and Visually
Impaired. International Journal of Innovative Research
in Computer and Communication Engineering. 2, 5
(2014), 4335–4342.
[63] Rapp, S. et al. 2004. Spotlight Navigation: Interaction
with a Handheld Projection Device. Advances in
Pervasive Computing (2004), 397–400.
[64] van Rheede, J.J. et al. 2015. Improving mobility
performance in low vision with a distance-based
representation of the visual scene. Investigative
Ophthalmology and Visual Science. 56, 8 (2015),
4802–4809. DOI:https://doi.org/10.1167/iovs.14-
16311.
[65] Salber, D. and Coutaz, J. 1993. Applying the Wizard of
Oz Technique to the Study of Multimodal Systems.
Proceedings of EWHCI. (1993), 219–230.
DOI:https://doi.org/10.1007/3-540-57433-6_51.
[66] Saldana, J. 2010. The Coding Manual for Qualitative
Researchers. The qualitative report. 15, 3 (2010), 754–
760.
DOI:https://doi.org/10.1017/CBO9781107415324.004.
[67] Samsung I8530 Galaxy Beam:
https://www.gsmarena.com/samsung_i8530_galaxy_be
am-4566.php. Accessed: 2019-03-26.
[68] Shahrabadi, S. et al. 2013. Detection of indoor and
outdoor stairs. Iberian Conference on Pattern
Recognition and Image Analysis (2013), 847–854.
[69] Shinohara, K. and Wobbrock, J.O. 2011. In the shadow
of misperception: assistive technology use and social
interactions. Proceedings of the 2011 annual
conference on Human factors in computing systems
(2011), 705–714.
[70] Shoval, S. et al. 1994. Mobile robot obstacle avoidance
in a computerized travel aid for the blind. Proceedings
of the 1994 IEEE International Conference on Robotics
and Automation (1994), 2023–2028.
[71] Shoval, S. et al. 2003. NavBelt and the GuideCane.
IEEE Robotics and Automation Magazine. 10, 1 (Mar.
2003), 9–20.
DOI:https://doi.org/10.1109/MRA.2003.1191706.
[72] Summary Health Statistics for the U.S. Population:
National Health Interview Survey, 2004: 2004.
http://www.cdc.gov/nchs/data/series/sr_10/sr10_229.p
df. Accessed: 2015-05-03.
[73] Szpiro, S. et al. 2016. Finding a store, searching for a
product: a study of daily challenges of low vision
people. Proceedings of the 2016 ACM International
Joint Conference on Pervasive and Ubiquitous
Computing. (2016), 61–72.
DOI:https://doi.org/10.1145/2971648.2971723.
[74] Szpiro, S. et al. 2016. How People with Low Vision
Access Computing Devices: Understanding Challenges
and Opportunities. Proceedings of the 18th
International ACM SIGACCESS Conference on
Computers and Accessibility (2016), 171–180.
[75] Tjan, B.S. et al. 2005. Digital Sign System for Indoor
Wayfinding for the Visually Impaired. 2005 IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR’05) - Workshops, 30–30.
[76] Ulrich, I. and Borenstein, J. 2001. The GuideCane-
applying mobile robot technologies to assist the
visually impaired. IEEE Transactions on Systems,
Man, and Cybernetics - Part A: Systems and Humans.
31, 2 (Mar. 2001), 131–136.
DOI:https://doi.org/10.1109/3468.911370.
[77] Vera, P. et al. 2014. A smartphone-based virtual white
cane. Pattern Analysis and Applications. 17, 3 (Aug.
2014), 623–632. DOI:https://doi.org/10.1007/s10044-
013-0328-8.
[78] Wahab, M.H.A. et al. 2011. Smart Cane: Assistive
Cane for Visually-impaired People. IJCSI International
Journal of Computer Science Issues. 8, 4 (2011), 21–
27.
[79] Wang, S. and Tian, Y. 2012. Detecting stairs and
pedestrian crosswalks for the blind by RGBD camera.
2012 IEEE International Conference on Bioinformatics
and Biomedicine Workshops (Oct. 2012), 732–739.
[80] West, C.G. et al. 2002. Is Vision Function Related to
Physical Functional Ability in Older Adults? Journal of
the American Geriatrics Society. 50, 1 (Jan. 2002),
136–145. DOI:https://doi.org/10.1046/j.1532-
5415.2002.50019.x.
[81] What Are Low Vision Optical Devices?
http://www.visionaware.org/info/your-eye-
condition/eye-health/low-vision/low-vision-optical-
devices/1235. Accessed: 2015-10-11.
[82] Willis, K.D.D. and Poupyrev, I. 2011. MotionBeam: A
Metaphor for Character Interaction with Handheld
Projectors. the SIGCHI Conference on Human Factors
in Computing Systems (2011), 1031–1040.
[83] Yantis, S. and Jonides, J. 1990. Abrupt visual onsets
and selective attention: Voluntary versus automatic
allocation. Journal of Experimental Psychology:
Human Perception and Performance. 16, 1 (1990),
121–134.
[84] Zhao, Y. et al. 2016. CueSee : Exploring Visual Cues
for People with Low Vision to Facilitate a Visual
Search Task. International Joint Conference on
Pervasive and Ubiquitous Computing (2016), 73–84.
[85] Zhao, Y. et al. 2015. ForeSee: A Customizable Head-
Mounted Vision Enhancement System for People with
Low Vision. The 17th International ACM SIGACCESS
Conference on Computers and Accessibility. (2015),
239–249.
DOI:https://doi.org/10.1145/2700648.2809865.
[86] Zhao, Y. et al. 2018. “It Looks Beautiful but Scary:”
How Low Vision People Navigate Stairs and Other
Surface Level Changes. Proceedings of the 20th
International ACM SIGACCESS Conference on
Computers and Accessibility - ASSETS ’18 (New York,
New York, USA, 2018), 307–320.
[87] Zhao, Y. et al. 2017. Understanding Low Vision
People’s Visual Perception on Commercial Augmented
Reality Glasses. Proceedings of the 2017 CHI
Conference on Human Factors in Computing Systems.
(2017), 4170–4181.
DOI:https://doi.org/10.1145/3025453.3025949.
[88] Zhou, F. et al. 2008. Trends in augmented reality
tracking, interaction and display: A review of ten years
of ISMAR. Proceedings of the 7th IEEE International
Symposium on Mixed and Augmented Reality 2008,
ISMAR 2008 (Sep. 2008), 193–202.