Designing AR Visualizations to Facilitate Stair Navigation
for People with Low Vision
Yuhang Zhao1, Elizabeth Kupferstein1, Brenda Veronica Castro1,
Steven Feiner2, Shiri Azenkot1
1Jacobs Technion-Cornell Institute, Cornell Tech,
Cornell University, New York, NY, USA
{yz769, ek544, bvc5, shiri.azenkot}
2Department of Computer Science, Columbia
University, New York, NY, USA
ABSTRACT
Navigating stairs is a dangerous mobility challenge for people with low vision, who have a visual impairment that falls short of blindness. Prior research contributed systems for stair navigation that provide audio or tactile feedback, but people with low vision have usable vision and don’t typically use nonvisual aids. We conducted the first exploration of augmented reality (AR) visualizations to facilitate stair navigation for people with low vision. We designed visualizations for a projection-based AR platform and smartglasses, considering the different characteristics of these platforms. For projection-based AR, we designed visual highlights that are projected directly on the stairs. In contrast, for smartglasses that have a limited vertical field of view, we designed visualizations that indicate the user’s position on the stairs, without directly augmenting the stairs themselves. We evaluated our visualizations on each platform with 12 people with low vision, finding that the visualizations for projection-based AR increased participants’ walking speed. Our designs on both platforms largely increased participants’ self-reported psychological security.
Author Keywords
Accessibility; augmented reality; low vision; visualization.
ACM Classification Keywords
Human-centered computing~Mixed / augmented reality; Accessibility technologies.
INTRODUCTION
As many as 1.2 billion people worldwide have low vision, a visual impairment that cannot be corrected with eyeglasses or contact lenses [11, 72]. Unlike people who are blind, people with low vision (PLV) have functional vision that they use extensively in daily activities [73, 74]. Low vision can be attributed to a variety of diseases (e.g., glaucoma, diabetic retinopathy) and affects many visual functions including visual acuity, contrast sensitivity, and peripheral vision [21].
Stair navigation is one of the most dangerous mobility challenges for PLV [5]. With reduced depth perception and peripheral vision [45, 56], PLV have difficulty detecting stairs or perceiving the exact location of stair edges [86]. As a result, PLV experience higher rates of falls and injuries than their typically-sighted counterparts [5, 13].
Despite the difficulty they experience, PLV use their residual vision extensively when navigating stairs [73]. Zhao et al. [86] found that they looked at contrast stripes (i.e., contrasting marking stripes on stair treads) to perceive the exact location of stair edges; some also observed the trend of the railing to understand the overall structure of a staircase. However, sometimes stairs do not have contrast stripes, and even when they do, their stripes are often not accessibly designed; for example, stripes may have low contrast with the stairs or be too thin to detect [86]. Today, the only known tool to assist with stair navigation is the white cane, which many PLV prefer not to use [86]. Thus, there is a gap in tools that support PLV in the basic task of stair navigation.
Advances in augmented reality (AR) present a unique opportunity to address this problem. By automatically recognizing the environment with computer vision, AR technology has the potential to generate corresponding visual and auditory feedback that helps people better perceive the environment and navigate it more safely and quickly.
Our research explores AR visualization designs to facilitate stair navigation by leveraging PLV’s residual vision. Designing visualizations for PLV is challenging [84, 85], especially for stair navigation, a dangerous mobility task. On one hand, the visualizations should be easily perceivable by PLV. A visualization that a sighted person can easily see (e.g., a small arrow) may not be noticeable by PLV: it may be too small for them to see or outside their visual field [87]. On the other hand, the visualizations should not be distracting. An extremely large, bright, or animated visualization can distract PLV and hinder their ability to see. This could be dangerous in the context of stair navigation. We sought to design effective visualizations for PLV, which balance visibility and distraction, while providing alternative choices to support a wide range of visual abilities.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
UIST '19, October 20–23, 2019, New Orleans, LA, USA
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6816-2/19/10…$15.00

Figure 1: Our visualizations for (a) projection-based AR and (b) smartglasses to facilitate stair navigation for PLV.
We designed visualizations on two AR platforms that can generate immersive virtual content in the physical environment: projection-based AR and smartglasses. Our designs considered the different characteristics of the two platforms: (1) For projection, which can augment a large physical space, we designed visual highlights with different patterns that are directly projected onto the stairs to enhance their visibility (Figure 1a). (2) For smartglasses that have a limited vertical field of view (FOV), we designed visualizations in the user’s central FOV to indicate the user’s exact position on the stairs (Figure 1b).
We evaluated our visualizations on each platform with 12 PLV. We found that the visualizations on both platforms increased participants’ self-reported psychological security. Our visualizations also changed participants’ behaviors. Many participants didn’t stare down at the stairs when walking with our visualizations; some stopped holding the railing. Moreover, the visualizations on the projection-based AR platform showed a trend toward significantly reducing participants’ walking time.
In summary, we contribute the first exploration of AR visualizations to facilitate stair navigation for PLV. Our evaluations demonstrate the effectiveness of our visualizations and provide insights for the design of AR visualizations for PLV that support other tasks as well.
RELATED WORK
Stair Navigation Experiences of PLV
Mobility is critical but challenging for PLV. Many studies have shown that reduced visual functions hinder mobility [6, 10, 19, 44, 80] and increase the risk of mobility-related accidents [5, 13, 22, 23, 31, 32]. For example, Leat and Lovie-Kitchin [45] found that visual field loss reduced walking speed, while reduced visual acuity and contrast sensitivity impacted distance and depth perception.
Stair navigation is one of the most dangerous mobility challenges for PLV [5]. Legge et al. [47] found that failing to detect descending stairs was more dangerous and had a higher correlation with falls than failing to see obstacles or ascending stairs. West et al. [80] measured 782 older adults’ visual abilities and collected self-reported mobility limitations. They found that people with low visual acuity and low contrast sensitivity reported difficulty walking up and down stairs without help. Bibby et al. [8] also surveyed 30 PLV about their mobility performance, finding that PLV reported greater difficulty navigating curbs and descending stairs.
In the human–computer interaction field, researchers have also explored the challenges that PLV face during navigation, including navigating stairs. Szpiro et al. [73] observed 11 PLV’s behaviors as they navigated to a nearby pharmacy. They found that PLV struggled but used their vision extensively, and lighting conditions affected their ability to notice obstacles and uneven pavement on the ground. Zhao et al. [86] conducted a more in-depth study observing 14 PLV walking on different sets of stairs indoors and outdoors. They found that most participants relied on their vision (e.g., looking at contrast stripes) to navigate stairs. Besides the white cane, which only four participants used, no technology was used to assist with this task. Zhao et al.’s study emphasized the need for tools that facilitate stair navigation for PLV.
Safe Navigation for Blind and PLV
Mobility problems for people who are blind and PLV can be
divided into two categories: wayfinding (i.e., the global
problem of planning and following routes from place to
place) and safe navigation (i.e., the local problem of taking
the next step safely without bumping into things or tripping)
[75]. Most prior research in this general area has focused on
wayfinding, both indoors [3, 27, 35, 38, 46, 62] and outdoors
[4, 12, 15, 29, 50]. Yet walking up and down stairs falls into
the latter category, which has received less attention.
Safe Navigation for Blind People
To facilitate safe navigation, researchers designed obstacle
avoidance systems for people who are blind (e.g., [1, 24, 48,
77]). By detecting obstacles with cameras or range finders,
these systems generated auditory [2, 39, 40, 53, 70, 78] or
tactile feedback [14, 52, 54, 71, 76] to notify blind users of
obstacles and their distance.
Since perceiving stairs is essential for safe navigation, many obstacle avoidance systems also detected stairs [7, 17, 28, 34]. For example, Bhowmick et al. [7] designed IntelliNavi, a wearable navigation system that combined a Kinect and an earphone. With SURF descriptors and an SVM classifier, the system recognized walls, stairs, and other obstacles and generated audio messages to safely guide a blind user through and around these features. Capi and Toda [17] embedded depth sensors and a PC into a wheeled walker. With the depth sensors recognizing the environment, the system informed blind users of the existence and position of obstacles, stairs, and curbs using verbal directions or beeps. Moreover, Hub et al. [36] presented an (unimplemented) concept for an indoor navigation system that provided more specific information about stairs, such as the number of stairs and the position of the railing.
In addition to navigation systems, researchers have also proposed stair detection algorithms [20, 30, 57–60, 68, 79]. For example, Murakami et al. [58] proposed a method that uses an RGB-D camera to detect stairs. Cloix et al. [20] designed an algorithm that detected descending stairs with a passive stereo camera, achieving a 91% recognition rate in real time. Perez-Yus et al. [60] proposed a real-time recognition method that detected, located, and parametrized stairs with a wearable RGB-D camera, and could even work when the stairs were partially occluded.
This prior research addressed only auditory feedback for people who are blind, overlooking PLV’s preference to use their remaining vision. In contrast, our work addresses this gap by designing AR visualizations to assist PLV in navigating stairs.
Safe Navigation for PLV
There has been little research on navigation systems for low
vision. No work has specifically focused on stairs.
In terms of low-tech tools, some PLV use optical devices to enhance their visual abilities. Bioptics, monoculars, telescopes, and binoculars are used for recognizing signs and obstacles at a distance [81]. Some PLV occasionally use prisms that are ground into glasses to expand their FOV. However, these specialized tools often stigmatize users in social settings [69]; thus, people avoid using them or abandon them altogether [25]. Some PLV also use a white cane, especially at night and in unfamiliar places, but many prefer not to use it because it exposes their disability [86].
Some research has contributed obstacle avoidance systems for PLV [26, 33, 41, 64]. Everingham et al. [26] designed a neural-network classification algorithm for a head-worn device that segmented scenes rendered in front of users’ eyes and recolored objects to make obstacles more visible. Similarly, Kinateder et al. [41] developed a HoloLens application that recolored the scene with high contrast colors for PLV based on the spatial information from the HoloLens. Besides recoloring the scenes, Hicks et al. [28] and Rheede et al. [64] built a real-time head-worn LED display with a depth camera to aid navigation by detecting the distance to nearby objects and changing the brightness of the objects to indicate their distances. To our knowledge, our research is the first attempt to facilitate stair navigation for PLV.
We sought to facilitate stair navigation by augmenting the stairs with AR visualizations. In general, there are three types of AR displays: video see-through, optical see-through, and projection [88]. For each display type, devices exist (either commercially or as research prototypes) with different form factors and device characteristics. For example, a mobile device can be used as a video see-through AR platform. It is hand-held with a limited FOV. Considering the different visual abilities of PLV and our new use case for AR, we did not know a priori what AR platform would be most appropriate for the stair navigation task.
To determine what platforms would be appropriate, we began by conducting a formative study with 11 PLV (7 female, 4 male; age: 28–70, mean = 40) to evaluate prototype visualizations for a smartphone. A smartphone is a widely used AR device, so it would be a practical choice with potential for high immediate impact. We presented the real-time captured image of the stairs on the phone screen and enhanced the stair edges with yellow highlights. However, participants had difficulty perceiving the visualizations on the hand-held phone screen. They switched their gaze between the phone and the real stairs, hindering their safety during motion. All participants said they would prefer an immersive experience where visualizations are seamlessly incorporated into the physical environment.
Based on the formative study, we narrowed down our target platforms to immersive AR platforms, specifically (1) hand-held projection-based AR, and (2) optical see-through smartglasses. These platforms would not require the user to switch their gaze or hinder their ability to perceive motion [88]. We designed and evaluated visualizations for both platforms, given that each platform has its own strength: projection-based AR can augment large physical surfaces but projects content publicly, which may be better suited to private places with few people (e.g., home, workspace); meanwhile, smartglasses present information only to the user, which may be better for crowded public places (e.g., subway stations).
We first explored the design space of hand-held projection-based AR, which combines a camera that recognizes the environment and a projector that projects visual content into that environment [61]. This platform has potential to facilitate mobility because it can project over a relatively large area [88] and provide visual augmentations in people’s peripheral vision, which has been shown to be important for stair navigation [56].
Although there are no popular commercial devices on the market, researchers have prototyped different hand-held projection-based AR platforms [16, 18, 63, 82]. With a growing number of smartphones that have embedded depth sensors (e.g., iPhone XR, Samsung Galaxy S10) and projectors (e.g., Samsung Galaxy Beam [67]), smartphones may support projection-based AR with depth-sensing capabilities in the near future. Thus, we designed visualizations for such a projection-based AR smartphone to augment the stairs for PLV.
Visualization (and Sonification) Design
From an interaction perspective, we aimed to simulate use of a flashlight, which is commonly used by PLV in dark places [79]: when a user points the projection-based AR phone at the stairs, it recognizes several stairs in front of her and projects visualizations on those stairs in real time (Figure 1a). Inspired by the contrast stripes that many PLV used to distinguish stair edges [79], we project highlights on the stair edges to increase their visibility.
According to Zhao et al. [86], PLV had difficulty detecting
stairs and recognizing the stair edges, especially at a distance.
As a result, they walked slowly, stared down to better see the
current and next stair, and shuffled their feet to feel the stair
edges. We therefore designed our visualizations to help them
perceive the stairs from a greater distance, so they can better
plan and prepare their steps.
To alert users to the presence of stairs as they approach, we first generate auditory feedback to provide an overview of the stairs, including the stair direction and number of stairs. Zhao et al. [86] found that PLV sought this kind of information, which at times was difficult to perceive. We provide three different auditory feedback choices: (1) sonification that indicates stair direction: one “ding” sound for going up and two “ding” sounds for going down, adapted from the sonic alerts of some elevators; (2) a human voice that verbally reports stair direction and number of stairs: “Approaching upstairs, 14 stairs going up;” and (3) a combined sonification and human voice: “ding, approaching upstairs, 14 stairs going up.”
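As a concrete illustration of the three alternatives above, the mapping from detected stair information to auditory feedback can be sketched in a few lines. This is a minimal sketch, not the authors' implementation; the function name and mode labels are hypothetical.

```python
def stair_announcement(direction, num_stairs, mode):
    """Compose the auditory feedback for an approaching staircase.

    direction: "up" or "down"; mode: "sonification", "voice", or "combined".
    Returns (dings, spoken_text): dings is the number of "ding" sounds
    (one for going up, two for going down); spoken_text is the verbal
    message, or None when only sonification is used.
    """
    dings = 1 if direction == "up" else 2
    phrase = (f"Approaching {'upstairs' if direction == 'up' else 'downstairs'}, "
              f"{num_stairs} stairs going {direction}")
    if mode == "sonification":
        return dings, None        # ding(s) only
    if mode == "voice":
        return 0, phrase          # verbal message only
    return dings, phrase          # combined: ding(s), then the message
```

In the combined mode, the sound acts as an alert that precedes the more informative verbal message, matching the rationale participants gave for preferring it.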
Since locating the first and last stairs was the most important yet challenging task for PLV [86], we distinguish the first and last stairs from the rest by projecting thick highlights on them (Figure 2a), while projecting thin highlights on the middle stairs (Figure 3a). We call the highlights on the first and last stairs End Highlights, and those on the middle stairs Middle Highlights. We needed a visible color for these highlights that would not be confused with natural light, so we used yellow.
Beyond these highlights, we sought ways to further emphasize the first and last stairs so that a user will notice them and perceive their exact location from a distance. We designed five animations to achieve this:
(1) Flash: Since a flash can attract people’s attention [83, 84], we added this feature to the end highlights. The highlights appear and disappear with a frequency of 1 Hz.
(2) Flashing Edge: When the end highlight flashes, the user may lose track of the edge position when the highlight disappears. So in this design, we kept a stable line at the stair edge while flashing the rest of the highlighted strip (Figure 2b). The flash occurs at a frequency of 1 Hz.
(3) Moving Edge: Movement also attracts attention [51]. With a stable line at the stair edge, we added another line moving towards the edge to generate movement (Figure 2c).
(4) Moving Horizontal Zebra: Since movement can be distracting [84], we designed a more subtle movement effect with a yellow and black zebra pattern moving back and forth at a frequency of 1 Hz (Figure 2d).
(5) Moving Vertical Zebra: Moving the highlight over the edge of the stair may distort the perceived location of the edge, so we also designed a zebra pattern that is perpendicular to the edge (Figure 2e).
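All five effects are driven by the same 1 Hz cycle. A minimal sketch of how per-frame state could be computed follows; the function names, the 50% duty cycle, and the sweep distance are assumptions, since the paper specifies only the 1 Hz frequency.

```python
def flash_visible(t, freq=1.0, duty=0.5):
    """Square-wave visibility for the Flash effect: the highlight is
    shown for the first `duty` fraction of each cycle (t in seconds)."""
    return (t * freq) % 1.0 < duty

def flashing_edge(t):
    """Flashing Edge: a stable line stays at the stair edge while the
    rest of the thick highlight flashes at 1 Hz."""
    return {"edge_line": True, "strip": flash_visible(t)}

def moving_edge_offset(t, start_dist=0.3, freq=1.0):
    """Moving Edge: a second line sweeps toward the stair edge once per
    cycle; returns its current distance from the edge (same units as
    start_dist, e.g., meters)."""
    return start_dist * (1.0 - (t * freq) % 1.0)
```

Keeping the stable edge line outside the periodic term is what preserves the edge location while the rest of the highlight animates.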
Since a staircase typically has stairs of uniform size, the middle stairs usually do not require much of the user’s attention. We designed two middle highlights to support the user in a minimally obtrusive way.
(1) Dull Yellow Highlights: We reduced the lightness of the
original highlights on the middle stairs to 60% to make them
less obtrusive than the end highlights (Figure 3b).
(2) Blue Highlights: We set the middle highlights to blue
since it has a lower contrast with the stairs but still enhances
their visibility [87] (Figure 3c).
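The Dull Yellow variant is a straightforward lightness reduction. A sketch of that operation with Python's standard colorsys module; the 60% factor comes from the text, while representing colors as floats in [0, 1] is an assumption of this sketch.

```python
import colorsys

def dull(rgb, lightness_scale=0.6):
    """Scale a color's HLS lightness (here to 60%), as in the Dull
    Yellow Highlights; rgb components are floats in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l * lightness_scale, s)

BRIGHT_YELLOW = (1.0, 1.0, 0.0)
DULL_YELLOW = dull(BRIGHT_YELLOW)  # a darker yellow, same hue
```

Scaling lightness in HLS rather than scaling RGB directly keeps the hue and saturation of the highlight unchanged, so the dulled version still reads as yellow.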
To support a range of visual abilities, the design alternatives can be selected and combined by a user to optimize her experience for a particular environment.
Evaluation of Projection-Based AR Visualizations
We evaluated the visualizations for projection-based AR,
aiming to answer three questions: (1) How do PLV perceive
the different visualization designs? (2) How useful are the
visualizations for stair navigation? (3) How secure do people
feel when using our visualizations?
Participants. We recruited 12 PLV (6 female, 6 male; mean age=53.9) with different low-vision conditions, as shown in Table 1 (P1–P12). Eleven participants (all except P3) were registered as legally blind, meaning that either (1) their best-corrected visual acuity in their better eye was 20/200 or worse, or (2) their visual field was 20° or less. We conducted a phone screen to ensure participants were eligible.
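The two-pronged legal-blindness criterion used for screening can be written as a simple predicate. This is an illustrative sketch; the function and parameter names are hypothetical, and acuity is given as the denominator of the 20/x Snellen fraction for the better eye.

```python
def legally_blind(acuity_denominator, field_degrees):
    """Screening predicate: best-corrected acuity of 20/200 or worse in
    the better eye, OR a visual field of 20 degrees or less."""
    return acuity_denominator >= 200 or field_degrees <= 20
```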
Apparatus. The study was conducted at an emergency exit staircase with eight stairs. To minimize the confounding effect of computer vision accuracy, we prototyped our design with a Wizard of Oz protocol [65]. This involved mounting a stationary projector on a tripod at the top of the set of stairs. The projector was connected to a laptop that generated the visualizations. We created all visualizations with PowerPoint. A researcher sat in front of the laptop to control the visualizations manually, based on the participant’s position and orientation (facing upstairs or downstairs). To simulate the limited projection area of a handheld projector, we projected visualizations only on the three stairs in front of the participant (Figure 1a).
Figure 2: End highlights for first and last stairs. (a) Initial thick highlight with bright yellow; (b) Flashing Edge: the highlight switches between thick (b1) and thin (b2); (c) Moving Edge; (d) Moving Horizontal Zebra; (e) Moving Vertical Zebra.
Figure 3: Middle highlights: (a) Initial thin highlights with bright yellow; (b) Dull Yellow Highlights; (c) Blue Highlights.
We asked the participant to hold a regular phone with the back camera facing the stairs, assuming the projected visualizations were from the smartphone. We also implemented the auditory feedback on the smartphone. One researcher controlled the audio feedback with another smartphone via TCP.
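The remote audio trigger can be approximated with a few lines of socket code. This is an illustrative sketch only; the paper does not describe its protocol, and the one-command-per-connection design here is an assumption.

```python
import socket
import threading

def make_server(host="127.0.0.1", port=0):
    """Bind a listening socket; port 0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def audio_receiver(server_socket, handler):
    """Participant-phone side: accept one connection and hand the
    received command (e.g. "ding") to handler, which would play it."""
    conn, _ = server_socket.accept()
    with conn:
        handler(conn.recv(64).decode())
    server_socket.close()

def send_command(address, command):
    """Researcher side: trigger audio feedback on the other phone."""
    with socket.create_connection(address) as s:
        s.sendall(command.encode())
```

In a deployment the two endpoints would run on the two phones over Wi-Fi; here loopback stands in for the network.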
Procedure. The study consisted of a single session that lasted 1.5 hours. We started the session with an interview, asking each participant about their demographics, visual condition, and technology use when navigating stairs. A licensed optometrist conducted a confrontation visual field test and a visual acuity test using a Snellen chart (Table 1). After the interview, we walked the participant to the staircase and continued the study with a visualization experience session and a stair navigation session.
During the visualization experience, we gave the participant our prototype smartphone and explained how to use it. The participant experienced our design in three phases: (1) auditory feedback when approaching the stairs, with three alternatives: sound, human voice, and the combination of them; (2) end highlights on the first and last stairs with six design alternatives (Figure 2); and (3) middle highlights on the middle stairs with three design alternatives (Figure 3).
In each phase, we presented all design options to the participant and asked about their experiences, including whether or not they liked the design, whether the design distracted them from seeing the environment, and how they wanted to improve it. For each design option, participants were encouraged to walk up and down the stairs. To avoid order effects, we randomized the order of the design alternatives.
After the participant experienced all design alternatives in all
three phases, we asked them to select one alternative from
each phase to create a preferred combination. Participants
used this combination for the stair navigation portion.
During the stair navigation portion of the study, participants conducted two stair navigation tasks: walking upstairs and walking downstairs. They conducted each task in two conditions: (1) walking in their original way (participants could use a cane if desired, but nobody chose to use it); (2) walking using our prototype with their preferred combinations. They repeated each task in each condition five times.
We indicated the start points with yellow stickers on the landings, three feet away from the top and bottom stairs. For each task, participants stood at the starting point and started the walking task when the researcher said, “Start.” The task ended when both their feet first touched the landing. Participants were asked to walk as quickly and safely as possible. We recorded the time for each task.
To reduce order effects, we used a simultaneous within-subjects design, switching the task condition after each walking up and down task. We counterbalanced the starting task (up/down) and condition (with/without the prototype).
We ended the study with an exit interview, asking about the
participant’s general experience with the prototype. They
also gave Likert-scale scores for the usefulness and comfort
level of the prototype, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
Analysis. We analyzed the effect of our visualizations on participants’ walking time when navigating stairs. Our experiment had one within-subject factor, Condition (Visualizations, No Visualizations), and one measure, Time. We defined a Trial (1–5) as one walking task. To validate counterbalancing, we added another between-subject factor, Order (two levels: With–Without, Without–With), into our model. An ANOVA found no significant effect of Order on walking time (downstairs: F(1,10)=0.108, p=0.749; upstairs: F(1,10)=0.007, p=0.937) for α = 0.05.

[Table 1 listed, per participant, the diagnosis (e.g., retinopathy of prematurity, glaucoma, retinitis pigmentosa, diabetic retinopathy), visual acuity (left eye, right eye, both eyes), and visual field (left eye, right eye); the cell-level values were not recoverable from this copy.]
Table 1. Participant demographic information. Participants labeled with superscript ‘1’ were in the study for projection-based AR, while those labeled with superscript ‘2’ were in the study for smartglasses.
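The Order check above is a one-way ANOVA with two groups. A self-contained sketch of the F computation follows; any per-participant times used with it would be hypothetical, not the study data.

```python
def one_way_anova(groups):
    """Return (F, (df_between, df_within)) for a one-way ANOVA, as used
    here to verify that counterbalanced Order had no effect on time."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)
```

With 12 participants split into two Order groups of six, the degrees of freedom come out as (1, 10), matching the reported F(1,10) values.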
We analyzed the participants’ qualitative feedback by coding
the interview transcripts based on grounded theory [66].
Effectiveness of the Visualizations (and Sonification). All participants felt our design was helpful and “[would make] life easier” (P4), especially in relatively dark environments, such as subway stations. They liked the idea of projecting highlights on the stair edges to simulate the physical contrast stripes. P9 said, “Having [the highlights] this bright is really good. Because usually [the contrast stripes] are painted, and they’re about to fade out, and they’re not as vibrant and bright as this is. This is great here because you can see it.”
Participants gave high scores to the usefulness and comfort
level of the visualizations, as shown in Figure 4.
Next, we report participants’ responses on all design alterna-
tives in the three design phases.
(1) Auditory feedback when approaching stairs. Four participants chose the human voice since they felt it was friendlier and more informative, reporting the number of stairs. Meanwhile, three participants (P2, P8, P7) chose the nonverbal sound because they had relatively good vision and felt the human voice was unnecessary. The other participants preferred the combination, feeling that the sound and human voice complemented each other: the “ding” sound was an alert in noisy environments and the human voice reported more concrete information.
(2) End highlights. All participants felt that the end highlights were an important aspect of the design. “This is the part where I probably trip the most, on that last step. The light [end highlights] is really important because it defines the end of the step, so you’re not gonna miss a step” (P5).
Although we provided different visualizations (flash or movement) to further enhance the end highlights, most participants (seven out of 12) liked the original design. They felt the thickness and brightness of the highlights sufficiently attracted their attention, while flashes and movements distracted them. As P7 explained: “I guess because I don’t see details, when I see things moving, I kind of get the sense of not seeing it correctly. I prefer just still… You’ve got the thick [end highlights] to distinguish from the thin [middle highlights]. This is nice.”
Three participants (P6, P4, P11) felt the flash effect grabbed
their attention more and alerted them. P6 and P4 preferred
the Flashing Edge since it helped them better track the stair
edges than the Flash. However, P11 preferred the Flash since
the thin stable highlight of the Flashing Edge gave him an
illusion of “another small step” (P11).
Two participants (P2, P3) liked the Moving Vertical Zebra the most. They felt that the movement attracted their attention and the vertical zebra pattern also labeled the stair edges. However, none of the participants liked the Moving Horizontal Zebra since the parallel movement to the stair edge distorted its appearance.
Although no participants chose the Moving Edge in the study, P6 felt it could be helpful since it indicated direction. She explained that “at least it shows you where to go.” However, most participants found it overwhelming; it made them feel like “the ground is going to move” (P9).
(3) Middle highlights. Eleven out of 12 participants found the
middle highlights useful. Projecting highlights onto the next
several steps gave participants a preview of the stairs and
helped them better prepare their steps, especially when there
were abnormal stairs. As P5 said,
“So you don’t have to guess what’s coming [with the middle highlights]. Sometimes you can have a broken step, you can have no step, or you can have a step that was not installed properly. Sometimes staircases were defective and the distance between some of them is not even… With the [highlights], you can see the definition of the steps.” (P5)
Even on a typical set of stairs, participants wanted the middle
highlights to confirm that they are still on the stairs, which
made them feel safe. “It’s better with [the] lines. So I know
that this won’t be my final step” (P10).
In terms of color, most participants preferred the bright yel-
low (seven out of 12), wanting to be alert on each step. “The
yellow gives me more alert and the blue gives me a little bit
more of a relaxed mode. But when I go up and down the
steps, I wanna be alert” (P5).
Figure 4: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7)
for usefulness, comfort level, and psychological security when using visualizations on projection-based AR. We label the
mean and SD under each category.
Meanwhile, four participants felt that the middle highlights
should be a different color from the end highlights. Three
participants liked the blue color since “it’s not as attracting
as yellow but still sticks out” (P9). No one liked the dull yel-
low since it was too subtle. One participant wanted red.
P6 was the only one who did not want the middle highlights.
She felt it was unnecessary since she could walk on stairs
knowing the position of the first stair and the number of stairs
(she counted stairs). The middle highlights distracted her
from seeing her surroundings.
Walking Time. Our visualizations reduced the time participants spent navigating the stairs. For descending stairs, participants’ navigation time was 6.42% shorter when using their preferred visualizations (mean=6.17s, SD=1.93s) than when not using them (mean=6.59s, SD=2.03s). With a paired t-test, we found a trend toward significance for the effect of Condition on the time walking downstairs (t11=-2.131, p=0.0565), with an effect size of 0.615 (Cohen’s d). P11 remarked on the increase in her speed: “This is the fastest I’ve used stairs ever! You don’t understand, this is like I’m back to being me!”
For ascending stairs, participants’ navigation time was 5.78% shorter when using their preferred visualizations (mean=5.84s, SD=1.59s) than when not using them (mean=6.20s, SD=1.81s). With a paired t-test, we also found a trend toward a significant effect of Condition on the time walking upstairs (t11=1.9894, p=0.0721).
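The paired comparison above can be reproduced with a few lines of code. The sketch below is illustrative, not the authors' analysis script: the function name and the sample data in it are invented, and it simply computes the paired t statistic and Cohen's d on the per-participant differences.

```python
import math

def paired_t_and_cohens_d(with_vis, without_vis):
    """Paired t statistic and Cohen's d for matched walking times.

    Both arguments are equal-length lists with one entry per participant
    (e.g., mean time with vs. without the visualizations).
    """
    n = len(with_vis)
    assert n == len(without_vis) and n > 1
    diffs = [a - b for a, b in zip(with_vis, without_vis)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    sd_d = math.sqrt(var_d)
    t = mean_d / (sd_d / math.sqrt(n))  # compare against t with n - 1 df
    cohens_d = mean_d / sd_d            # effect size on the paired differences
    return t, cohens_d
```

Under this convention d = t/√n, so the reported t(11) = -2.131 with n = 12 gives |d| = 2.131/√12 ≈ 0.615, consistent with the effect size above.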
Behavior Change. Based on our observations of the walking
tasks, some participants (e.g., P9, P4) looked down less when
using our design since they could use their lower peripheral
vision to notice the highlights. As P9 mentioned, “I know mentally I’m looking in the bottom field of vision, even though I’m looking straight ahead… The [highlight] stands out very bright and my peripheral catches it, it catches blue, it catches the yellow… Without the system, I have to stare a lot more at the stairs, and I have to look a little bit extra to make sure that that is really the last step.”
Some participants (e.g., P6, P3, P11) hesitated at the first and
last stairs and felt the stairs with their feet when walking
without our visualizations (especially in the first two trials of
the walking tasks). When using our visualizations, they
stopped feeling the stairs with their feet. Some participants
(e.g., P7, P11) walked without holding the railing when using
our visualizations. P10 also changed how he balanced his
body when using our prototype: without our design, he
walked down leaning his left shoulder forward instead of fac-
ing forward. He explained:
“I noticed when [I walked] without the [highlights], I’m walking more down on my side when descending the stairs. In case if I fall, then I fall at least more on my side as opposed to falling forward. With the [highlights] on, I was walking more straight down. I feel a lot more confident” (P10).
Psychological Security. Our visualizations improved participants’ psychological security when walking on stairs. Participants all gave high scores to their psychological security
when using our prototype (mean=6.6, SD=0.67), as shown in
Figure 4. They all felt more confident and safer when navi-
gating stairs with the projected visualizations. P6 and P8 also
said that the visualizations reduced their visual effort, so that
they could look at the surroundings (e.g., other people and
obstacles on the stairs), which also helped them feel safe.
Social Acceptance. Most participants were not concerned
about projecting highlights on stairs. They felt this technology was “cool” and could even be beneficial for people who
are sighted, for example, in dark environments. P11 regarded
the prototype as an identity tool (similar to the identity
cane), which could indicate her disability to others, so that
other people won’t bump into her on stairs. Only P6 and P9
were concerned that this technology might “scare others” and
draw too much attention to themselves. They preferred de-
vices, such as smartglasses, that would show the visualiza-
tions only to them.
The second platform we explored was optical see-through
smartglasses. They present information only to the user and
do not need to project onto a physical surface [88]. Today,
this platform is more readily available than projection-based
AR. Beyond smartglasses prototypes developed by research-
ers [9, 42, 43], many early versions of products, such as Mi-
crosoft HoloLens [55] and Magic Leap One [49], mark a
trend towards mainstream smartglasses devices.
However, current optical see-through smartglasses have a
very limited FOV [88] (e.g., ca. 30° wide × 17° high for Ho-
loLens v1), largely limiting the area for presenting AR visu-
alizations. While the recently announced HoloLens v2 is es-
timated to have a 29° vertical FOV, it is still much smaller
than that of a typically-sighted human (120° vertical FOV).
With the limited vertical FOV, the highlight design on pro-
jection-based AR would not work well for the smartglasses.
To see the highlight on the current stair (Figure 5a), a user
would have to look nearly straight down to her feet (Figure
5b), hindering her ability to see her surroundings. This can
be potentially dangerous and is physically strenuous. As
such, our visualizations aim to facilitate a comfortable head
pose by indicating the user’s exact location on the stairs with-
out augmenting the stairs directly.
Figure 5. (a) The visual effect of adding highlights to stairs
with HoloLens. (b) A user stares down to see the highlights.
Visualization (and Sonification) Design
Similar to projection-based AR, when the user stands on the landing, our system verbally notifies the user of the existence of the stairs, along with the stair direction and the number of stairs.
According to Zhao et al.’s study, knowing when the stairs
start and end can help PLV plan their steps, while the middle
stairs are less important because most stairs are uniform [86].
Thus, to better inform the user of their position on the stairs,
we distinguish a user’s position on a set of stairs based on
how close she is to a change in her step pattern. This change
can involve stepping down for the first time after walking on
a flat surface or stepping on a flat surface after stepping down
repeatedly. We provide feedback to indicate that a change is
approaching, and then that the change is about to occur.
Specifically, the following are the seven stages we used in
our design, described for descending stairs as an example
(Figure 6): (1) Upper landing: the flat surface that is more
than 3' away from the edge of the top stair; (2) Upper prepa-
ration area: 1.5'–3' away from the top stair edge where the
person should prepare to step down; (3) Upper alert area:
within 1.5' from the top stair edge where the person’s next
step would be stepping down; (4) Middle stairs: between the
edge of the top stair and the edge of the second-to-last stair,
where the person is stepping down repeatedly; (5) Lower
preparation area: the last stair, where the person is one step
away from the flat surface and should prepare for the immi-
nent flat surface; (6) Lower alert area: within 1.5' from the
last stair edge on the landing where the person’s next step is
on the flat surface (not stepping down); (7) Lower landing:
more than 1.5' away from the last stair edge, where the person is walking on the flat surface again. Our visualizations inform PLV of the
different stair stages via different design. We design two vis-
ualizations and one sonification.
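The seven stages above reduce to a simple classification over the user's distance to the two surface-level changes. The following is a minimal sketch under stated assumptions: the `Stage` enum, the `classify` helper, and its position encoding are ours for illustration, while the stage names and the 3' and 1.5' thresholds follow the definitions in the text.

```python
from enum import Enum

class Stage(Enum):
    UPPER_LANDING = 1
    UPPER_PREPARATION = 2
    UPPER_ALERT = 3
    MIDDLE_STAIRS = 4
    LOWER_PREPARATION = 5
    LOWER_ALERT = 6
    LOWER_LANDING = 7

def classify(position, n_stairs):
    """Map a 1-D position along the route to one of the seven stages.

    `position` is a tuple:
      ("upper", d)  - on the upper landing, d feet before the top stair edge
      ("stair", i)  - standing on stair i (1 = top stair, n_stairs = last stair)
      ("lower", d)  - on the lower landing, d feet past the last stair edge
    """
    kind, value = position
    if kind == "upper":
        if value > 3.0:
            return Stage.UPPER_LANDING
        if value > 1.5:
            return Stage.UPPER_PREPARATION
        return Stage.UPPER_ALERT
    if kind == "stair":
        # The last stair is the lower preparation area; all stairs
        # before it count as middle stairs.
        return Stage.LOWER_PREPARATION if value == n_stairs else Stage.MIDDLE_STAIRS
    # kind == "lower": within 1.5' of the last edge is the alert area.
    return Stage.LOWER_ALERT if value <= 1.5 else Stage.LOWER_LANDING
```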
(1) Glow visualization (Figure 7a–d): We generate a glow
effect at the bottom of the display to simulate the experience
of seeing the edge highlights on the stairs with peripheral vi-
sion. Unlike the highlights that are attached to the stair edges,
the glow is always at the bottom of the vertical FOV, so that
the user can hold their head at a comfortable angle and does
not need to look down to see the glow. We adjust the glow
color and size to inform the user of their current stage on the stairs:
Landing stages: thin red glow to indicate the flat surface.
Preparation stages: thick cyan glow, telling users to pre-
pare for the first surface level change or the end of surface
level changes.
Alert stages: thick yellow glow, indicating that the next
step is the first surface level change or the end of surface
level changes.
Middle stairs: thin blue glow to indicate the middle stairs.
(2) Path visualization (Figure 7e–g): Inspired by the railings,
which PLV used as a visual cue to see where the stairs start
and end [86], we designed this visualization to show the trend
of the stairs. The direction of the Path follows the stairs: it
goes straight forward along the landing, turns down (or up)
along the slope of the descending (or ascending) stairs, and
goes straight forward again when arriving at the landing. The
Path is generated at the user’s eye level with a fixed distance
from one side of the head (we adjusted its specific position
based on the user’s visual field and preference), making sure
that they can see it without looking too far down. The user
can thus observe the start and end of the stairs by looking at
the turning points of the Path. To better distinguish the land-
ing and the stairs, we colored the straight part of the visuali-
zation (over the landing) yellow and the slope blue. We
added virtual pillars to connect the Path to each stair to help
users associate the visualization with the physical stairs.
(3) Beep sonification: This sonification informs users of their
current position on the stairs. Similar to Glow, we adjust the sound based on the different stages of the stairs:
Start landing stage: no sound.
Preparation stages: low-frequency beep, indicating users
should prepare for the first surface level change or the end
of surface level changes.
Alert stages: high-frequency beep, indicating that the next
step is the first surface level change or the end of surface
level changes.
Middle stairs: no sound.
End landing stage: audio description that verbally reports
“Stair ends.”
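Taken together, Glow and Beep amount to a per-stage lookup. The table below restates the two designs in code form; the `FEEDBACK` table, its field names, and the `feedback_for` helper are illustrative (not the system's actual code), while the thicknesses, colors, and sounds follow the descriptions above.

```python
# Per-stage feedback: (glow thickness, glow color, sound).
# `None` means no beep; the lower landing triggers the spoken
# "Stair ends" report instead of a beep.
FEEDBACK = {
    "upper_landing":     ("thin",  "red",    None),
    "upper_preparation": ("thick", "cyan",   "low_beep"),
    "upper_alert":       ("thick", "yellow", "high_beep"),
    "middle_stairs":     ("thin",  "blue",   None),
    "lower_preparation": ("thick", "cyan",   "low_beep"),
    "lower_alert":       ("thick", "yellow", "high_beep"),
    "lower_landing":     ("thin",  "red",    "say_stair_ends"),
}

def feedback_for(stage):
    """Look up the glow style and sound for the current stair stage."""
    return FEEDBACK[stage]
```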
Evaluation of Smartglasses Visualizations
We conducted a user study to evaluate the visualizations we
designed for commercial smartglasses. We aim to answer:
Figure 6: The seven stages of the stairs.
Figure 7: Glow (ad) and Path (eg). Glow: (a) thin red glow on the landing; (b) thick cyan glow in the preparation area; (c) thick
yellow glow in the alert area; (d) thin blue glow on the middle of the stairs. Path: (e) view of the Path on the landing; (f) view of the
Path when getting close to the first stair; (g) view of the Path on the middle of the stairs.
(1) How do PLV perceive the visualizations on smartglasses?
(2) How effective are the visualizations for stair navigation?
(3) How secure do PLV feel when using our visualizations?
Participants. We recruited 12 PLV (5 female, 7 male; mean
age=51.6) with different low vision conditions (Table 1, P6–P17). All participants were legally blind. Seven participants
had taken part in the evaluation of our projection-based AR
visualizations, but they did not see the stairs used in this
study. We followed the same recruitment procedures as in
the previous study.
Apparatus. We built our prototype on Microsoft HoloLens
v1. We chose HoloLens because of its FOV (~34° diagonal),
binocular displays, and ability to be worn with eyeglasses.
Many lightweight smartglasses have only one display in
front of the right eye (e.g., Google Glass, North Focals), and
are unusable for PLV with vision only in the left eye. Other
options either have a smaller FOV (e.g., Epson Moverio BT-
300, 23° diagonal) or cannot be used with eyeglasses (e.g.,
Magic Leap One).
To minimize the confounding effect of general computer vi-
sion accuracy, we marked the position of the stairs with two
Vuforia image targets [37] (on the side walls at the top and
bottom landing of the stairs) that can be recognized by Ho-
loLens. This provided an anchor in the environment, which
enabled our application to determine the position of the user
on the stairs by tracking the motion of the HoloLens, improv-
ing the accuracy of our visualizations and sonification.
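One way to turn such an anchor into a position estimate is to project the tracked headset position onto the axis between the two image targets. The sketch below shows that geometry only; the function name and coordinate conventions are assumptions, not the implementation used in the study.

```python
def fraction_along_stairs(top_anchor, bottom_anchor, head_pos):
    """Project the headset position onto the top->bottom anchor axis.

    Each argument is an (x, y, z) point in the headset's world frame
    (the anchors would come from the two image targets). Returns 0.0
    at the top-landing anchor and 1.0 at the bottom-landing anchor;
    values outside [0, 1] mean the user is beyond an anchor. Stage
    logic can then be driven from this fraction.
    """
    axis = tuple(b - t for t, b in zip(top_anchor, bottom_anchor))
    rel = tuple(h - t for t, h in zip(top_anchor, head_pos))
    dot = sum(a * r for a, r in zip(axis, rel))
    norm2 = sum(a * a for a in axis)  # squared length of the axis
    return dot / norm2
```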
Procedure. The study consisted of a single session that lasted
about 1.5–2 hours. An initial interview asked about de-
mographics, visual condition, and use of tools when navi-
gating stairs. Next, a licensed optometrist on the team con-
ducted a confrontation visual field test and a visual acuity
test using a Snellen chart (Table 1). We then gave the Ho-
loLens to the participant and explained how to use it. After
the participant put on the HoloLens, the optometrist tested
her visual field and visual acuity again to measure the effect
of the HoloLens on the participant’s visual ability. We con-
tinued the study with a design exploration session and a stair
navigation session.
We conducted the design exploration session at an emer-
gency staircase with 12 stairs (different stairs than those in
the projection study). Participants wore the HoloLens and
experienced four different designs: Glow, Path, Beep, and
Edge Highlights as a baseline. Participants were allowed to
walk up and down the stairs to experience the design in-situ.
They thought aloud, talking about whether or not they liked
the design, whether the design distracted them, and how they
wanted to improve it. We counterbalanced by randomizing
the presentation order of the four designs. After the partici-
pant experienced all the design alternatives, we asked for
their preferred combination.
The stair navigation session was conducted at another stair-
case with 14 stairs, a wider set of access stairs in a more
brightly lit and open environment. Participants performed
two stair navigation tasks: walking upstairs and walking
downstairs. They conducted each task in three conditions: (1)
walking on the stairs as they typically would (they could use
a cane if desired, but none chose to use it), (2) walking on the
stairs with HoloLens and no visualizations, and (3) walking
on the stairs with HoloLens and their chosen designs. Each
task in each condition was repeated five times.
We indicated the start and end points on the stairs with stick-
ers that were three feet away from the top and bottom steps
on the landings. For each task, the participant stood at the
starting point and started when the researcher said, “Start.”
The task ended when both her feet first arrived at the landing.
Participants were asked to walk as quickly and safely as pos-
sible during the task. We recorded the time for each task.
To reduce the effect of order on the results, we used a simul-
taneous within-subjects design by switching the task condi-
tion after each round of walking up and down. We also coun-
terbalanced the starting task (up/down) and the conditions.
The study ended with a final interview asking about the par-
ticipant’s general experience with the prototype. We asked
them to score the usefulness and comfort level of the proto-
type on a Likert scale, as well as their psychological security
when using the prototype, ranging from 1 (strongly negative)
to 7 (strongly positive).
Analysis. We analyzed the effect of our visualizations on
participants’ walking time when navigating stairs. Our ex-
periment had one within-subject factor, Condition (No Ho-
loLens; HoloLens w/o visualizations; Visualizations), and
one measure, Time. We defined a Trial (1–5) as one walking
task. We determined Time from the video we recorded dur-
ing the study. When analyzing data, we removed the first trial,
treating it as a practice trial for participants to get used to the task.
To validate counterbalancing, we added another between-
subject factor, Order (six levels based on the three condi-
tions), into our model. An ANOVA found no significant ef-
fect of Order on walking time (downstairs: F(5,6)=0.35,
p=0.338; upstairs: F(5,6)=0.445, p=0.804) and no significant
effect of the interaction between Order and Condition on
walking time (downstairs: F(10,12)=1.418, p=0.280, upstairs:
F(10, 12)=0.535, p=0.835).
We analyzed participants’ qualitative responses with the
same method we used in the previous study.
Experience with the Smartglasses. We first report the effect
of the HoloLens on participants’ visual abilities. Some par-
ticipants appreciated the tinted optics because they blocked
environmental glare. Three participants’ visual acuity im-
proved when wearing the HoloLens (P14: from 20/140 to
20/100, P7: from 20/400 to 20/200, P15: from 20/200 to
20/140). However, P12 experienced a decrease in visual acu-
ity (from 20/200 to 20/400). It is possible that the tint of the
HoloLens made the environment too dark for him to see. In
terms of visual field, no participants experienced a change
while wearing the HoloLens. All participants mentioned the
heaviness of the hardware, which potentially impacted their
experience negatively.
Effectiveness of the visualizations (and sonification). We
report participants’ feedback on each design alternative.
(1) Edge Highlights (Baseline). Most participants found it
difficult to use the Edge Highlights because of the limited
vertical FOV. Participants had to angle their head down a lot
to see the highlight on the current stair. They found it uncom-
fortable and unsafe to maintain that posture on the stairs, es-
pecially when walking down. P9 reported that, “To continue
seeing everything, my head has to be completely [down], my
chin is touching my chest.”
Nevertheless, some participants (e.g., P6, P10, P13) felt this
design was helpful because it provided a preview for future
steps, especially when they looked downstairs from the top
landing. Interestingly, P10 mentioned that he could combine
his own vision (that is not covered by the HoloLens) with the
Edge Highlights. He didn’t feel the need to look down all the
time because he has good peripheral vision to see the stairs,
and he could use the Edge Highlights on the HoloLens to
prepare for future steps and verify the last step.
(2) Glow. Most participants found Glow helpful and easy to
understand. They felt the different colors can effectively in-
form them of their stage on the stairs, and the thicker and
brighter glow colors at the preparation and alert area success-
fully attracted their attention. Moreover, participants enjoyed
the freedom to move their head in any direction while still
being able to see Glow. This enabled them to better explore
their surroundings and still be visually alerted about the stairs
without looking down. P9 described his experience:
“This one is my kind of style. It’s subtle, simple, and I can keep my head wherever I want at the same time. And [the color of the Glow] changes exactly when I need to step. It warns me when I’m about to take my last step… It’s very discreet but not distracting. So I’ll still be able to see people,
and things around me without falling over steps. If my real
glasses could do this, it would be good.”
However, two participants (P6, P14) had difficulty using Glow because they had trouble distinguishing its colors. P14 doesn’t have color vision, while P6’s visual condition included auras of various colors that interfered with the colors of Glow.
Moreover, some participants (e.g., P10, P12, P17) mentioned
that the blue glow on the middle stairs was difficult to notice,
especially in the bright environment for the walking tasks.
Not seeing the glow on the middle stairs distracted the par-
ticipants and made them feel uncertain about the stairs. As
P10 mentioned, “I want more information while I’m going down the stairs. The yellow color was helpful to let me know that I’m at the last step… but I didn’t really see that [blue glow in the middle]. I need to be reassured that I’m still going down the stairs.” P17 slowed down as she struggled to see
the blue glow when completing the walking tasks.
(3) Path. Half of the participants indicated that Path could be
helpful. They mentioned that Path gave them a clear over-
view of the stair trends, specifically where the stairs start and
end. P13 described his impression, “This is perfect because
if I’m coming to the stairs, looking at the stairs and I won’t
have to look down, I immediately know where [the stair] be-
gins and where it ends, as soon as my head turns to the [Path].”
P8 also felt Path could guide him along the stairs: “It’s like a
reinforced railing but it’s also like a guide [showing] where
I’m stepping. It’s like a good reference. I kinda like to have
the guide.” Moreover, three participants (P6, P9, P13)
interpreted Path as a reminder to look for the physical railing.
Interestingly, we found that participants had different prefer-
ences for Path’s position in their visual field. Many (e.g., P12,
P16) adjusted Path to a position where their vision was best.
Meanwhile, others adjusted it to a position that they felt was
the most intuitive to comprehend. For example, P9 and P15
adjusted Path so that it was in the center of their vision and
that they could use it in a similar fashion to a GPS guide. P14
moved Path lower so that he could more easily associate the virtual Path with the real staircase. As he said, “[Path] would be my favorite if we were able to get it to [get close] to the stairs instead of hanging up in the middle of everything.”
However, half of the participants felt Path was distracting
and hard to understand. P6 even felt it was misleading to
have a virtual railing (Path) in a different place than the real
railing because it changed her perception of the width of the
staircase: “It suggests that there is a railing and then I feel I
have a very narrow staircase” (P6).
(4) Beep. All participants except for P17 felt Beep was help-
ful. P6 thought it could reduce cognitive load and enable her
to see the surroundings. As she said, “It’s really interesting.
The more often I use it, the more I like the [Beep]… I don’t
have to watch out for visual [information] of the stairs. With
the audio, I just look at the [surrounding] or look at people in
front of me and I don’t have to worry about [the stairs]. That’s actually easier.” P14 also felt Beep could be a good complement when the visualizations were not visible in bright environments.
On the other hand, P17 felt that Beep may not be distinguish-
able from environmental sounds: “The world around you is
so full of noise. I mean, if I use this in the city… you have
cars honking and everything like that, I’m not sure if I would react in time.” P8 and P14 voiced the same concern about
environmental noise but explained that along with the visu-
alizations the sound would be recognizable.
Figure 8: Distribution of participants’ preferences for visualizations and sonification on HoloLens.
Preferences for visualizations (and sonification). Partici-
pants combined different visualizations and sonification
based on their preferences, as shown in Figure 8.
We found that most participants (10 out of 12) combined a
visualization with a sonification (Beep). While they all men-
tioned that visualizations were more effective than audio
feedback and used the visualization as a primary guide, par-
ticipants also appreciated the beep and used it as a secondary
complement to the visualizations. As P12 said, “Actually I
liked [Glow] more with the audio [Beep]. They augment
each other. I found it to be more useful together than sepa-
rate.” Only two participants did not combine the visualiza-
tion with the sonification: P7 used audio alone, and P17 used
Glow alone.
The most commonly chosen visualization was Glow, which
was preferred by eight participants. One participant (P14)
chose Path, while two participants (P6 and P10) chose Edge
Highlights. P13 combined all four designs because he used
each design for different purposes: Path as a reminder to look
for a railing, Edge Highlights to get an overview of the stairs,
and Glow when walking on stairs and scanning the environ-
ment for people or obstacles.
In general, participants felt that our prototype was helpful,
especially in unfamiliar places. They gave high scores
(mean=5.8, SD=1.65) for the usefulness of their preferred
visualizations and sonification. They also felt the visualiza-
tions were comfortable to see (mean=5.6, SD=1.73), as
shown in Figure 9.
Walking Time. In the walking tasks, the HoloLens itself had
a big impact on participants’ walking time when navigating
descending stairs. With ANOVA, we found that participants’
walking time significantly increased when they walked
downstairs wearing the HoloLens, whether using our visualizations or not (F(2,12)=8.783, p=0.0045). However, when
walking upstairs, there was no significant effect of Condition
on participants’ walking time (F(2,10)=2.924, p=0.092). Since
navigating descending stairs is more challenging, wearing a
new device can more easily affect people’s walking speed.
With the condition of wearing the HoloLens without visualizations as the baseline, we analyzed the effect of our visualizations on PLV’s walking time. We found no significant effect of Condition (HoloLens with visualizations vs. HoloLens without visualizations) on participants’ walking time for either ascending (F(1,10)=0.466, p=0.511) or descending stairs (F(1,10)=0.114, p=0.742). Four participants
(P6, P8, P12, P17) slowed down a little on ascending stairs
with the visualizations, while five participants (P6, P13, P12,
P16, P17) slowed down on descending stairs with their pre-
ferred visualizations. Except for P17, who slowed down a lot
when walking downstairs with our visualizations, all other
participants’ times increased by less than 1 second. We investigated and found that P17 struggled to see the blue glow on the middle stairs in the bright environment, which slowed her down during the walking tasks.
Psychological Security. While there is no significant im-
provement in walking speed when using the visualizations,
participants reported feeling safer and more confident when
using our design. P11 described her experience when using
our prototype, “I love the fact that the [visualizations] are there. Once you understand what they mean, you can actually move more confidently… I would be very safe instead of falling down and kicking things.”
Participants gave scores to their psychological security dur-
ing stair navigation in three conditions (Figure 9): (1) walk-
ing as they typically would (mean=4.8, SD=1.60); (2) with
HoloLens but no visualizations (mean=3.9, SD=1.44); (3)
with preferred visualizations or sonification (mean=6.1,
SD=1.38). Paired Wilcoxon Signed-Rank tests showed that,
while wearing HoloLens significantly reduced participants’
psychological security (V=8, p=0.031), our visualizations
significantly increased participants’ psychological security
compared with not wearing HoloLens (V=21, p=0.050).
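For reference, the V statistic reported here is, under a common convention (the one used by R's wilcox.test), the sum of the ranks of the positive paired differences. The pure-Python sketch below is illustrative only; the function name is ours, and a real analysis would obtain the p-value from the signed-rank null distribution (e.g., via scipy.stats.wilcoxon).

```python
def wilcoxon_v(before, after):
    """Wilcoxon signed-rank statistic V: the sum of the ranks of the
    positive paired differences (after - before), dropping zero
    differences and averaging tied ranks.
    """
    diffs = [a - b for a, b in zip(after, before) if a != b]
    ordered = sorted(diffs, key=abs)  # rank by absolute difference
    v = 0.0
    i = 0
    while i < len(ordered):
        # Find the run of ties sharing this absolute difference.
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        v += sum(avg_rank for k in range(i, j) if ordered[k] > 0)
        i = j
    return v
```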
Behavior Change. Our design changed people’s behaviors
when walking on stairs. Two participants (P8, P15) walked
without holding the railing when using their preferred visu-
alizations. Moreover, we tracked participants’ head orienta-
tion with HoloLens during the walking tasks, and found that
some participants’ (e.g., P6, P9) head orientation changed
when using our visualizations. For example, Figure 10 shows
the head forward angle of P9 on each stair stage when walk-
ing downstairs with and without the visualizations. We found
Figure 9: Diverging bars that demonstrate the distribution of participant scores (strongly negative 1 to strongly positive 7) for the
usefulness and comfort level of the visualizations, and their psychological security in three conditions: without HoloLens, with Ho-
loLens but no visualizations, and with visualizations. We label the mean and SD under each category.
that he looked much farther down at the stairs when not using our visualizations, especially at the beginning and the end of the stairs (e.g., preparation area, alert area).
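The head-forward angle plotted in Figure 10 can be derived from the headset's gaze direction vector. A minimal sketch, assuming a Unity-style coordinate frame with y pointing up; the function name and sign convention (positive up, negative down, matching the figure) are ours.

```python
import math

def pitch_degrees(gaze):
    """Angle between a gaze direction vector and the horizontal plane.

    `gaze` is an (x, y, z) direction with y up. Returns a positive
    angle when looking up and a negative angle when looking down.
    """
    x, y, z = gaze
    horiz = math.hypot(x, z)  # length of the horizontal component
    return math.degrees(math.atan2(y, horiz))
```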
Our research is the first to explore AR visualizations for peo-
ple with low vision in the context of stair navigation. Our
studies demonstrate the effectiveness of our designs with
both projection-based AR and smartglasses. We found that
our visualizations on both platforms largely increased peo-
ple’s psychological security, making them feel confident and
safe when walking on stairs. Moreover, the visualizations on
projection-based AR showed a trend towards significantly
reducing PLV’s walking time on stairs.
Participants shared some common preferences for the visualizations on each platform. For projection-based AR, the stable thick yellow highlights on the first and last stairs were the most preferred (7/12). For highlights on middle stairs, most participants (7/12) preferred the most visible yellow highlights instead of blue or dull yellow ones. For HoloLens, most participants (6/12) chose the combination of Glow and Beep. Unlike prior research, which showed that PLV had very different preferences for visual augmentations [84, 85], our study revealed some common preferences among PLV across different visual abilities for stair navigation. This can potentially set a foundation for future visualization designs for stair navigation and more general navigation systems.
We compared users’ experiences with the visualizations on
both platforms given that seven participated in both studies.
Most PLV (e.g., P10, P12) felt that the visualizations on pro-
jection-based AR were easier to use than those on the smart-
glasses. The highlights on projection AR were intuitive to
perceive because they directly enhance the stair edges that
participants were looking for. Meanwhile, the design on
smartglasses, especially Glow and Beep, introduced a new
way to perceive stairs: it divided the stairs into different
stages, providing only immediate information about the cur-
rent stair without a preview of what’s to come. This new stair
perception method increased participants’ cognitive load, be-
cause they had to associate the design with the physical
stairs, making them more cautious. This could be one major
reason why PLV’s walking time did not improve when using
smartglasses. P12 compared his experiences with the two
platforms, “The first experience [projection-based AR] gave
me a better sense of a direction as to where this was going… But the [glow] was like floating over the steps, and they didn’t stay fixed in place. That was one big difference. I like
the light fixed on the step.”
While our study focused on the design and evaluation of the
AR visualizations, we discuss the technical feasibility and
challenges for our AR stair navigation systems. The imple-
mentation of such a system could be challenging. For such a
dangerous task as stair navigation, the navigation system
should be highly accurate and fast since a small error could
lead to severe consequences (e.g., a slight shift of the edge
highlight could make the user fall). The system also needs to
tolerate the user’s body (e.g., hand, head) movement when
walking on stairs, which requires a tradeoff between speed
and stabilization. While many stair detection methods have
been presented in prior research [20, 58], algorithms that lo-
cate the exact position of each stair with high speed and ac-
curacy should be investigated and tested to support the stair
visualization systems we designed for PLV.
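The speed-versus-stabilization tradeoff above can be made concrete with a simple temporal filter. The sketch below (in Python; `EdgeStabilizer` and its `alpha` parameter are our illustrative names, not part of any system described here) smooths noisy per-frame stair-edge detections with an exponential moving average: a smaller `alpha` yields a steadier highlight at the cost of lag when the user moves.

```python
class EdgeStabilizer:
    """Smooths noisy per-frame stair-edge positions with an exponential
    moving average. Lower alpha -> more stabilization but more lag;
    higher alpha -> faster response but more jitter in the highlight."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None  # last smoothed (x, y) edge position

    def update(self, measurement):
        """Blend the new detection with the running estimate."""
        if self.state is None:
            self.state = tuple(measurement)
        else:
            self.state = tuple(
                self.alpha * m + (1 - self.alpha) * s
                for m, s in zip(measurement, self.state)
            )
        return self.state

# Simulated noisy detections of one stair edge's vertical position (cm):
stabilizer = EdgeStabilizer(alpha=0.3)
noisy = [100.0, 101.5, 99.2, 100.8, 99.6]
smoothed = [stabilizer.update((0.0, y))[1] for y in noisy]
```

The smoothed positions vary far less than the raw detections, so a projected highlight driven by them would not visibly jump between frames.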
The system implementation should also take into account different real-world situations. Our evaluation was conducted indoors, with no other people around. However, the real world could be much more complicated, raising all kinds of challenges. For example, AR visualizations could be less visible outdoors, crowded stairs could diminish the accuracy of stair recognition because the stair edges are blocked, and the projected highlights may also disturb other people. In future work, we will consider these real-world challenges when developing AR stair navigation systems. For example, besides recognizing stairs with computer vision, we will consider instrumenting the environment (e.g., using RFID) to foster accurate and fast stair recognition in a complex environment. We will also add face detection to avoid projecting onto bystanders’ faces.
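To illustrate the bystander-safety idea, the hypothetical helper below masks out regions of the projector frame that overlap detected faces. It assumes face bounding boxes are already available from some detector; the function name, padding scheme, and mask representation are ours, not the paper's.

```python
def build_projection_mask(width, height, face_boxes, pad=10):
    """Build a binary mask for a projector frame: 1 = project, 0 = suppress.
    face_boxes are (x, y, w, h) rectangles from any face detector; each box
    is expanded by `pad` pixels so projected light stays clear of faces."""
    mask = [[1] * width for _ in range(height)]
    for (x, y, w, h) in face_boxes:
        for row in range(max(0, y - pad), min(height, y + h + pad)):
            for col in range(max(0, x - pad), min(width, x + w + pad)):
                mask[row][col] = 0
    return mask

# One detected face at (20, 10), 8x8 pixels, in a 64x48 projector frame:
mask = build_projection_mask(64, 48, [(20, 10, 8, 8)], pad=2)
```

The renderer would then multiply each highlight pixel by this mask before projection, keeping the stair highlights intact everywhere except around faces.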
As with any study, ours had some limitations. First, the HoloLens’s weight strongly diminished PLV’s experiences, which may have influenced our results. Future studies should refine and evaluate the design on more lightweight smartglasses. Second, because of the extreme head pitch required to view the closest stairs, a consequence of the HoloLens’s small vertical FOV, we designed visualizations in the users’ central vision instead of adding highlights to the stairs in our smartglasses prototype.

Figure 10: P9’s gaze direction when walking downstairs in two conditions: using HoloLens without visualizations and using his preferred visualizations on HoloLens. The x-axis represents each stair, while the y-axis represents the angle between the participant’s gaze direction and the horizontal surface. When the participant looks up (down), the angle is positive (negative).

More data could be collected to
quantify the head pitch angle to determine an effective vertical FOV that allows PLV to use the stair highlights with a comfortable head pose. Third, we asked participants to score their feeling of psychological security, but these results could have been influenced by a novelty effect. Future research should consider more objective measurements (e.g., biometrics) to evaluate psychological security.
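The gaze angle plotted in Figure 10 (the angle between the gaze direction and the horizontal surface, positive when looking up) can be derived from a headset's gaze ray with basic trigonometry. A minimal sketch, assuming a y-up coordinate system; this is not the paper's actual logging code:

```python
import math

def gaze_pitch_deg(gaze_vector):
    """Angle between a 3D gaze direction and the horizontal plane, in
    degrees. Positive when looking up, negative when looking down,
    assuming the y axis points up."""
    x, y, z = gaze_vector
    horizontal = math.hypot(x, z)  # projection onto the horizontal plane
    return math.degrees(math.atan2(y, horizontal))

# Looking straight ahead, then tilting down 45 degrees toward a stair:
level = gaze_pitch_deg((0.0, 0.0, 1.0))
down = gaze_pitch_deg((0.0, -1.0, 1.0))
```

Logging this angle per stair is what produces curves like those in Figure 10.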
In this paper, we designed AR visualizations to facilitate stair navigation for people with low vision. We designed visualizations (and sonification) for both projection-based AR and smartglasses based on the different characteristics of these platforms. We evaluated the designs on each platform with 12 participants, finding that the visualizations on both platforms increased participants’ psychological security, making them feel safer and more confident when walking on stairs. Moreover, our design for projection-based AR showed a trend toward significantly reducing participants’ walking time on stairs.
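For readers who want to probe such a “trend toward significance” on paired walking times without distributional assumptions, one option is a sign-flip permutation test. The sketch below uses made-up numbers and is not the statistical test reported in the paper:

```python
import random

def signflip_pvalue(times_a, times_b, n_perm=10000, seed=0):
    """Two-sided sign-flip permutation test on paired differences.
    Returns the probability of a mean difference at least as extreme
    as the observed one under the null of no condition effect."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(times_a, times_b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        flipped = [d * rng.choice((-1, 1)) for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical per-participant walking times (s), without vs. with AR:
baseline = [12.1, 10.5, 14.2, 11.8, 13.0, 12.6]
with_ar = [10.9, 10.1, 12.8, 11.0, 12.2, 11.5]
p = signflip_pvalue(baseline, with_ar)
```

Because the test only flips the signs of within-participant differences, it respects the paired study design and needs no normality assumption.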
This work was supported in part by the National Science
Foundation under grant no. IIS-1657315. Feiner was funded
in part by the National Science Foundation under grant no.
[1] Abu-Faraj, Z.O. et al. 2012. Design and development
of a prototype rehabilitative shoes and spectacles for
the blind. 2012 5th International Conference on
Biomedical Engineering and Informatics, BMEI 2012
(2012), 795–799.
[2] Aguerrevere, D. et al. 2004. Portable 3D Sound / Sonar
Navigation System for Blind Individuals. 2nd LACCEI
Int. Latin Amer. Caribbean Conf. (2004).
[3] Ahmetovic, D. et al. 2017. Achieving Practical and
Accurate Indoor Navigation for People with Visual
Impairments. Proceedings of the 14th Web for All
Conference on The Future of Accessible Work - W4A
’17 (New York, New York, USA, 2017), 1–10.
[4] Ahmetovic, D. et al. 2016. NavCog: A Navigational
Cognitive Assistant for the Blind. Proceedings of the
18th International Conference on Human-Computer
Interaction with Mobile Devices and Services (2016),
[5] Archea, J.C. 1985. Environmental Factors Associated with Stair Accidents by the Elderly. Clinics in Geriatric Medicine. 1, 3 (Aug. 1985), 555
[6] Berger, S. and Porell, F. 2008. The Association
Between Low Vision and Function. Journal of Aging
and Health. 20, 5 (Aug. 2008), 504–525.
[7] Bhowmick, A. et al. 2014. IntelliNavi: Navigation for
Blind Based on Kinect and Machine Learning.
Springer, Cham. 172–183.
[8] Bibby, S.A. et al. 2007. Vision and self-reported
mobility performance in patients with low vision.
Clinical and Experimental Optometry. 90, 2 (Mar.
2007), 115–123.
[9] Bimber, O. and Frohlich, B. 2002. Occlusion shadows:
Using projected light to generate realistic occlusion
effects for view-dependent optical see-through
displays. Proceedings - International Symposium on
Mixed and Augmented Reality, ISMAR 2002 (2002),
[10] Black, A.A. et al. 1997. Mobility Performance with
Retinitis Pigmentosa. Clinical and experimental
optometry. 80, 1 (Jan. 1997), 1–12.
[11] Blindness and Visual Impairment: 2017.
Accessed: 2018-09-14.
[12] Blum, J.R. et al. 2011. What’s around me? Spatialized
audio augmented reality for blind users with a
smartphone. International Conference on Mobile and
Ubiquitous Systems: Computing, Networking, and
Services. (2011), 49–62.
[13] BOptom, R.Q.I. et al. 1998. Visual Impairment and
Falls in Older Adults: The Blue Mountains Eye Study.
Journal of the American Geriatrics Society. 46, 1 (Jan.
1998), 58–64.
[14] Bouzit, M. et al. 2004. Tactile feedback navigation
handle for the visually impaired. IMECE2004 (Jan.
2004), 1–7.
[15] Campbell, M. et al. 2014. Where’s My Bus Stop?
Supporting Independence of Blind Transit Riders with
StopInfo. ASSETS ’14 Proceedings of the 16th
international ACM SIGACCESS conference on
Computers & accessibility. (2014), 11–18.
[16] Cao, X. and Balakrishnan, R. 2006. Interacting with
dynamically defined information spaces using a
handheld projector and a pen. the 19th annual ACM
symposium on User interface software and technology
(2006), 225.
[17] Capi, G. and Toda, H. 2011. A new robotic system to
assist visually impaired people. IEEE International
Workshop on Robot and Human Interactive
Communication (2011), 259–263.
[18] Choi, J. and Kim, G.J. 2013. Usability of one-handed
interaction methods for handheld projection-based
augmented reality. Personal and Ubiquitous
Computing. 17, 2 (Feb. 2013), 399–409.
[19] Cimarolli, V.R. et al. 2012. Challenges faced by older
adults with vision loss: a qualitative study with
implications for rehabilitation. Clinical Rehabilitation.
26, 8 (Aug. 2012), 748–757.
[20] Cloix, S. et al. 2016. Low-power depth-based
descending stair detection for smart assistive devices.
Eurasip Journal on Image and Video Processing. 2016,
1 (2016).
[21] Common Types of Low Vision:
vision?sso=y. Accessed: 2015-07-07.
[22] Cox, A. et al. 2005. Visual impairment in elderly
patients with hip fracture: causes and associations. Eye
(London, England). 19, 6 (Jun. 2005), 652–656.
[23] Cummings, S.R. et al. 1995. Risk Factors for Hip
Fracture in White Women. New England Journal of
Medicine. 332, 12 (Mar. 1995), 767–774.
[24] Dakopoulos, D. and Bourbakis, N.G. 2010. Wearable
Obstacle Avoidance Electronic Travel Aids for Blind:
A Survey. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews). 40, 1
(Jan. 2010), 25–35.
[25] Dougherty, B.E. et al. 2011. Abandonment of low-
vision devices in an outpatient population. Optometry
and vision science : official publication of the
American Academy of Optometry. 88, 11 (Nov. 2011),
[26] Everingham, M.R. et al. 1999. Head-mounted mobility
aid for low vision using scene classification techniques.
International Journal of Virtual Reality. 3, (1999), 3
[27] Fiannaca, A. et al. 2014. Headlock: a Wearable
Navigation Aid that Helps Blind Cane Users Traverse
Large Open Spaces. Proceedings of ASSETS ’14
(2014), 19–26.
[28] Filipe, V. et al. 2012. Blind Navigation Support System
based on Microsoft Kinect. Procedia Computer
Science. 14, (Jan. 2012), 94–101.
[29] Hara, K. et al. 2015. Improving Public Transit
Accessibility for Blind Riders by Crowdsourcing Bus
Stop Landmark Locations with Google Street View:
An Extended Analysis. ACM Transactions on
Accessible Computing. 6, 2 (2015), 1–23.
[30] Harms, H. et al. 2015. Detection of ascending stairs
using stereo vision. IEEE International Conference on
Intelligent Robots and Systems (2015),
[31] Harwood, R.H. et al. 2005. Falls and health status in
elderly women following first eye cataract surgery: a
randomised controlled trial. The British journal of
ophthalmology. 89, 1 (Jan. 2005), 53–59.
[32] Harwood, R.H. 2001. Visual problems and falls. Age
and Ageing. 30, SUPPL. 4 (Nov. 2001), 13–18.
[33] Hicks, S.L. et al. 2013. A Depth-Based Head-Mounted
Visual Display to Aid Navigation in Partially Sighted
Individuals. PLoS ONE. 8, 7 (Jul. 2013), e67695.
[34] Huang, H.-C. et al. 2015. An Indoor Obstacle
Detection System Using Depth Information and Region
Growth. Sensors. 15, 10 (2015), 27116–27141.
[35] Huang, J. et al. 2019. An augmented reality sign-
reading assistant for users with reduced vision. PLOS
ONE. 14, 1 (Jan. 2019), e0210630.
[36] Hub, A. et al. Augmented Indoor Modeling for
Navigation Support for the Blind.
[37] Image Targets:
Target-Guide. Accessed: 2019-07-04.
[38] Ivanov, R. 2010. Indoor navigation system for visually
impaired. The 11th International Conference on
Computer Systems and Technologies and Workshop for
PhD Students in Computing on International
Conference on Computer Systems and Technologies
(2010), 143.
[39] Kanwal, N. et al. 2015. A Navigation System for the
Visually Impaired: A Fusion of Vision and Depth
Sensor. Applied Bionics and Biomechanics. 2015,
(Aug. 2015), 1–16.
[40] Khambadkar, V. and Folmer, E. 2013. GIST: a
Gestural Interface for Remote Nonvisual Spatial
Perception. the 26th annual ACM symposium on User
interface software and technology (2013), 301–310.
[41] Kinateder, M. et al. 2018. Using an Augmented Reality
Device as a Distance-based Vision Aid – Promise and
Limitations. Optometry and Vision Science. 95, 9
(2018), 727.
[42] Kiyokawa, K. et al. An optical see-through display for
mutual occlusion of real and virtual environments.
Proceedings IEEE and ACM International Symposium
on Augmented Reality (ISAR 2000), 60–67.
[43] Kiyoshi Kiyokawa et al. 2003. An Occlusion-Capable
Optical See-through Head Mount Display for
Supporting Co-located Collaboration. Proceedings of
the 2nd IEEE/ACM International Symposium on Mixed
and Augmented Reality (2003), 133.
[44] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
function, visual attention, and mobility performance in
low vision. Optometry and Vision Science. 85, 11
(Nov. 2008), 1049–1056.
[45] Leat, S.J. and Lovie-Kitchin, J.E. 2008. Visual
Function, Visual Attention, and Mobility Performance
in Low Vision. Optometry and vision science : official
publication of the American Academy of Optometry.
85, 11 (2008), 1049–1056.
[46] Legge, G.E. et al. 2013. Indoor Navigation by People
with Visual Impairment Using a Digital Sign System.
PLoS ONE. 8, 10 (Oct. 2013), e76783.
[47] Legge, G.E. et al. 2010. Visual accessibility of ramps
and steps. Journal of Vision. 10, 11 (Sep. 2010), 88.
[48] Liu, H. et al. 2015. iSee: obstacle detection and
feedback system for the blind. Proceedings of the 2015
ACM International Joint Conference on Pervasive and
Ubiquitous Computing and Proceedings of the 2015
ACM International Symposium on Wearable
Computers - UbiComp ’15. (2015), 197–200.
[49] Magic Leap:
[50] Mascetti, S. et al. 2016. ZebraRecognizer: Pedestrian
crossing recognition for people with visual impairment
or blindness. Pattern Recognition. 60, (Dec. 2016),
[51] McLeod, P. et al. 1988. Visual Search for a
Conjunction of Movement and Form is parallel.
Nature. 336, (1988), 403–405.
[52] Meers, S. and Ward, K. 2005. A Substitute Vision
System for Providing 3D Perception and GPS
Navigation via Electro-Tactile Stimulation.
International Conference on Sensing Technology.
(Nov. 2005), 551–556.
[53] Meijer, P.B.L. 1992. An experimental system for
auditory image representations. IEEE Transactions on
Biomedical Engineering. 39, 2 (1992), 112–121.
[54] Menikdiwela, M.P. et al. 2013. Haptic based walking
stick for visually impaired people. 2013 International
conference on Circuits, Controls and Communications
(CCUBE) (Dec. 2013), 1–6.
[55] Microsoft HoloLens | Official Site:
Accessed: 2015-07-07.
[56] Miyasike-daSilva, V. et al. 2019. A role for the lower
visual field information in stair climbing. Gait &
Posture. 70, (May 2019), 162–167.
[57] Munoz, R. et al. 2016. Depth-aware indoor staircase
detection and recognition for the visually impaired.
2016 IEEE international conference on multimedia &
expo workshops (ICMEW) (2016), 1–6.
[58] Murakami, S. et al. 2014. Study on stairs detection
using RGB-depth images. 2014 Joint 7th International
Conference on Soft Computing and Intelligent Systems,
SCIS 2014 and 15th International Symposium on
Advanced Intelligent Systems, ISIS 2014 (2014), 1186–1191.
[59] Perez-Yus, A. et al. 2015. Stair Detection and
Modelling from a Wearable Depth Camera. (2015),
[60] Perez-Yus, A. et al. 2017. Stairs detection with
odometry-aided traversal from a wearable RGB-D
camera. Computer Vision and Image Understanding.
154, (2017), 192–205.
[61] Pinhanez, C. 2001. The Everywhere Displays
Projector: A Device to Create Ubiquitous Graphical
Interfaces. International conference on ubiquitous
computing. Springer, Berlin, Heidelberg. 315–331.
[62] Priyadarshini, A.R. 2014. Dual Objective Based Navigation Assistance to the Blind and Visually Impaired. International Journal of Innovative Research in Computer and Communication Engineering. 2, 5 (2014), 4335–4342.
[63] Rapp, S. et al. 2004. Spotlight Navigation: Interaction with a Handheld Projection Device. Advances in Pervasive Computing (2004), 397–400.
[64] van Rheede, J.J. et al. 2015. Improving mobility
performance in low vision with a distance-based
representation of the visual scene. Investigative
Ophthalmology and Visual Science. 56, 8 (2015),
4802–4809.
[65] Salber, D. and Coutaz, J. 1993. Applying the Wizard of
Oz Technique to the Study of Multimodal Systems.
Proceedings of EWHCI. (1993), 219–230.
[66] Saldana, J. 2010. The Coding Manual for Qualitative
Researchers. The qualitative report. 15, 3 (2010), 754
[67] Samsung I8530 Galaxy Beam:
am-4566.php. Accessed: 2019-03-26.
[68] Shahrabadi, S. et al. 2013. Detection of indoor and
outdoor stairs. Iberian Conference on Pattern
Recognition and Image Analysis (2013), 847–854.
[69] Shinohara, K. and Wobbrock, J.O. 2011. In the shadow
of misperception: assistive technology use and social
interactions. Proceedings of the 2011 annual
conference on Human factors in computing systems
(2011), 705–714.
[70] Shoval, S. et al. 1994. Mobile robot obstacle avoidance
in a computerized travel aid for the blind. Proceedings
of the 1994 IEEE International Conference on Robotics
and Automation (1994), 2023–2028.
[71] Shoval, S. et al. 2003. NavBelt and the GuideCane.
IEEE Robotics and Automation Magazine. 10, 1 (Mar.
2003), 9–20.
[72] Summary Health Statistics for the U.S. Population:
National Health Interview Survey, 2004.: 2004.
df. Accessed: 2015-05-03.
[73] Szpiro, S. et al. 2016. Finding a store, searching for a
product: a study of daily challenges of low vision
people. Proceedings of the 2016 ACM International
Joint Conference on Pervasive and Ubiquitous
Computing. (2016), 61–72.
[74] Szpiro, S. et al. 2016. How People with Low Vision
Access Computing Devices: Understanding Challenges
and Opportunities. Proceedings of the 18th
International ACM SIGACCESS Conference on
Computers and Accessibility (2016), 171–180.
[75] Tjan, B.S. et al. 2005. Digital Sign System for Indoor
Wayfinding for the Visually Impaired. 2005 IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR’05) - Workshops, 3030.
[76] Ulrich, I. and Borenstein, J. 2001. The GuideCane-
applying mobile robot technologies to assist the
visually impaired. IEEE Transactions on Systems,
Man, and Cybernetics - Part A: Systems and Humans.
31, 2 (Mar. 2001), 131–136.
[77] Vera, P. et al. 2014. A smartphone-based virtual white
cane. Pattern Analysis and Applications. 17, 3 (Aug.
2014), 623–632.
[78] Wahab, M.H.A. et al. 2011. Smart Cane: Assistive
Cane for Visually-impaired People. IJCSI International
Journal of Computer Science Issues. 8, 4 (2011), 21–27.
[79] Wang, S. and Tian, Y. 2012. Detecting stairs and
pedestrian crosswalks for the blind by RGBD camera.
2012 IEEE International Conference on Bioinformatics
and Biomedicine Workshops (Oct. 2012), 732–739.
[80] West, C.G. et al. 2002. Is Vision Function Related to
Physical Functional Ability in Older Adults? Journal of
the American Geriatrics Society. 50, 1 (Jan. 2002),
136–145.
[81] What Are Low Vision Optical Devices?
devices/1235. Accessed: 2015-10-11.
[82] Willis, K.D.D. and Poupyrev, I. 2011. MotionBeam: A
Metaphor for Character Interaction with Handheld
Projectors. the SIGCHI Conference on Human Factors
in Computing Systems (2011), 1031–1040.
[83] Yantis, S. and Jonides, J. 1990. Abrupt visual onsets
and selective attention: Voluntary versus automatic
allocation. Journal of Experimental Psychology:
Human Perception and Performance. 16, 1 (1990),
[84] Zhao, Y. et al. 2016. CueSee : Exploring Visual Cues
for People with Low Vision to Facilitate a Visual
Search Task. International Joint Conference on
Pervasive and Ubiquitous Computing (2016), 73–84.
[85] Zhao, Y. et al. 2015. ForeSee: A Customizable Head-
Mounted Vision Enhancement System for People with
Low Vision. The 17th International ACM SIGACCESS
Conference on Computers and Accessibility. (2015),
[86] Zhao, Y. et al. 2018. “It Looks Beautiful but Scary:”
How Low Vision People Navigate Stairs and Other
Surface Level Changes. Proceedings of the 20th
International ACM SIGACCESS Conference on
Computers and Accessibility - ASSETS ’18 (New York,
New York, USA, 2018), 307–320.
[87] Zhao, Y. et al. 2017. Understanding Low Vision
People’s Visual Perception on Commercial Augmented
Reality Glasses. Proceedings of the 2017 CHI
Conference on Human Factors in Computing Systems.
(2017), 4170–4181.
[88] Zhou, F. et al. 2008. Trends in augmented reality
tracking, interaction and display: A review of ten years
of ISMAR. Proceedings of the 7th IEEE International
Symposium on Mixed and Augmented Reality 2008,
ISMAR 2008 (Sep. 2008), 193–202.
... In Imaginary Reality (IR) [7], perception is complemented by mental imagery, either partially [7,92] or fully [67], making action effective despite the scarcity or absence of the stimuli needed for sensorimotor coupling. These emerging environments, worlds and, ultimately, realities afford new sensorimotor experiences, such as amplified perception [2,39,106] and enhanced motor skills [62,73], but also perception that is diminished [65] and motricity that is reduced [55] on purpose. Users' sensorimotor abilities are repurposed in a process of mediation to support new skills and interactive experiences. ...
... The ways in which the world is perceived via the human senses and in which motor action is used to interact, manipulate, and model the world determine diverse manifestations of SRs. The world can be synthesized, mediated, or supported by computer technology from simple cues to understand, navigate, and interact with the world [43,106] to complex sensitive, adaptive, and responsive designs of smart environments [13]. Of a particular interest is ambient media [33,101] that define the communication of information in ambient intelligence environments [45], Azuma's [6] perspective on AR as a new form of media to address the experiential challenge of hybrid environments, and a recent work [91] that highlighted the similarities and overlap between the philosophies and visions of computing of augmented/mixed reality and ambient intelligence environments. ...
... The wayfinding problem is "the global problem of planning and following routes from place to place" [36]. However, our system solves this problem since it is able to guide a blind person to various directions using the left and right bracelets and a mobile application. ...
Full-text available
Every day, we engage in a variety of activities such as shopping, reading, swimming, and so on. Many people in our community, however, are unable to participate in such activities, due to a variety of eye problems. Directing a blind person to the optimal position (the center of a spot where there is enough space in all directions such that a blind person avoids various obstacles) is a challenge. This paper proposes wireless bracelets that are able to guide a blind person to the optimal position. The proposed system employs ultrasonic sensors in order to detect various obstacles in an indoor environment. It also makes use of the Firebase database and NodeMCU WiFi module to enable real-time communication with a blind individual. Furthermore, the suggested system includes a novel fall-detection mechanism. The proposed Internet of Things (IoT) system is evaluated in an indoor environment. Experiment results showed that the proposed system could efficiently direct a blind person to the optimal position. In comparison to the current state of the art, the proposed system is simpler, less expensive, and more efficient in determining the optimal position to which a blind person must navigate.
... They found that most users experienced less mental and physical demand as well as lower frustration using a smartglass display when compared with a mobile phone or a traditional navigation app. In addition, for people with disabilities, AR HMDs can be highly advantageous in assisting mobility and navigation, as opposed to using a hand-held device [32,58,96,97]. For example, individuals with low vision can often utilize their remaining vision to read the AR navigation information or navigate by a multi-modal interface in an ergonomic posture that is not possible with handheld devices. ...
Full-text available
Daily travel usually demands navigation on foot across a variety of different application domains, including tasks like search and rescue or commuting. Head-mounted augmented reality (AR) displays provide a preview of future navigation systems on foot, but designing them is still an open problem. In this paper, we look at two choices that such AR systems can make for navigation: 1) whether to denote landmarks with AR cues and 2) how to convey navigation instructions. Specifically, instructions can be given via a head-referenced display (screen-fixed frame of reference) or by giving directions fixed to global positions in the world (world-fixed frame of reference). Given limitations with the tracking stability, field of view, and brightness of most currently available head-mounted AR displays for lengthy routes outdoors, we decided to simulate these conditions in virtual reality. In the current study, participants navigated an urban virtual environment and their spatial knowledge acquisition was assessed. We experimented with whether or not landmarks in the environment were cued, as well as how navigation instructions were displayed (i.e., via screen-fixed or world-fixed directions). We found that the world-fixed frame of reference resulted in better spatial learning when there were no landmarks cued; adding AR landmark cues marginally improved spatial learning in the screen-fixed condition. These benefits in learning were also correlated with participants' reported sense of direction. Our findings have implications for the design of future cognition-driven navigation systems.
... The latest AR smart glasses are fully wearable devices with computational functions, providing various functionalities by freeing the user's hands [32]. For instance, Vuzix 2 developed AR smart glasses for navigation in unknown areas, while Zhao et al. developed an AR assistive navigation device [33]. Recently, Facebook has partnered with Ray-Ban and launched their Ray-Ban stories, which have raised important questions about ethical and privacy issues [34]. ...
Full-text available
Augmented and Virtual Reality (AR, VR), collectively known as Extended Reality (XR), are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with AR and VR, they generate extensive behavioral data usually leveraged for measuring human activity, which could be used for profiling users’ identities or personal information (e.g., gender). However, several factors affect the efficiency of profiling, such as the technology employed, the action taken, the mental workload, the presence of bias, and the sensors available. To date, no study has considered all of these factors together and in their entirety, limiting the current understanding of XR profiling. In this work, we provide a comprehensive study on user profiling in virtual technologies (AR, VR). Specifically, we employ machine learning on behavioral data (i.e., head, controllers, and eye data) to identify users and infer their individual attributes (i.e., age, gender). Toward this end, we propose a general framework that can potentially infer any personal information from any virtual scenarios. We test our framework on eleven generic actions (e.g., walking, searching, pointing) involving low and high mental loads, derived from two distinct use cases: an AR everyday application (34 participants) and VR robot teleoperation (35 participants). Our framework limits the burden of creating technology- and actiondependent algorithms, also reducing the experimental bias evidenced in previous work, providing a simple (yet effective) baseline for future works. We identified users up to 97% F1-score in VR and 80% in AR. Gender and Age inference was also facilitated in VR, reaching up to 82% and 90% F1-score, respectively. 
Through an in-depth analysis of sensors’ impact, we found VR profiling resulting more effective than AR mainly because of the eye sensors’ presence.
Full-text available
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Prior research on visual impairments has documented specific challenges that people with low vision face such as reading and mobility. Yet, much less focus has been given to the relationships between seemingly separate challenges such as mobility and social interactions; limiting the potential of services and assistive technologies for people with low vision. To address this gap, we conducted semi-structured interviews with 30 low vision participants and examined the relationships between challenges and coping strategies overarching three facets of life - functional, psychological, and social. We found that challenges in a specific area of life commonly interacted and impacted other facets of life and provide a conceptual map of these relationship. For example, challenges in mobility reduced social interactions, which in turn affected the psychological well-being. Moreover, participants repeatedly described how a seemingly specific functional challenge (i.e., seeing under different lighting conditions) influenced a wide range of activities, from mobility (e.g., seeing obstacles) to social interactions (e.g., seeing faces and interpreting non-verbal cues). Our results highlight the importance of considering the interrelationships between different facets of life for assistive technology development and evaluation.
Detecting and avoiding obstacles while navigating can pose a challenge for people with low vision, but augmented reality (AR) has the potential to assist by enhancing obstacle visibility. Perceptual and user experience research is needed to understand how to craft effective AR visuals for this purpose. We developed a prototype AR application capable of displaying multiple kinds of visual cues for obstacles on an optical see-through head-mounted display. We assessed the usability of these cues via a study in which participants with low vision navigated an obstacle course. The results suggest that 3D world-locked AR cues were superior to directional heads-up cues for most participants during this activity.
Full-text available
People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
Conference Paper
Full-text available
Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and through two city blocks. The tasks involved walking in- and outdoors, across four staircases and two city blocks. We found that surface level changes were a source of uncertainty and even fear for all participants. Besides the white cane that many participants did not want to use, participants did not use technology in the study. Participants mostly used their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface level changes and other depth-perception tasks; they should consider low vision people's distinct experiences from blind people, their sensitivity to different lighting conditions, and leverage visual enhancements.
SIGNIFICANCE: For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids.

PURPOSE: The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss.

METHODS: An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose, and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence on these tasks were compared with and without augmented vision, along with their subjective responses about ease of mobility.

RESULTS: In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and, to a lesser degree, in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group.

CONCLUSIONS: Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
People with low vision have a visual impairment that affects their ability to perform daily activities. Unlike blind people, low vision people have functional vision and can potentially benefit from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on mainstream commercial augmented reality (AR) glasses, despite their visual limitations and the device's constraints. We conducted a study with 20 low vision participants and 18 sighted controls, asking them to identify virtual shapes and text in different sizes, colors, and thicknesses. We also evaluated their ability to see the virtual elements while walking. We found that low vision participants were able to identify basic shapes and read short phrases on the glasses while sitting and walking. Identifying virtual elements had a similar effect on low vision and sighted people's walking speed, slowing it down slightly. Our study yielded preliminary evidence that mainstream AR glasses can be powerful accessibility tools. We derive guidelines for presenting visual output for low vision people and discuss opportunities for accessibility applications on this platform.
Methods that provide accurate navigation assistance to people with visual impairments often rely on instrumenting the environment with specialized hardware infrastructure. In particular, approaches that use sensor networks of Bluetooth Low Energy (BLE) beacons have been shown to achieve precise localization and accurate guidance while the structural modifications to the environment are kept at minimum. To install navigation infrastructure, however, a number of complex and time-critical activities must be performed. The BLE beacons need to be positioned correctly and samples of Bluetooth signal need to be collected across the whole environment. These tasks are performed by trained personnel and entail costs proportional to the size of the environment that needs to be instrumented. To reduce the instrumentation costs while maintaining a high accuracy, we improve over a traditional regression-based localization approach by introducing a novel, graph-based localization method using Pedestrian Dead Reckoning (PDR) and particle filter. We then study how the number and density of beacons and Bluetooth samples impact the balance between localization accuracy and set-up cost of the navigation environment. Studies with users show the impact that the increased accuracy has on the usability of our navigation application for the visually impaired.
Assistive technologies aim to improve the personal mobility of individuals with disabilities, increasing their independence and their access to social life. They include mechanical mobility aids, which are increasingly used by older people who rely on them. However, these devices might fail to prevent falls when approaching hazards are underestimated. Stairs and curbs are among the potential dangers present in urban environments and living accommodations that increase the risk of an accident. We present and evaluate a low-complexity algorithm to detect descending stairs and curbs of any shape, specifically designed for low-power real-time embedded platforms. Using a passive stereo camera, as opposed to a 3D active sensor, we assessed detection accuracy, processing time, and power consumption. Our goal is to decide among three possible situations (safe, dangerous, and potentially unsafe); we distinguish more than 94% of dangers from safe scenes, with a 91% overall recognition rate, even at very low resolution. This is accomplished in real time, with robustness to indoor and outdoor lighting conditions. We show that our method can run for a day on a smartphone battery.
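The three-way decision this abstract describes (safe, potentially unsafe, dangerous) can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the function name, the drop-height thresholds, and the 1-D ground-height profile are illustrative assumptions, not the paper's stereo-vision pipeline.

```python
# Illustrative sketch (not the paper's algorithm): classify a forward
# ground-height profile, such as one recovered from stereo depth, into
# three hazard levels based on the largest sudden drop ahead.
# Thresholds (5 cm, 15 cm) are hypothetical placeholders.

SAFE, POTENTIALLY_UNSAFE, DANGEROUS = "safe", "potentially unsafe", "dangerous"

def classify_drop(ground_heights_cm, minor_cm=5.0, major_cm=15.0):
    """ground_heights_cm: ground height samples (cm) along the walking
    direction, nearest first. A large negative step between adjacent
    samples suggests a descending stair or curb edge."""
    max_drop = 0.0
    for near, far in zip(ground_heights_cm, ground_heights_cm[1:]):
        drop = near - far          # positive when the ground falls away
        max_drop = max(max_drop, drop)
    if max_drop >= major_cm:
        return DANGEROUS           # e.g., a full stair step down
    if max_drop >= minor_cm:
        return POTENTIALLY_UNSAFE  # e.g., a low curb
    return SAFE

# Example profiles: flat walkway, ~8 cm curb, ~20 cm stair edge.
print(classify_drop([0, 0, 0, 0]))
print(classify_drop([0, 0, -8, -8]))
print(classify_drop([0, 0, -20, -20]))
```

A real system would derive the height profile from stereo disparity and smooth it against sensor noise; the sketch only shows how a continuous depth measurement can be collapsed into the three discrete situations the abstract mentions.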
Background: Locomotion on stairs is challenging for balance control and relates to a significant number of injurious falls. The visual system provides relevant information to guide stair locomotion, and there is evidence that peripheral vision is potentially important.

Research question: This study investigated the role of lower visual field information in the control of stair walking. It was hypothesized that restriction of the lower visual field (LVF) would significantly affect gaze and locomotor behaviour, specifically during descent and during transition phases, emphasizing the importance of LVF information for online control.

Methods: Healthy young adults (n = 12) ascended and descended a 7-step staircase while wearing customized goggles that restricted the LVF. Three visual conditions were tested: full field of view (FULL), 30° (MILD), and 15° (SEVERE) of lower field of view available. Stride time, head pitch angle, and handrail use were assessed during the approach, transition steps (two steps at the top and bottom of the stairs), and middle step phases.

Results: Transient downward head pitch angle increased with LVF restriction, while walking speed decreased and handrail use increased. Occlusion impaired stair descent more strongly than ascent, reflected by larger downward head pitch angles and slower walk times. LVF restriction had a greater influence on stride time and head angle during the approach and first transition than in other stair regions.

Significance: Information from the lower visual field is important for guiding stair walking, particularly when negotiating the first few steps of a staircase. Restricting the lower visual field during stair walking results in more cautious locomotor behaviour, such as walking more slowly and using the handrails. In daily activities, tasks or conditions that restrict or alter lower visual field information may elevate the risk of missteps and falls.
Low vision is a pervasive condition in which people have difficulty seeing even with corrective lenses. People with low vision frequently use mainstream computing devices; however, how they use their devices to access information, and whether digital low vision accessibility tools provide adequate support, remains understudied. We addressed these questions with a contextual inquiry study. We observed 11 low vision participants using their smartphones, tablets, and computers to perform simple tasks such as reading email. We found that participants preferred accessing information visually rather than aurally (e.g., via screen readers), and juggled a variety of accessibility tools. However, accessibility tools did not provide them with appropriate support. Moreover, participants had to constantly perform multiple gestures in order to see content comfortably. These challenges made participants inefficient: they were slow and often made mistakes; even tech-savvy participants felt frustrated and not in control. Our findings reveal the unique needs of low vision people, which differ from those of people with no vision, and point to design opportunities for improving low vision accessibility tools.