iPhone video link FaceTime as an orientation tool: Remote O&M for people with vision impairment
Nicole Holmes and Kelly Prentice
Two case study participants investigated the effectiveness of the application "FaceTime" as an O&M tool via the Apple iPhone. The participants were a traveller (blind) who is an experienced long cane and guide dog user, and a qualified O&M instructor. The traveller and instructor tested FaceTime in five scenarios: shop identification, product identification in a supermarket, identification of buses at a transport interchange, orientation while free walking in residential streets, and road intersection identification. It was found that the information provided remotely by the instructor enhanced the independence of the traveller, since it could not be obtained via GPS or other means.
In recent years, people with vision
impairment have been able to access and use,
with their sighted peers, such mainstream
technologies as Global Positioning Systems
(GPS) applications (apps) and software.
Interestingly, some educational institutions
and service providers serving students with
disabilities are successfully using video
conferencing to reach those students who
would otherwise not have access to their
services (Dewald & Smyth, 2013-14; Royal
Institute for Deaf and Blind Children, 2015).
The development of accessible smartphone GPS apps for people with vision
impairment means that GPS can, with
varying degrees of inaccuracy, announce
a location, nearest cross streets, points of
interest, and give directions to a destination.
However, more recent technology, for example, 'FaceTime' on the iPhone, has the capacity to provide increasingly accurate and reliable information that might be used by people with vision impairment. This paper
explores the potential for using FaceTime
on the iPhone to assist a person with vision
impairment in a number of different day-
to-day situations in which GPS might not
be the most effective means of obtaining
accurate and reliable information.
GPS has been an evolving technology
in relation to the orientation and mobility
(O&M) of a person who is blind or vision
impaired for over 20 years. SenderoGroup was the first company to develop adaptive GPS equipment, beginning with a laptop computer carried in a backpack, followed by accessible GPS software for the BrailleNote notetaker, and more recently by GPS iPhone
apps. While this technology has proved
helpful to people with vision impairment,
it has not yet addressed particular mobility
issues, for example, identifying an object,
place, or landmark in real-time and with refined accuracy. The CEO of Sendero Group (Sendero Group, 2015) referred to the "frustrating 50 feet", a situation whereby GPS technology can orientate the user only to within approximately 50 feet (15 metres) of where they want to be. GPS can provide some independence to a traveller; however, it leaves the traveller relying on prior knowledge of the area or on members of the public to provide assistance.
Several studies have investigated the
use of video link to provide orientation to
a traveller with vision impairment. For
example, Garaj, Jirawimut, Ptasinski,
Cecelja, and Balachandran (2003) trialled
video link using two personal computers.
One computer was inside a backpack with
a camera on the traveller’s chest with a GPS
receiver on the shoulder; and the other was
a personal computer (PC) with onscreen
display used by the O&M instructor. The
instructor was located at a desk. This video
link system allowed the remote guide
(instructor) on the PC to see the traveller’s
location on a map, and also provided a video
link via cell towers. The trials revealed that
the system increased the independence of
the traveller with vision impairment on both
a macro and micro level. That is, the instructor provided the person with vision impairment with a verbal preview of the route before it was travelled, and gave warnings about landmarks as the route was travelled. Their study did not progress past the trial; however, this particular video link system appeared to improve the navigation of the participant who was blind: the remote guide (instructor) detected when the person moved off route and corrected him accordingly.
Similarly, Baranski, Polanczyk, and
Strumillo (2010) used two terminals that
were connected via cell towers. One terminal
was a wearable and compact mobile device
with a digital camera, GPS receiver, and
headset worn by the traveller; and the other
was a personal computer used by the guide
(instructor). This research used both video
link and transmission of GPS data to allow
the instructor to see the location of the
traveller on a map. The difference between
the former and this study was that the
instructor was able to navigate the traveller
by controlling the GPS system while also
warning him of hazardous obstacles.
A further study, by Scheggi, Talarico, and Prattichizzo (2014), used video camera
glasses and provided haptic feedback to the
traveller via two vibrating tactile bracelets.
This procedure allowed the remote operator
(instructor) to guide the traveller by
activating one of the two vibrating bracelets
rather than using audio. The study was
conducted in an outdoor environment and
was found to be effective for users who were
unfamiliar with the environment.
The introduction of smartphones and apps led to a number of object recognition and crowdsourcing apps to assist people with vision impairment. For example, "VizWiz" introduced the idea of taking a photo of an object, pairing it with a question, and then receiving a response via web workers (Bigham, Jayant, Miller, White, & Yeh, 2010). Similarly, "TapTapSee" allows a person to take a photo of an object; the app then compares the photo to a database of items and attempts to match it with a particular item in the database (Holton, 2015). Both these apps reduce the need for a person with vision impairment to organise
face-to-face assistance, and instead allow access to assistance at any time of the day or night. Although these two apps do not permit video link, they have the potential for independent identification of features in the environment. The most recent iPhone app for people who are blind or vision impaired is "Be My Eyes." This app allows sighted volunteers to connect with people who are blind anywhere in the world via live video link. This development means that a person who is blind can get assistance with identifying objects in their environment at any time through the iPhone or iPad (Holton, 2015).
"FaceTime" builds on the concept of identifying features in the environment. FaceTime is a free video calling capability between Apple iPhones and iPads: a live video link created between two people's iDevices. It requires only iPhones or iPads to work, and streams live video and audio between the two devices at no cost beyond the data connection. Unlike 'Be My Eyes', FaceTime allows the person who is vision impaired to connect to an assistant of their choice. Importantly, and in contrast to the studies discussed, FaceTime is accessible mainstream technology, and does not require adaptive equipment or software to run.
Method

The traveller, who is blind, is an experienced and independent traveller and a regular user of GPS information. Using her primary aids, first a guide dog and then a long cane, she employed the FaceTime application, in known and unknown environments, in the following situations that she had identified as frustrating:
Locating shop entrances
Reading shop signage
Reading bus numbers and their
destinations as they drive past
Identifying various department store
sections
Identifying obstacles on regular routes
Negotiating barriers on footpaths
Identifying complex road crossings: roundabouts, islands, angles, lights
Identifying street signs
These situations were divided into ve
scenarios and specic tasks (Table 1).
Equipment

Both the traveller and the instructor used an iPhone 4s on the Optus data service. The traveller wore a lanyard that loops around the neck and positions the camera at chest level (Figure 1), enabling her to remain hands-free. The instructor used the FaceTime application while sitting at her workstation in her office (Figure 2).
Camera Positioning

Prior to the commencement of the trial, the instructor and traveller reviewed effective ways to communicate the position of the camera so that it could capture the traveller's environment. For example, they agreed to use the base of the iPhone as a reference point: tilting the camera down brought it closer to the body, and tilting it up moved it away. Once the angle of the phone was set this way, they used left and right for scanning. Movement directions were then given using the 'clock face method' and angles, for example, "move left to 10 o'clock"; "move 90 degrees to the right."
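To make the clock-face convention concrete: 12 o'clock is straight ahead, and each hour mark corresponds to 30 degrees of rotation. The following small Swift sketch of the mapping is ours, added for illustration; the function name is hypothetical.

    // Converts a clock-face position (12 = straight ahead) into a signed
    // turn in the range -180...180 degrees: negative means turn left,
    // positive means turn right.
    func degrees(fromClockPosition hour: Int) -> Int {
        precondition((1...12).contains(hour), "clock positions run 1 to 12")
        let raw = (hour % 12) * 30         // e.g., 3 -> 90, 10 -> 300
        return raw > 180 ? raw - 360 : raw // fold 300 into -60, etc.
    }

    degrees(fromClockPosition: 10)  // -60: "move left to 10 o'clock"
    degrees(fromClockPosition: 3)   //  90: "move 90 degrees to the right"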
Table 1. Five scenarios in which the FaceTime application was used.

Scenario A. Locating Shop Entrances (and Reading Signs): The traveller will be given directions within a mall to locate a specific shop with which she is unfamiliar. She will then identify a product in a glass cabinet of the instructor's choosing.

Scenario B. Multiple Buses at the Bus Stop: At a bus bay where many buses stop, the buses and their numbers will be identified.

Scenario C. Unknown Obstacles: The traveller will use FaceTime when walking in an open area to identify an unknown obstacle, have it described by the instructor, and receive directions to walk around it.

Scenario D. Intersection Identification: The traveller will use FaceTime at an intersection to receive information about the layout before crossing the road.

Scenario E. Orientation to a Product: The traveller will locate a specific product in a glass cabinet and/or on a shelf.
Figure 1. Lanyard around the traveller's neck; iPhone screen facing inward so that the instructor's face is not visible to the public.

Figure 2. The instructor using the iPhone screen to view the traveller's environment, and talking to the traveller via headphones.
Further, during the scenarios the traveller tested two methods of scanning items: first, moving the iPhone in her hand in front of various items (hand scanning); and second, moving her body with the camera resting on her chest, suspended from the lanyard loop around her neck (Figure 1).
Results

In general, FaceTime video link with an O&M instructor appeared to enhance the traveller's independence (Table 2). When the traveller was walking, the instructor was able to provide useful information about the types of shops being approached, e.g., a hair salon and a newsagency. However, the instructor also experienced some motion sickness, although this decreased when the traveller held the phone rather than letting it hang around her neck. When the traveller was stationary, the level of detail provided by the instructor increased, as it was easier for her to see items (e.g., jewellery in a glass cabinet) and the environment (e.g., street signs) clearly.
Table 2. The effectiveness of FaceTime across the five scenarios.

Scenario A: Locating Shop Entrances (and Reading Signs)
The camera angle was important so that the instructor could clearly see the environment, signs, and shop counters in front of the traveller. Stationary orientation was required to give adequate orientation to a particular shop. The traveller preferred to be hands-free, without holding the camera, so that she could carry items and open doors; however, motion sickness was experienced by the instructor when the camera was not being held by the traveller.
The instructor was able to provide information for the traveller to locate several shop entrances. The instructor was also able to read most shop signs to help identify the type of shop the traveller was passing. If shops were unfamiliar and the signs too small, the instructor could not confidently identify the shop name. However, other clues (olfactory/auditory) would usually allow identification of the type of shop, e.g., bakery or café.
Cellular connection issues were experienced inside the shopping centre. The strategy used to overcome this problem was that the instructor communicated that she could not see the image, and the traveller moved forward until the connection was restored and she received feedback from the instructor.
There were no significant differences between long cane and guide dog travel. As expected, the dog did more of the work locating shop entrances and less environmental detail was required.

Scenario B: Multiple Buses at the Bus Stop
The instructor could detect a number of buses at a bus stand but could not read the number of a bus unless it was positioned at the front of the queue. Detection of bus numbers varied according to the bus type and the traveller's distance from the bus.
There was no difference between the primary aids in this scenario.

Scenario C: Unknown Obstacles
The instructor could detect obstacles ahead while the traveller was walking at a slow pace. The instructor did experience some motion sickness as the traveller walked, although this decreased when the traveller walked at a slower pace and held the camera.
Depending on the environment, shadows on the footpath impaired the instructor's ability to judge depth, e.g., gutters and drop-offs. While the traveller was moving, the instructor could provide gross details, e.g., parked cars or an obstacle on the path, but not fine details such as the surrounds or street signs. It was easier for the instructor to provide details about obstacles when the traveller was stationary.
When the traveller used the long cane, the information provided was detailed and technical in comparison with guide dog travel, because of the faster pace at which the dog walked.

Scenario D: Intersection Identification
The instructor could describe to the traveller the type of intersection or crossing and the direction in which she needed to cross. However, the instructor found it difficult to identify traffic flow and direction because of the high speed of vehicles and the narrow camera view.
At some intersections the instructor was able to identify the visual change between red (stop) and green (go) signals. Because of the camera's narrow field of view of the intersection, it would never be recommended that this tool be used to advise whether a car was coming or whether it was safe to cross the road.
No difference was experienced between the primary aids.

Scenario E: Orientation to a Product
Product identification was effective as long as the traveller could point the camera towards products and hold it still to allow focus.
With systematic scanning, the desired products were found on the supermarket shelf.
There was no difference between the primary aids.

Discussion

FaceTime video link with an O&M instructor appeared to enhance the independence of the traveller. Tasks such as identifying shop signs, supermarket products, and obstacles and hazards on the footpath, and providing details about buses and intersection layouts, would previously have been done face-to-face with either an O&M instructor or another person with vision. Having a video link with the instructor meant that the traveller could request assistance only when she believed it was required, and could feel confident that she was receiving correct technical orientation information.

The main difference between the present study and previous studies was in the equipment used. While previous studies used video links, these were not as portable and easy to access as FaceTime is for the end user.
No major differences were apparent
between use of a cane and a guide dog.
The instructor provided more detail about
obstacles ahead when the traveller walked
with the cane, whereas this was not required
as much when the guide dog did its job
correctly.
Providing information was easiest when the traveller was stationary, as the instructor received a clear view of what was in front of the camera and was not rushed to decipher the traveller's location. Stationary orientation was used when the traveller required a high level of detail, for example, when identifying products, shop signage, or buses at a bus stand. This result is in contrast to previous studies, in which the remote guide followed the traveller along a particular route and was able to detect whether or not she had gone off route. What previous studies did not do, and what makes the traveller with vision impairment more independent when using FaceTime, is use the technology for ad hoc tasks, for example, identifying products and bus numbers. Having the confidence to complete such tasks without needing to ask a member of the public means that the traveller can complete them independently and efficiently.
Although the study by Baranski,
Polanczyk, and Strumillo (2010) used
wearable camera technology on the traveller,
it did not increase the independence of the
traveller as the remote guide had complete
control of alerting the traveller to obstacles
in the environment, as well as controlling
the GPS system used to guide the traveller.
However, in the present study the traveller
was free to use GPS apps on the iPhone,
but this information was not controlled by
the O&M instructor. Instead, the traveller
had autonomy since she was able to travel
in the environments of her choice and could
initiate questions to the O&M instructor to
seek further information about features of
the environment.
Verbal communication in real-time
was important in the present study since it
allowed the O&M instructor to identify the
traveller’s location, and also permitted the
traveller to gain the information she needed
at any time. This result is in contrast to the
system where haptic feedback is given in
order to guide the traveller along a route.
Instructor familiarity with the environment was not analysed but would be an important element in further research. In addition, to reduce the motion sickness experienced by the instructor, the camera perhaps needs to be fixed in some way, yet able to be hand-held when required so that the traveller can use it, for example, to identify objects on shelves. It is recommended that training be provided to the traveller to explain the way the camera works and how to position the camera correctly prior to the commencement of travel.
Conclusion

Results of the current study indicate that FaceTime orientation with the iPhone might be a useful tool to increase the independence of a person with vision impairment. FaceTime orientation appears to enhance the traveller's journey by giving an additional level of information to that provided by any GPS system. FaceTime provides information in real-time and allows important two-way communication and questioning between the O&M instructor and the traveller. The O&M instructor is a trusted professional in the orientation of people who are blind or vision impaired, and thus the information provided via FaceTime might be considered of a higher standard than that provided by a member of the public. Advantages of using FaceTime include: (i) cost and resource efficiency, in that instructors are not required to provide face-to-face training; (ii) clients accessing an increasing number of environments; (iii) a cost-efficient strategy to expand an organisation's mobility services; and (iv) an increase in autonomy for the traveller.
References

Baranski, P., Polanczyk, M., & Strumillo, P. (2010). A remote guidance system for the blind. e-Health Networking Applications and Services, 386-390.

Bigham, J.P., Jayant, C., Miller, A., White, B., & Yeh, T. (2010). VizWiz: LocateIt - enabling blind people to locate objects in their environment. Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference, 65-72.

Dewald, H. P., & Smyth, C. A. (2013-14). Feasibility of O&M services for young children with vision impairment using teleintervention. International Journal of Orientation & Mobility, 6(1), 83-92.

Garaj, V., Jirawimut, R., Ptasinski, P., Cecelja, F., & Balachandran, W. (2003). A system for remote sighted guidance of visually impaired pedestrians. British Journal of Visual Impairment, 21(2), 55-63.

Holton, B. (2015). A review of the Be My Eyes Remote Sighted Helper App for Apple iOS. AccessWorld. Retrieved from http://www.afb.org/afbpress/pub.asp?DocID=aw160202

Royal Institute for Deaf and Blind Children. (2015). Services across Australia. Retrieved from http://www.ridbc.org.au/services-across-australia

Scheggi, S., Talarico, A., & Prattichizzo, D. (2014). A remote guidance system for blind and visually impaired people via vibrotactile haptic feedback. Control and Automation (MED), 20-23.

Sendero Group. (2015). Sendero Group: Accessible location and navigation. Retrieved from http://www.senderogroup.com/
Nicole Holmes, B.A. (Soc/Psych), M.Spec.Ed. (Sensory Disability), Access and Technology Officer, Guide Dogs NSW/ACT, PO Box 1965, North Sydney NSW 2059, Australia; e-mail: <nholmes@guidedogs.com.au>. Kelly Prentice, B.A. (SocSc Psych)/Teaching (Prim), M.Spec.Ed. (Sensory Disability), O&M Specialist, Guide Dogs NSW/ACT, PO Box 1965, North Sydney NSW 2059, Australia; e-mail: <kprentice@guidedogs.com.au>.