COMMENT
Meta smart glasses—large language models and the future for
assistive glasses for individuals with vision impairments
Ethan Waisberg¹, Joshua Ong², Mouayad Masalkhi³, Nasif Zaman⁴, Prithul Sarker⁴, Andrew G. Lee⁵,⁶,⁷,⁸,⁹,¹⁰,¹¹,¹² and Alireza Tavakkoli⁴
© The Author(s) 2023
Eye; https://doi.org/10.1038/s41433-023-02842-z
INTRODUCTION
In late September 2023, Meta unveiled its second generation of
smart glasses in collaboration with Ray-Ban [1]. These smart
glasses offer several improvements over the first generation,
including enhanced audio, better cameras, and a lighter design.
The glasses are equipped with an ultra-wide 12-megapixel camera
and immersive audio recording capabilities, allowing users to
capture moments with a high level of detail and depth (Fig. 1) [1, 2].
These smart glasses are part of Meta's broader effort to develop
augmented reality (AR) and virtual reality (VR) technologies. In
addition, the glasses are equipped with AI-powered assistants such
as Meta AI [1].
Ray-Ban Meta smart glasses also represent a promising
development in assistive technology for individuals with visual
impairments and have the potential to significantly enhance their
quality of life. The field of assistive technology has been
advancing rapidly in recent years, driven largely by significant
advances in artificial intelligence [3] and augmented reality [4].
Envision is currently one of the leading smart glasses developers,
and its technology articulates visual information into speech for
individuals with vision impairments. A recent update added GPT-4
integration [5], allowing users to ask the glasses specific
questions, such as summarizing a document or reading only the
vegan items from a menu. Future updates will further increase the
usefulness of this integration [6].
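To make this kind of interaction concrete, the sketch below shows how a camera frame and a natural-language instruction could be posed to a vision-language model. This is a minimal illustration assuming an OpenAI-style vision API; the model name, prompt, and image source are placeholder assumptions, not details of Envision's actual implementation.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_about_view(image_path: str, instruction: str) -> str:
    """Send a captured frame plus a spoken instruction to a vision-language model."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: the kind of request described above
print(ask_about_view("menu.jpg", "Read aloud only the vegan items on this menu."))
```

In a wearable deployment, the returned text would be passed to a speech synthesizer rather than printed.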
The Envision smart glasses are built on the Google Glass
Enterprise Edition 2 (now discontinued), and the high price of
the Google smart glasses likely posed a barrier to the adoption
of this helpful technology by individuals with vision impairment.
Lowering the cost of assistive technologies is essential,
as previous research in the UK found a staggeringly low
employment rate of 26% among blind and partially sighted
individuals of working age [7].
As Meta attempts to make smart glasses a mainstream
technology, their cost is likely to continue to decrease in
the coming years. The incorporated advanced camera technology
can provide real-time image processing, while the built-in AI can
recognize objects and convert this visual information into speech [1].
An update planned within the next year is expected to allow users to
ask Meta AI questions about what they are looking at. Users could
potentially interact with these assistants to receive auditory
information about their environment, have text read aloud, recognize
faces, or get directions, all of which can be invaluable for individuals
with visual impairments (Fig. 2). Future incorporation of GPS navigation
accompanied by audio cues could facilitate self-navigation for
individuals with visual impairments in new environments. Previous
research in the UK showed that nearly 40% of blind and partially
sighted individuals are currently unable to complete all of the
journeys that they need or wish to make [7]. Better accessibility
through the use of smart glasses can lead to greater independence
for individuals with vision impairments.
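The core loop of such an object-to-speech assistant can be sketched in a few lines: detect objects in each camera frame, then speak the result. The snippet below is a hedged illustration using the open-source ultralytics YOLO detector and the pyttsx3 offline speech engine; it is not Meta's implementation, and the model file, camera index, and confidence threshold are assumptions.

```python
import cv2                      # pip install opencv-python
import pyttsx3                  # pip install pyttsx3
from ultralytics import YOLO    # pip install ultralytics

model = YOLO("yolov8n.pt")      # small pretrained detector (assumed model file)
tts = pyttsx3.init()            # offline text-to-speech engine

camera = cv2.VideoCapture(0)    # stand-in for the glasses' camera feed
ok, frame = camera.read()
if ok:
    results = model(frame)
    # Collect the names of detected objects above a confidence threshold
    labels = {
        model.names[int(box.cls)]
        for box in results[0].boxes
        if float(box.conf) > 0.5
    }
    if labels:
        tts.say("I can see: " + ", ".join(sorted(labels)))
        tts.runAndWait()
camera.release()
```

A production system would run this continuously on the video stream and rate-limit announcements, but the detect-then-speak structure is the same.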
Meta hopes to incorporate augmented reality into future versions
of its smart glasses, and describes the current stage as a stepping
stone to true augmented reality. Users with vision impairments
would benefit greatly from true augmented reality glasses, with
potential features like magnification, contrast enhancement, and
color correction enhancing their ability to see and navigate their
surroundings more effectively. Meta's future augmented reality
work will inevitably be compared to the Apple Vision Pro, which is
also looking to make mixed reality devices mainstream [8, 9]. Further
research will also be required to minimize the variability between
different VR/AR devices prior to clinical use [10]. We look
forward to continued advances in augmented reality with AI
integration, and believe this technology can revolutionize how
individuals with vision impairments interact with the world.
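To illustrate the kind of low-vision aid such glasses could run on-device, the sketch below applies adaptive contrast enhancement (CLAHE) and simple digital magnification to a camera frame with OpenCV. It is a hedged example of the image-processing techniques named above, not a description of any shipped feature; the enhancement parameters and file names are assumptions.

```python
import cv2  # pip install opencv-python

def enhance_for_low_vision(frame, zoom: float = 2.0):
    """Boost local contrast and magnify the center of a camera frame."""
    # Contrast enhancement: apply CLAHE to the lightness channel in LAB space
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
    enhanced = cv2.merge((clahe.apply(l), a, b))
    frame = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

    # Magnification: crop the central region and scale it back to full size
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

# Example usage on a single saved frame
image = cv2.imread("scene.jpg")
if image is not None:
    cv2.imwrite("scene_enhanced.jpg", enhance_for_low_vision(image))
```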
1 Department of Ophthalmology, University of Cambridge, Cambridge, UK.
2 Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, USA.
3 University College Dublin School of Medicine, Belfield, Dublin, Ireland.
4 Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA.
5 Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA.
6 Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA.
7 The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA.
8 Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA.
9 Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA.
10 University of Texas MD Anderson Cancer Center, Houston, TX, USA.
11 Texas A&M College of Medicine, Bryan, TX, USA.
12 Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA.
email: ew690@cam.ac.uk
Received: 17 October 2023 Revised: 1 November 2023 Accepted: 10 November 2023
Fig. 1 Technology components of Ray-Ban Meta smart glasses. Reprinted without changes from Laurent C, Iqbal MZ, Campbell AG. Adopting smart glasses responsibly: potential benefits, ethical, and privacy concerns with Ray-Ban Stories. AI Ethics. Under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Fig. 2 Diagram of how smart glasses can provide auditory direction guidance for individuals with vision impairments.
REFERENCES
1. Introducing the New Ray-Ban | Meta Smart Glasses. Meta. 2023. https://about.fb.com/news/2023/09/new-ray-ban-meta-smart-glasses/.
2. Iqbal MZ, Campbell AG. Adopting smart glasses responsibly: potential benefits, ethical, and privacy concerns with Ray-Ban Stories. AI Ethics. 2023;3:325–327.
3. Waisberg E, Ong J, Paladugu P, Kamran SA, Zaman N, Lee AG, et al. Challenges of artificial intelligence in space medicine. Space Sci Technol. 2022;2022:1–7.
4. Masalkhi M, Waisberg E, Ong J, Zaman N, Sarker P, Lee AG, et al. Apple Vision Pro for ophthalmology and medicine. Ann Biomed Eng. 2023;51:2643–2646.
5. Waisberg E, Ong J, Masalkhi M, Kamran SA, Zaman N, Sarker P, et al. GPT-4: a new era of artificial intelligence in medicine. Ir J Med Sci. 2023. https://doi.org/10.1007/s11845-023-03377-8.
6. Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, et al. Generative adversarial networks in medicine: important considerations for this emerging innovation in artificial intelligence. Ann Biomed Eng. 2023;51:2130–2142.
7. Slade J, Edwards R. My Voice 2015: the views and experiences of blind and partially sighted people in the UK. Accessed 10 October 2023.
8. Waisberg E, Ong J, Masalkhi M, Zaman N, Sarker P, Lee AG, et al. Apple Vision Pro and why extended reality will revolutionize the future of medicine. Ir J Med Sci. 2023. https://doi.org/10.1007/s11845-023-03437-z.
9. Waisberg E, Ong J, Masalkhi M, Zaman N, Sarker P, Lee AG, et al. The future of ophthalmology and vision science with the Apple Vision Pro. Eye. 2023. https://doi.org/10.1038/s41433-023-02688-5.
10. Sarker P, Zaman N, Ong J, Paladugu P, Aldred M, Waisberg E, et al. Test–retest reliability of virtual reality devices in quantifying for relative afferent pupillary defect. Transl Vis Sci Technol. 2023;12:2.
AUTHOR CONTRIBUTIONS
EW—Writing. JO—Writing. MM—Writing, figure development. NZ—Review, intellectual support. PS—Review, intellectual support. AGL—Review, intellectual support. AT—Review, intellectual support.
COMPETING INTERESTS
The authors declare no competing interests.
ADDITIONAL INFORMATION
Correspondence and requests for materials should be addressed to Ethan Waisberg.
Reprints and permission information is available at http://www.nature.com/reprints
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims
in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party
material in this article are included in the article’s Creative Commons licence, unless
indicated otherwise in a credit line to the material. If material is not included in the
article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2023