Figure 2. Illustration of the BrainPort V100 concept.

Source publication
Article
Full-text available
Introduction: This study was conducted to evaluate the functional performance of the BrainPort V100 device, an FDA-cleared sensory-substitution system, in persons who are profoundly blind (that is, have some or no light perception). Methods: This was a prospective, single-arm, multicenter clinical investigation. Participants received 10 hours of dev...

Contexts in source publication

Context 1
... camera captures the scene as a greyscale digital image and forwards the image to the controller for processing. The visual information is then transmitted to the dorsal surface of the tongue via electrotactile stimulation patterns representative of the camera image (see Figure 2). The image is digitized to 400 pixels; in the standard setting, white pixels are felt as strong stimulation, grey pixels as medium-strength stimulation, and black pixels as no stimulation. ...
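The greyscale-to-tactile encoding described in this passage is simple enough to sketch. Below is a minimal illustration in Python, assuming a 20 × 20 layout for the 400 stimulation points and illustrative quantization thresholds; the source specifies only that white maps to strong stimulation, grey to medium, and black to none.

```python
import numpy as np

GRID = 20  # 20 x 20 = 400 stimulation points (assumed layout)

def to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Map a greyscale frame (H x W, values 0-255) to a 20x20 pattern with
    levels 0 (no stimulation), 1 (medium), 2 (strong)."""
    h, w = frame.shape
    # Crop so the frame divides evenly, then average one block per electrode.
    frame = frame[:h - h % GRID, :w - w % GRID]
    blocks = frame.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # Quantize into three levels; the 85/170 thresholds are assumptions.
    return np.digitize(blocks, bins=[85, 170]).astype(np.uint8)

# Example: a synthetic 240x320 camera frame.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
pattern = to_stimulation(frame)
print(pattern.shape)  # (20, 20)
```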

Citations

... Finally, the third test was conceived as a performance analysis of SSDs that encode streamed data from a camera. This constitutes one of the main families of SSD, which includes well-known visual-to-auditory SSD projects such as vOICe, EyeMusic [23], or VAS, as well as visual-to-tactile SSDs, e.g., BrainPort [24], etc. ...
Article
Full-text available
A navigation system for individuals who are blind or visually impaired provides information useful for reaching a destination. Although there are different approaches, traditional designs are evolving into distributed systems with low-cost, front-end devices. These devices act as a medium between the user and the environment, encoding the information gathered on the surroundings according to theories of human perceptual and cognitive processes. Ultimately, they are rooted in sensorimotor coupling. The present work searches for temporal constraints due to such human–machine interfaces, which in turn constitute a key design factor for networked solutions. To that end, three tests were administered to a group of 25 participants under different delay conditions between motor actions and triggered stimuli. The results show a trade-off between spatial information acquisition and delay degradation, and a learning curve even under impaired sensorimotor coupling.
... The BrainPort user's notable dissatisfaction with the positive impact of their device reflects the criticism that, despite significantly improving performance on laboratory tasks (Grant et al., 2016), the device does not aid functioning in daily life (Manduchi & Coughlan, 2012; Upson, 2007). The BrainPort user reported ambivalence towards device satisfaction, suggesting that the more physical aspects of the device were neither satisfying nor dissatisfying. ...
Article
Full-text available
Assistive technology (AT) devices are designed to help people with visual impairments (PVIs) perform activities that would otherwise be difficult or impossible. Devices specifically designed to assist PVIs by attempting to restore sight or substitute it for another sense have a very low uptake rate. This study, conducted in England, aimed to investigate why this is the case by assessing accessibility to knowledge, awareness, and satisfaction with AT in general and with sensory restoration and substitution devices in particular. From a sample of 25 PVIs, ranging from 21 to 68 years old, results showed that participants knew where to find AT information; however, health care providers were not the main source of this information. Participants reported good awareness of different ATs, and of technologies they would not use, but reported poor awareness of specific sensory substitution and restoration devices. Only three participants reported using AT, each with different devices and varying levels of satisfaction. The results from this study suggest a possible breakdown in communication between health care providers and PVIs, and dissociation between reported AT awareness and reported access to AT information. Moreover, awareness of sensory restoration and substitution devices is poor, which may explain the limited use of such technology.
... Until now, studies related to the education of students with disabilities have only provided support for admission opportunities in the transition from secondary education to higher education, and studies of adaptation to college life after admission are insufficient [8]. An analysis of the existing research [9-11] shows that many studies address blind adults as well as blind children, but studies in Korea are largely child-centered. Therefore, further research is needed that extends to adults, including blind college students [5,12], and support programs should be developed that meet the needs of blind college students, such as ghost-writing and word processing, and voice-supported computers [4]. ...
Article
Full-text available
The college entrance rate of people with disabilities is gradually increasing, and each university is trying to provide equal rights and opportunities for college students with disabilities. However, students with disabilities still have difficulty adapting to college life due to their limited range of experience, restricted mobility, and restricted interaction with the environment. Visually impaired students cannot independently perform tasks assigned by universities without the help of others, yet universities offer no supporting system beyond helpers. Therefore, in this paper, we aimed to develop VTR4VI (Voice to Report program for the Visually Impaired), independent report-generation software for students with visual impairment that runs on the mobile devices they already carry. Since existing speech-recognition document-editing software is designed for sighted users, it is difficult for the visually impaired to use. Accordingly, the requirements of a report generator for blind students were identified so that blind students could freely complete assignments and write reports without helpers, just like their sighted peers. The software can be operated easily by clicking a Bluetooth remote control instead of touching the phone screen, and its operation is simple. Our usability evaluation indicates that VTR4VI can help the visually impaired study and produce written reports.
... According to Grant et al. [12], BrainPort V100 is an oral electronic vision aid that uses electro-tactile stimulation to help profoundly blind people with direction, mobility, and object recognition. The device is used in conjunction with other assistive devices like a normal white cane or a guide dog. ...
Article
Full-text available
Object recognition is a computer vision technique for identifying objects in images. The main purpose of this system is to compensate for blindness by constructing automated hardware with a Raspberry Pi that enables a visually impaired person to detect objects or persons in front of them instantly and be informed of what is there through audio. The Raspberry Pi receives data from a camera and then processes it; in addition, the blind user listens to a voice narration via an audio receiver. This paper's key objective is to provide the blind with cost-effective smart assistance to explore and sense the world independently. The second objective is to provide a convenient portable device that allows users to recognise objects without touch, with the system identifying the object in front of them. The camera module attached to the Raspberry Pi captures an image, which the processor then processes; the processed result is sent to the audio receiver, which narrates the detected object(s). This system will be very useful for a blind person exploring the world by listening to the voice narration, helping them visualise the objects in front of them.
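As a sketch of the capture-process-narrate loop this abstract describes, the snippet below shows one possible arrangement in Python. The detect_objects() function is a hypothetical placeholder, since the abstract does not name a specific detection model, and the OpenCV/pyttsx3 choices are assumptions rather than the authors' actual stack.

```python
import cv2        # pip install opencv-python
import pyttsx3    # pip install pyttsx3 (offline text-to-speech)

def detect_objects(frame) -> list[str]:
    """Placeholder: return labels of objects found in the frame."""
    raise NotImplementedError("plug in a detector, e.g. a DNN via cv2.dnn")

def main() -> None:
    camera = cv2.VideoCapture(0)   # Raspberry Pi camera exposed as a video device
    tts = pyttsx3.init()
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            labels = detect_objects(frame)
            if labels:
                tts.say("In front of you: " + ", ".join(labels))
                tts.runAndWait()   # block until the narration finishes
    finally:
        camera.release()

if __name__ == "__main__":
    main()
```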
... Recently, wearable systems to aid wayfinding for BVI have been evaluated in humans. BrainPort is a sensory substitution device that provides sensation related to vision (patterned electrical stimulation of the tongue based on images captured by a camera), but mobility with it is slower than what can be achieved with a guide dog or long cane (Nau et al., 2015; Grant et al., 2016). Mobile technology, such as smartphones, tablets, and augmented reality headsets, provides an off-the-shelf, programmable platform to support wayfinding. ...
... Assistive technology for BVI wayfinding includes some systems that can detect objects or patterns but do not perform scene understanding. BrainPort is a sensory substitution system (Grant et al., 2016). In contrast, our system uses easily understood commands, and the users were able to maintain their preferred walking speed while crossing the street, which indicates that they were not slowed by decision-making on how to respond. ...
Article
Full-text available
Independent travelling is a significant challenge for visually impaired people in urban settings. Traditional and widely used aids such as guide dogs and long canes provide basic guidance and obstacle avoidance but are not sufficient for complex situations such as street crossing. We propose a new wearable system that can safely guide a user with visual impairment at a signalized crosswalk. Safe street crossing is an important element of fully independent travelling for people who are blind or visually impaired (BVI), but street crossing is challenging for BVI because it involves several steps reliant on vision, including scene understanding, localization, object detection, path planning, and path following. Street crossing also requires timely completion. Prior solutions for guiding BVI in crosswalks have focused on either detection of crosswalks or classification of crosswalk signals. In this paper, we demonstrate a system that performs all the functions necessary to safely guide BVI at a signalized crosswalk. Our system utilizes prior maps, similar to how autonomous vehicles are guided. The hardware components are lightweight such that they can be wearable and mobile, and all are commercially available. The system operates in real-time. Computer vision algorithms (Orbslam2) localize the user in the map and orient them to the crosswalk. The state of the crosswalk signal (don't walk or walk) is detected (using a convolutional neural network), the user is notified (via verbal instructions) when it is safe to cross, and the user is guided (via verbal instructions) along a path towards a destination on the prior map. The system continually updates user position relative to the path and corrects the user's trajectory with simple verbal commands. We demonstrate the system functionality in three BVI participants. With brief training, all three were able to use the system to successfully navigate a crosswalk in a safe manner.
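The crossing procedure described here can be summarized as a small control loop. The sketch below is a schematic reconstruction from the abstract alone: every helper (SLAM pose, signal classifier, path object, speech output) is a hypothetical stand-in for the paper's components, and the heading threshold is an assumed parameter.

```python
import time

HEADING_TOLERANCE_DEG = 10.0  # assumed threshold for issuing a correction

def guide_crossing(slam, signal_classifier, path, speak) -> None:
    """Wait for the walk signal, then keep the user on the planned path
    with short verbal commands (all interfaces are hypothetical)."""
    speak("Waiting for walk signal")
    while signal_classifier.state() != "walk":   # CNN on the signal head
        time.sleep(0.1)
    speak("Walk signal. Begin crossing")
    while not path.reached_goal():
        pose = slam.current_pose()               # position + heading on the prior map
        error = path.heading_error_deg(pose)     # signed deviation from the path
        if error > HEADING_TOLERANCE_DEG:
            speak("Bear left")
        elif error < -HEADING_TOLERANCE_DEG:
            speak("Bear right")
        time.sleep(0.5)
    speak("You have reached the other side")
```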
... Other systems use a single camera or a stereo camera [213,220]. In [221], an in-depth survey of the applicability and relevance of these systems was carried out. For example, it has been shown that these devices can run into numerous problems:
Thesis
This thesis falls within the framework of perceptual navigation. Our objective is to study and design a system for interpreting an indoor scene observed by a multi-sensor system combining an ultrasonic sensor and an RGB camera. The proposed system can be used to equip an intelligent assistive device for the blind, or a robot operating in furnished spaces. In a scene-interpretation system, the acquisitions made by the sensors reflect restrictions of the real world and are affected by imperfections, which should be taken into account rather than ignored. In our interpretation system, they are taken into account through the use of possibility theory when modelling the acquired data. The adopted models are possibility distributions. The analysis and interpretation of the acquired scene is then based on this possibilistic knowledge. The navigation-aid system proposed in this work describes the surrounding scene according to a simple model that divides the field covered by the sensors into three major zones: front, left, and right. It provides the user with information about the distance separating them from detected objects, the material rigidity of these objects, and their position in the scene (object to the left, object in front, object to the right). The performance of the proposed interpretation system is evaluated using the "NA_System" prototype, developed by the "Cybernics team" of the "CEM_Lab" laboratory of the École nationale d'ingénieurs de Sfax (ENIS). The results obtained are encouraging and show the effectiveness of possibility theory as a framework for representing data acquired from different sensors. The proposed scene-interpretation strategy proved effective at integrating information from multiple knowledge sources. Within the data-processing chain adopted for scene analysis and interpretation, new approaches were proposed, notably for attribute selection, salient-object detection, classification, and the fusion and registration of data from two sources.
... This ranges from simple left/right indications to raw distance measurements, e.g., encoded as haptic stimulus intensity triggered by actuators. Over the last few years, we have observed unprecedented growth in this field: guidance systems based on Bluetooth beacon networks or Global Navigation Satellite Systems (GNSS), e.g., [3-5]; artificial intelligence systems that recognize key features of camera images [6]; and even general-purpose SSDs (e.g., [7,8]) have been developed, some of which are currently available as free smartphone applications [2]. ...
... Furthermore, given that the ARCore-compatible smartphone captures both the position and rotation of the user's head, it suffices to calculate the pose of the virtual camera and generate the acoustic output. Analogously, if an appropriate haptic interface is included, e.g., an array of haptic actuators or an electrotactile display, recent visual-tactile SSD systems based on Bach-y-Rita's pioneering TVSS [28], such as BrainPort [7], the Forehead Retina System [29], or HamsaTouch [30], can also be integrated. ...
Article
Full-text available
Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted for blind and visually impaired people. It allows its users to be immersed in "walkable", purely virtual or mixed environments with simulated sensors, and to validate navigation system designs prior to prototype development. The haptic, acoustic, and proprioceptive feedback supports state-of-the-art sensory substitution devices (SSDs). In this regard, three SSDs were integrated into VES as examples, including the well-known "The vOICe". Additionally, the data throughput, latency, and packet loss of the wireless communication can be controlled to observe their impact on the spatial knowledge provided and the resulting mobility and orientation performance. Finally, the system has been validated by testing a combination of two previous visual-acoustic and visual-haptic sensory substitution schemas with 23 normally sighted subjects. The recorded data include the output of a "gaze-tracking" utility adapted for SSDs.
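As an illustration of the latency and packet-loss control the abstract mentions, the sketch below delays and randomly drops stimulus packets before delivery. The class and parameter names are illustrative stand-ins, not the VES API.

```python
import random
import threading

class ImpairedLink:
    """Forward packets to `deliver` after `latency_s`, dropping a fraction
    `loss_rate` of them, emulating an impaired wireless link."""

    def __init__(self, deliver, latency_s: float = 0.1, loss_rate: float = 0.05):
        self.deliver = deliver
        self.latency_s = latency_s
        self.loss_rate = loss_rate

    def send(self, packet) -> None:
        if random.random() < self.loss_rate:
            return  # packet lost in transit
        # Deliver asynchronously after the configured delay.
        threading.Timer(self.latency_s, self.deliver, args=(packet,)).start()

# Example: print each surviving stimulus packet after a 250 ms delay.
link = ImpairedLink(deliver=print, latency_s=0.25, loss_rate=0.1)
for i in range(5):
    link.send({"stimulus_id": i, "intensity": 0.8})
```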
... Early SSD research involved linking a movable TV camera to an array of vibrating pins positioned on the user's back to create live 'tactile images' [5,6]. The current version of this device, termed the BrainPort, instead uses an eyeglass-mounted camera to control patterns of electrical stimulation on the tongue [7]. However, it is expensive for most users and requires extensive training [8], although users can also benefit from customising it to their preferences [9]. ...
Article
Full-text available
Depth, colour, and thermal images contain practical and actionable information for the blind. Conveying this information through alternative modalities such as audition creates new interaction possibilities for users as well as opportunities to study neuroplasticity. The 'SoundSight' App (www.soundsight.co.uk) is a smartphone platform that allows 3D position, colour, and thermal information to directly control thousands of high-quality sounds in real-time to create completely unique and responsive soundscapes for the user. Users can select the specific sensor input and style of auditory output, which can be based on anything: tones, rainfall, speech, instruments, or even full musical tracks. Appropriate default settings for image-sonification are given by designers, but users still have a fine degree of control over the timing and selection of these sounds. Through utilising smartphone technology with a novel approach to sonification, the SoundSight App provides a cheap, widely accessible, scalable, and flexible sensory tool. In this paper we discuss common problems encountered with assistive sensory tools reaching long-term adoption, how our device seeks to address these problems, its theoretical background, its technical implementation, and finally we showcase both initial user experiences and a range of use case scenarios for scientists, artists, and the blind community.
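To make the idea of image-controlled soundscapes concrete, here is a minimal depth-to-sound sketch: nearer surfaces are rendered louder and horizontal position sets stereo pan. This is one illustrative scheme under assumed parameters, not the SoundSight App's actual mapping, which is user-configurable and sample-based rather than tone-based.

```python
import numpy as np

SR = 44_100  # audio sample rate (Hz)

def sonify_depth_row(depth_row: np.ndarray, duration: float = 0.5) -> np.ndarray:
    """Render one row of a depth image (metres) as a stereo tone mixture."""
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)
    out = np.zeros((t.size, 2))
    n = depth_row.size
    for x, depth in enumerate(depth_row):
        loudness = 1.0 / max(depth, 0.1)      # nearer -> louder (clamped)
        pan = x / max(n - 1, 1)               # 0 = full left, 1 = full right
        freq = 220.0 + 660.0 * (x / n)        # column also shifts pitch (assumed)
        tone = loudness * np.sin(2 * np.pi * freq * t)
        out[:, 0] += (1.0 - pan) * tone
        out[:, 1] += pan * tone
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out    # normalize to avoid clipping

row = np.array([2.0, 1.0, 0.5, 1.0, 2.0])     # toy depth readings (metres)
stereo = sonify_depth_row(row)                # float samples in [-1, 1]
```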
... Failing the test under these conditions proves that the device did not provide text-reading vision in that study (failing the necessary condition).4 Passing the test, however, would not have proven the opposite, because of the confounding factors of letters in the MAFC paradigm used. ...
Article
Full-text available
Visual prostheses aim to restore, at least to some extent, vision that leads to the type of perception available to sighted patients. Their effectiveness is almost always evaluated using clinical tests of vision. Clinical vision tests are designed to measure the limits of parameters of a functioning visual system. I argue here that these tests are rarely suited to determining the ability of prosthetic devices and other therapies to restore vision. This paper describes and explains many limitations of these evaluations. Prosthetic vision testing often makes use of multiple-alternative forced-choice (MAFC) procedures. Although these paradigms are suitable for many studies, they are frequently problematic in vision restoration evaluation. Two main types of problems are identified: (1) nuisance variables provide spurious cues that can be learned in repeated training, which is common in prosthetic vision, and thus defeat the purpose of the test; and (2) even though a test is properly designed and performed, it may not actually measure what the researchers believe, and thus the interpretation of results is wrong. Examples of both types of problems are presented. Additional problems arising from confounding factors in the administration of tests are pointed out as limitations of current device evaluation; for example, head tracing of magnified objects (enlarged to compensate for the system's low resolution), as distinct from the scanning head (camera) movements with which users of prosthetic devices expand the limited field of view. Because of these problems, the ability to perform satisfactorily on the clinical tests is necessary but insufficient to prove vision restoration; therefore, additional tests are needed. I propose some directions to pursue in such testing. Translational relevance: Numerous prosthetic devices are being developed and introduced to the market. Proving the utility of these devices is crucial for regulatory and even for post-market acceptance, which so far has largely failed, in my opinion. Potential reasons for the failures despite success in regulatory testing, and directions for designing improved testing, are provided. It is hoped that improved testing will guide improved designs of future prosthetic systems and other vision restoration approaches.
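One concrete consequence of the MAFC critique is that "above chance" needs a statistical criterion, not merely a score above 1/N. The snippet below computes such a criterion with a one-sided binomial test; the numbers are illustrative and not taken from the paper.

```python
from scipy.stats import binom

def mafc_criterion(n_alternatives: int, n_trials: int, alpha: float = 0.05) -> int:
    """Smallest number of correct trials that beats chance at level alpha."""
    p_chance = 1.0 / n_alternatives
    # ppf returns the smallest k with CDF(k) >= 1 - alpha, so P(X > k) <= alpha;
    # one more correct answer than k is therefore significant.
    return int(binom.ppf(1.0 - alpha, n_trials, p_chance)) + 1

# Example: 4-alternative forced choice over 40 trials (chance = 10 correct).
print(mafc_criterion(4, 40))  # correct answers needed to reject chance
```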
... SSDs convey visual information through other sensory modalities such as touch or audition. Understanding the information normally processed by one modality when presented in another may not be an intuitive and automatic task [3], but rather a learned skill like reading or language that requires training and practice [4,5] (but see Stiles and Shimojo [6] for results indicating that perception is intuitive, to some extent, with an auditory sensory substitution). ...
... Studies conducted to evaluate the functional performance of the BrainPort also pointed out the necessity of training by a professional, followed by home self-practice by users in activities of daily living; they described a structured training protocol carried out over a few 3-h sessions, usually lasting a total of 10-15 h [e.g. 3,5,8]. The training protocol starts with familiarising participants with the device, its components, purpose, and limitations, how it converts visual information to tactile stimulation on the tongue, and how to interpret the stimulation. ...
... Although participants were trained by following the structured training protocol in the aforementioned studies [3,5,8], details related to their individual performance during the training have not been reported. For example, Grant et al. [5] reported that 57 participants who completed their study were trained by an experienced BrainPort trainer prior to functional testing with the device. ...
Article
Purpose: Visual sensory substitution devices (SSDs) convey visual information to a blind person through another sensory modality. Using a visual SSD in various daily activities requires training prior to using the device independently. Yet, there is limited literature on the procedures and outcomes of the training conducted to prepare users for practical use of SSDs in daily activities. Methods: We trained 29 blind adults (9 with congenital and 20 with acquired blindness) in the use of a commercially available electro-tactile SSD, BrainPort. We describe a structured training protocol adapted from previous studies and the responses of participants, and we present retrospective qualitative data on the progress of participants during the training. Results: The length of the training was not a critical factor in reaching an advanced stage. Though performance in the first two sessions seems to be a good indicator of participants' ability to progress, there are large individual differences in how far and how fast each participant can progress in the training protocol. There are differences between congenitally blind users and those blinded later in life. Conclusions: The information on training progression would be of interest to researchers preparing studies, and to eye care professionals who may advise patients to use SSDs. Implications for rehabilitation: There are large individual differences in how far and how fast each participant can learn to use a visual-to-tactile sensory substitution device for a variety of tasks. Recognition is mainly achieved through top-down processing with prior knowledge of the possible responses; therefore, the generalizability is still questionable. Users develop different strategies in order to succeed in training tasks.