September 2023 · 24 Reads · 3 Citations
Neuropsychologia
October 2022 · 86 Reads · 7 Citations
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory-substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, this cortical preference maintains its tuning to what were considered vision-specific face features.
March 2022 · 198 Reads · 10 Citations
Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices, and prompt the study of the neural processes underlying auditory face perception in the absence of vision.
November 2020 · 73 Reads · 4 Citations
Reading is a unique human cognitive skill, and its acquisition has been shown to extensively affect both brain organization and neuroanatomy. In contrast to literacy rates among sighted individuals in the Western world, literacy rates via tactile reading systems, such as Braille, are declining, posing an alarming threat to literacy among non-visual readers. This decline has many causes, including the length of training needed to master Braille (which must also include extensive tactile-sensitivity exercises), the lack of proper Braille instruction, and the high cost of Braille devices. The far-reaching consequences of low literacy rates raise the need to develop alternative, cheap, and easy-to-master non-visual reading systems. To this aim, we developed OVAL, a new auditory orthography based on a visual-to-auditory sensory-substitution algorithm. Here we present its efficacy for successful word reading, and investigate the extent to which redundant features defining characters (i.e., adding specific colors to letters conveyed into audition via different musical instruments) facilitate or impede auditory reading outcomes. We tested two groups of blindfolded sighted participants who were exposed either to a monochromatic or to a color version of OVAL. First, we showed that even before training, all participants were able to discriminate between 11 OVAL characters significantly above chance level. Following 6 hours of specific OVAL training, participants were able to identify all the learned characters, differentiate them from untrained letters, and read short words/pseudo-words of up to 5 characters. The Color group outperformed the Monochromatic group in all tasks, suggesting that redundant character features are beneficial for auditory reading.
Overall, these results suggest that OVAL is a promising auditory-reading tool that can be used by blind individuals and by people with reading deficits, as well as for the investigation of reading-specific processing dissociated from the visual modality.
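The redundant-color manipulation described in this abstract can be sketched in code. This is an illustrative sketch only, not the OVAL implementation; the color-to-instrument mapping, function names, and the "default" timbre are assumptions made for the sake of the example.

```python
# Sketch: layering a redundant colour feature onto auditory letters.
# In the colour condition, a character's colour selects a musical timbre,
# giving listeners a second, redundant cue for discriminating characters.

COLOR_TO_TIMBRE = {      # assumed mapping, for illustration only
    "red": "trumpet",
    "blue": "piano",
    "green": "violin",
}

def render_character(char, color=None):
    """Describe the auditory rendering of one character.

    Monochromatic condition: color is None, all letters share one timbre.
    Color condition: the letter's colour adds a redundant timbre cue.
    """
    rendering = {"char": char, "timbre": "default"}
    if color is not None:
        rendering["timbre"] = COLOR_TO_TIMBRE.get(color, "default")
    return rendering
```

Under this sketch, two characters that produce similar soundscapes in the monochromatic condition would still differ by instrument in the color condition, which is one plausible way redundancy could aid discrimination.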
November 2014 · 268 Reads · 22 Citations
Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their intact senses, such as audition. For years, SSDs have been confined to laboratory settings, but we believe the time has come to use them for their original purpose: real-world practical visual rehabilitation. Here we demonstrate this potential by presenting, for the first time, new features of the EyeMusic SSD, which gives the user whole-scene shape, location, and color information. These features include higher resolution, and they attempt to overcome previous stumbling blocks by being freely available to download and run on a smartphone platform. Using the EyeMusic, we demonstrate the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps in progress on the path to making their practical use more widespread.
December 2012 · 171 Reads · 57 Citations
Visual-to-auditory sensory-substitution devices allow users to perceive a visual image using sound. Using a motor-learning task, we found that new sensory-motor information was generalized across sensory modalities. We imposed a rotation when participants reached to visual targets, and found that not only seeing, but also hearing the location of targets via a sensory-substitution device resulted in biased movements. When the rotation was removed, aftereffects occurred whether the location of targets was seen or heard. Our findings demonstrate that sensory-motor learning was not sensory-modality-specific. We conclude that novel sensory-motor information can be transferred between sensory modalities.
December 2012 · 17 Reads
Experimental setup and procedure illustration
November 2012 · 38 Reads · 1 Citation
Journal of Molecular Neuroscience
September 2012 · 113 Reads · 10 Citations
Distance information is critical to our understanding of our surrounding environment, especially in virtual reality settings. Unfortunately, as we gauge distance mainly visually, the blind are prevented from properly utilizing this parameter to formulate 3D cognitive maps and cognitive imagery of their surroundings. We show qualitatively that, with no training, it is possible for blind and blindfolded subjects to easily learn a simple transformation between virtual distance and sound, based on the concept of a virtual guide cane (paralleling in a virtual environment the "EyeCane", developed in our lab), enabling the discrimination of virtual 3D orientation and shapes using a standard mouse and audio system.
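The "simple transformation between virtual distance and sound" can be illustrated with a minimal sketch. The specific mapping below (closer obstacles give a higher pitch and faster beep rate, over an assumed 0.2–5 m virtual range) is a hypothetical choice for illustration, not the transformation used in the study.

```python
# Sketch: mapping a virtual guide-cane distance reading to beep parameters.
# Assumption: an inverse relation in which closer virtual obstacles produce
# higher-pitched, faster beeps; range and constants are illustrative.

MIN_DIST, MAX_DIST = 0.2, 5.0  # metres, assumed virtual sensing range

def distance_to_cue(distance_m):
    """Map a virtual distance to auditory cue parameters (assumed mapping)."""
    d = min(max(distance_m, MIN_DIST), MAX_DIST)      # clamp to range
    # Normalise so 0.0 = farthest obstacle, 1.0 = closest obstacle.
    proximity = (MAX_DIST - d) / (MAX_DIST - MIN_DIST)
    return {
        "pitch_hz": 220.0 * 2 ** (2 * proximity),     # 220 Hz far, 880 Hz near
        "beeps_per_s": 1.0 + 9.0 * proximity,         # faster beeps when close
    }
```

Sweeping such a cue across a scene with a standard mouse, as described above, would let a listener trace the distance profile of virtual surfaces by ear.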
... It is possible that affective encoding of positive stimuli in our sample of high-trauma-exposed children might also rely upon this ventral visual pathway and rely less on connections with the hippocampus and amygdala. The ventral visual stream is capable of undergoing rapid plasticity (Arbel et al., 2023). A study using transcranial direct-current stimulation (tDCS) found that tDCS to the ventral visual stream improved memory encoding in adults (Zhao & Woodman, 2021). ...
September 2023
Neuropsychologia
... The first involves a conversion of full images to audio or tactile stimulation [42, 44-47]. On the practical level, the main advantage of such methods is their ability to preserve and convey a large amount of information present in the visual scene. This way, following a learning process, the brain is able to utilize its inherent abilities or develop new ones for comprehending various dimensions of information from the scene [48-53]. The main disadvantage of these methods relates directly to their advantage: in order to reach a meaningful level of comprehension pertaining to the scene in its entirety, extremely lengthy training times are required. ...
October 2022
... Eighteen congenitally blind native Polish speakers (10 females, mean age: 34.4 years, SD = 6.9) and 18 sighted adults (11 females, mean age: 35.2 years, SD = 8.5) matched in gender and age with the blind group (all ps > 0.34) participated in the study. Congenitally blind people are a hard-to-find clinical population, and such a sample size is comparable to or even larger than reported in previous behavioral studies with blind individuals (e.g., 8 participants in [47]; 17 participants in [48]; or 12 participants in [49]). All participants reported no additional sensory or motor disabilities, neurological problems, or psychiatric disorders. ...
March 2022
... Specifically, we suggest that the addition of the 'color' feature to soundscapes might have provided another discrimination feature among auditory pixels, and thus might have increased discriminability among face features. Crucially, these results are in line with another recent work from our team which similarly showed that color-to-timbre mapping enhanced discrimination of auditory letters and boosts reading performance via the EyeMusic SSD compared to identical monochrome soundscapes 16 . Future studies exploring the advantages of auditory SSD-conveyed colors by directly comparing colorful to monochromatic complicated soundscapes will advance our understanding of the extent to which the visual domain compares to the auditory domain in image perception. ...
November 2020
... Participants were sometimes asked to describe their subjective experience in the course of the experiment, and these free reports were collected and reported (see, for example, Grespan et al., 2008). Several attempts have been made to directly investigate the subjective SS experience with self-reports. Most commonly, researchers applied interviews (Karam, Russo, & Fels, 2009; Nagel et al., 2005), questionnaires (Abboud et al., 2014; Auvray et al., 2007; Froese et al., 2012; Karam et al., 2009), and free reports (Grespan et al., 2008; Maidenbaum et al., 2012; Ortiz et al., 2011). The studies usually adopted some initial assumptions about SS phenomenology. ...
September 2012
... In the context of spatial navigation, tactile sensory augmentation devices that provide access to previously unknown information have been suggested as a way to boost spatial navigation. For instance, the "EyeCane" is a cane that offers blind users a novel sense of distances toward objects in space [36][37][38]. It uses an infrared signal to measure the distance toward a pointed object and produces a corresponding auditory signal. ...
November 2012
Journal of Molecular Neuroscience
... The vertical dimension is mapped into a musical pentatonic pitch scale. The system has been slightly modified in a recent version (Maidenbaum et al., 2014b), with an increased image resolution of 50 × 30 and a hexatonic scale. This type of sonification, using horizontal scans, has proven useful for specific tasks. ...
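The horizontal-scan sonification scheme described in this snippet (columns swept over time, rows mapped onto a pentatonic pitch scale, pixel brightness setting loudness) can be sketched as follows. The base frequency, image size, and scale degrees here are illustrative assumptions, not the EyeMusic's actual parameters.

```python
# Sketch: horizontal-scan image sonification with a pentatonic pitch axis.
# x -> time (left-to-right sweep), y -> pitch (higher rows = higher notes),
# brightness -> loudness. Constants are assumptions for illustration.

PENTATONIC_SEMITONES = [0, 2, 4, 7, 9]  # major pentatonic scale degrees

def row_to_freq(row, n_rows, base_hz=261.63):
    """Map an image row to a frequency; row 0 (top) gets the highest note."""
    step = n_rows - 1 - row                   # invert so top = highest pitch
    octave, degree = divmod(step, len(PENTATONIC_SEMITONES))
    semitones = 12 * octave + PENTATONIC_SEMITONES[degree]
    return base_hz * 2 ** (semitones / 12)    # equal-temperament frequency

def sonify(image):
    """Return (time_index, freq_hz, loudness) events for a 2-D brightness grid."""
    n_rows = len(image)
    events = []
    for x in range(len(image[0])):            # sweep columns left to right
        for y in range(n_rows):
            if image[y][x] > 0:               # dark pixels stay silent
                events.append((x, row_to_freq(y, n_rows), image[y][x]))
    return events
```

A bright diagonal line in the input would thus become a sequence of notes descending (or ascending) in pitch over the sweep, which is the kind of shape-to-sound correspondence these horizontal-scan systems exploit.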
November 2014
... This example indicates that the acquisition of a letter or digit depends on successful integration between visual and auditory information (for a review, see Blomert & Froyen, 2010; Raij et al., 2000). As mentioned before, letters or digits are also experienced multimodally during learning to write, as writing involves motor programs and action plans in addition to vision and/or audition (see Fridland, 2021; Levy-Tzedek et al., 2012; Prinz, 1997). However, one outstanding question remains to be answered: does this kind of acquired knowledge transfer to a new sensory modality, such as the tactile modality, which is typically not used to encode linguistic inputs during learning? ...
December 2012