Figure - uploaded by Santiago Real Valdés
Delay statistics of the update packets.


Source publication
Article
Full-text available
Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted for blind and visually impaired people. It immerses its users in "walkable", purely virtual or mixed environments with simulated s...

Contexts in source publication

Context 1
... lost packets, which amount to 1.4% of those transmitted, were discarded. Table 1 and the first two graphs (Figure 8) refer to the input of the acoustic feedback algorithm. The environment of the first test was composed of three columns, each 2 m in diameter and each made of a different "acoustic" material. The users were told that they were surrounded by three columns, but they did not know their starting position and orientation, nor the relative position of those columns. ...
Context 2
... lost packets, which amount to 1.4% of those transmitted, were discarded. Table 1 and the first two graphs (Figure 8) refer to the input of the acoustic feedback algorithm, while the latter two graphs (Figure 9) refer to the input of the haptic feedback algorithm. ...
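The delay figures summarized above can be reproduced from matched send/receive timestamps. The sketch below is a minimal, hypothetical illustration of that bookkeeping (the function name, data layout, and reported statistics are assumptions, not the authors' implementation): packets with no receive timestamp count as lost and are excluded from the delay statistics, mirroring how the 1.4% of lost packets were discarded.

```python
import statistics

def delay_stats(sent, received):
    """Summarize one-way delay for update packets.

    sent:     dict mapping sequence number -> send timestamp (seconds)
    received: dict mapping sequence number -> receive timestamp (seconds)

    Sequence numbers present in `sent` but absent from `received` are
    counted as lost and excluded from the delay statistics.
    """
    delays = [received[seq] - sent[seq] for seq in sent if seq in received]
    loss_pct = 100.0 * (len(sent) - len(delays)) / len(sent)
    return {
        "loss_pct": loss_pct,
        "mean_delay": statistics.mean(delays),
        "max_delay": max(delays),
    }
```

With 100 packets sent, one lost, and a constant 5 ms transit time, this reports a 1% loss rate and a 5 ms mean delay.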

Citations

Article
Full-text available
Introduction: Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices build a Virtual Acoustic Space (VAS) using Head-Related Transfer Functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, elevation perception is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method for conveying elevation is based on the audiovisual cross-modal correspondence between pitch and visual elevation. The main drawback of this second method is that the narrow spectral band of the sounds limits the ability to perceive elevation through HRTFs.
Method: In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device where elevation is conveyed either by a spatialization-only method (Noise encoding) or by pitch-based methods with different spectral complexities (Monotonic and Harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings.
Results: Participants localized elevation more accurately with the pitch-based encodings than with the spatialization-only method. Only slight differences in azimuth localization performance were found between the encodings.
Discussion: This study suggests that a pitch-based encoding is intuitive, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
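The pitch-based elevation encoding described above can be sketched as a simple mapping from elevation angle to tone frequency. The function below is an illustrative assumption, not the study's actual encoding: the frequency range, elevation range, and logarithmic (perceptually even) interpolation are all placeholder choices.

```python
def elevation_to_pitch(elev_deg, f_min=200.0, f_max=5000.0,
                       elev_min=-45.0, elev_max=45.0):
    """Map a visual elevation angle (degrees) to a tone frequency (Hz).

    Interpolates logarithmically between f_min and f_max so that equal
    angular steps correspond to roughly equal perceived pitch steps.
    All ranges are illustrative assumptions, not the study's values.
    """
    # Normalize elevation to [0, 1], clamping out-of-range angles.
    t = (elev_deg - elev_min) / (elev_max - elev_min)
    t = min(max(t, 0.0), 1.0)
    # Geometric interpolation between the frequency bounds.
    return f_min * (f_max / f_min) ** t
```

Under these assumed ranges, the lowest elevation maps to 200 Hz, the highest to 5000 Hz, and 0° to their geometric mean of 1000 Hz; a Harmonic variant could then stack integer multiples of this frequency.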