Seeing with the Brain

Department of Orthopedics and Rehabilitation, University of Wisconsin–Madison, Madison, Wisconsin, United States
International Journal of Human-Computer Interaction (Impact Factor: 0.85). 04/2003; 15(2):285-295. DOI: 10.1207/S15327590IJHC1502_6
Source: DBLP


We see with the brain, not the eyes (Bach-y-Rita, 1972); images that pass through our pupils go no further than the retina. From there, image information travels to the rest of the brain by means of coded pulse trains, and the brain, being highly plastic, can learn to interpret them in visual terms. Perceptual levels of the brain interpret the spatially encoded neural activity, modified and augmented by nonsynaptic and other brain plasticity mechanisms (Bach-y-Rita, 1972, 1995, 1999, in press). However, the cognitive value of that information is not merely a product of image analysis. Perception of the image relies on memory, learning, contextual interpretation (e.g., we perceive the intent of the driver in the slight lateral movements of a car in front of us on the highway), and cultural and other social factors that are probably exclusively human characteristics providing "qualia" (Bach-y-Rita, 1996b). This is the basis for our tactile vision substitution system (TVSS) studies, which, beginning in 1963, have demonstrated that visual information and the subjective qualities of seeing can be obtained tactually using sensory substitution systems. The description of studies with this system has been taken
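The core TVSS transform described above — a camera image reduced to a coarse pattern of tactile stimulation — can be sketched as follows. This is a hedged illustration only: the grid size, intensity scaling, and function name are assumptions for exposition, not details taken from the original apparatus.

```python
# Illustrative sketch of an image-to-tactile mapping (assumed parameters):
# a grayscale frame is average-pooled down to a coarse grid, and each
# cell's mean brightness becomes the drive level for one tactile stimulator.

def image_to_tactile(frame, rows=20, cols=20, max_intensity=255):
    """Pool a grayscale frame (list of lists, values 0-255) into a
    rows x cols grid of stimulator drive levels."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row_out = []
        for c in range(cols):
            # Pixel block covered by this stimulator.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            level = sum(block) // len(block)  # mean brightness of the block
            row_out.append(min(level, max_intensity))
        grid.append(row_out)
    return grid
```

The essential point is the deliberate loss of resolution: even a 20 x 20 array, far coarser than the retina, carries enough spatial structure for the brain to learn to interpret it in visual terms.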



Available from: Mitchell Tyler, Apr 21, 2014
    • "Advances in the area of Human Machine Input (HMI) for patients suffering from spinal cord injuries have been seen in the academic arena. Brain Wave Analysis has been highlighted as a possible solution to the challenge currently faced by these patients [1]; however, early indications show that this technology, while feasible for the general populace, may not be ready for patients who are just starting with HMI. "
    ABSTRACT: One of the key challenges facing spinal cord injury (SCI) patients in today's modern society is the ability to interface with the digital world around them. This paper outlines a new method of human-computer interaction using a technique that was previously used in Speech and Language Therapy. The system is centred around the user manipulating their tongue to activate and deactivate switches on a dental retainer plate to identify the tongue's location. The location of the tongue can then be converted into a desired user input.
    Full-text · Conference Paper · Nov 2015
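The switch-to-input mapping described in that abstract can be sketched as a simple decoding table. The switch names and command set below are hypothetical, invented for illustration; the paper does not specify them.

```python
# Hypothetical decoding of retainer-plate switch states into user commands.
# Each reading is the set of currently pressed switches; combinations map
# to distinct inputs (names and mapping are illustrative assumptions).

SWITCH_COMMANDS = {
    frozenset({"front"}): "select",
    frozenset({"left"}): "cursor_left",
    frozenset({"right"}): "cursor_right",
    frozenset({"left", "right"}): "cancel",
}

def decode_switches(active_switches):
    """Map the currently pressed switches to a command, or None if the
    combination is not assigned."""
    return SWITCH_COMMANDS.get(frozenset(active_switches))
```

Using combinations rather than single switches multiplies the number of available inputs without adding hardware, at the cost of requiring more deliberate tongue placement from the user.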
    • "In the research reported here we have picked up on the work presented in [1] [2] [3] [4] [5] in order to investigate the learning potential of humans with regard to shape representations on the tongue using electro-tactile arrays. However, in line with [11] "
    ABSTRACT: This paper details an investigation into sensory substitution by means of direct electrical stimulation of the tongue for the purpose of information input to the human brain. In particular, a device has been constructed and a series of trials have been performed in order to demonstrate the efficacy and performance of an electro-tactile array mounted onto the tongue surface for the purpose of sensory augmentation. Tests have shown that by using a low resolution array a computer-human feedback loop can be successfully implemented by humans in order to complete tasks such as object tracking, surface shape identification and shape recognition with no training or prior experience with the device. Comparisons of this technique have been made with visual alternatives and these show that the tongue based tactile array can match such methods in convenience and accuracy in performing simple tasks.
    Full-text · Dataset · Oct 2013
    • "In order to increase the sensing range, Electronic Travel Aids (ETAs) combine various sensors to scan the environment and provide different types of feedback [2][3]. Various approaches are used to represent the spatial information to the user, such as vibrations as used in the Ultracane [4], 3D auditory representations [5][6], transforming the 3D image into a bas-relief surface that can be tactilely explored [7], etc. "
    ABSTRACT: This paper proposes an instrumented handle with multimodal augmented haptic feedback, which can be integrated into a conventional white cane to extend the haptic exploration range of visually impaired users. The information extracted from the environment through a hybrid range sensor is conveyed to the user in an intuitive manner over two haptic feedback systems. The first renders impulses that imitate the impact of the real cane with a distant obstacle. In combination with the range sensors, this system allows the user to "touch" and explore remote objects, thus compensating for the limited range of the conventional white cane without altering its intuitive usage. The impulses are generated by storing kinetic energy in a spinning inertia wheel, which is released by abruptly braking the wheel. Furthermore, a vibrotactile interface integrated into the ergonomic handle conveys the distance to obstacles to the user. Three vibrating motors located along the index finger and hand are activated in different spatiotemporal patterns to induce a sense of distance through apparent movement. The realized augmented white cane not only increases the safety of the user by detecting obstacles from a greater distance and alerting the user to those located at head level, but also allows the user to build extended spatial mental models by increasing the sensing range, thereby allowing anticipated decision making and thus more natural navigation.
    Full-text · Conference Paper · Oct 2010
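The distance cue in that abstract — three motors fired in sequence to induce apparent movement, with the pattern varying by obstacle distance — can be sketched roughly as below. The motor names, distance range, and timing values are illustrative assumptions, not parameters from the paper.

```python
# Hedged sketch of a distance-to-vibration mapping (assumed values):
# three motors along the index finger fire in sequence, and the interval
# between activations shrinks as the obstacle gets closer, so near
# obstacles produce a faster sweep of apparent motion.

def vibration_pattern(distance_m, motors=("fingertip", "mid", "palm"),
                      min_interval=0.05, max_interval=0.4, max_range=4.0):
    """Return (ordered motor firing sequence, inter-motor interval in s)."""
    d = max(0.0, min(distance_m, max_range))  # clamp to sensing range
    # Linear mapping: nearer obstacle -> shorter interval -> faster sweep.
    interval = min_interval + (max_interval - min_interval) * (d / max_range)
    return list(motors), round(interval, 3)
```

A design note: encoding distance in the temporal spacing, rather than in vibration amplitude alone, exploits the apparent-movement illusion the abstract mentions, which is typically easier to discriminate than small amplitude differences.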
