Seeing with the Brain
Paul Bach-y-Rita
Department of Orthopedics and Rehabilitation Medicine, and
Department of Biomedical Engineering
University of Wisconsin
Mitchell E. Tyler
Kurt A. Kaczmarek
Department of Biomedical Engineering, and
Department of Rehabilitation Medicine
University of Wisconsin
We see with the brain, not the eyes (Bach-y-Rita, 1972); images that pass through
our pupils go no further than the retina. From there image information travels to
the rest of the brain by means of coded pulse trains, and the brain, being highly
plastic, can learn to interpret them in visual terms. Perceptual levels of the brain
interpret the spatially encoded neural activity, modified and augmented by
nonsynaptic and other brain plasticity mechanisms (Bach-y-Rita, 1972, 1995,
1999, in press). However, the cognitive value of that information is not merely a
process of image analysis. Perception of the image relies on memory, learning, contextual interpretation (e.g., we perceive the intent of the driver in the slight lateral movements of a car in front of us on the highway), and cultural and other social factors that are probably exclusively human characteristics and that provide “qualia” (Bach-y-Rita, 1996b). This is the basis for our tactile vision substitution system
(TVSS) studies that, starting in 1963, have demonstrated that visual information
and the subjective qualities of seeing can be obtained tactually using sensory substitution systems.¹ The descriptions of studies with this system have been taken from our previous reports. Various sensory substitution devices intended for rehabilitation have been extensively reviewed by Collins (1985), Kaczmarek and Bach-y-Rita (1995), and Szeto and Riso (1990), while Mann (1997, 1998) describes a clothing-based radar-to-vibrotactile system that uses novel signal processing techniques to tactually convey both reflected energy and velocity information for both artistic and rehabilitative purposes.

INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION, 15(2), 285–295
Copyright © 2003, Lawrence Erlbaum Associates, Inc.
Requests for reprints should be sent to Paul Bach-y-Rita, Department of Rehabilitation Medicine, University of Wisconsin, 1300 University Ave, Room 2756, Madison, WI 53706. E-mail: pbachyri@

¹The term “sensory substitution” is typically defined as the use of one human sense to receive information normally received by another sense (Kaczmarek, 1995). However, in the context of mediated reality systems, which may incorporate multiple modalities of both sensing and display, the use of one sense (in this article, touch) to display information normally acquired via another human sense (e.g., visual information acquired by a video camera), or alternatively via a “non-natural” sense such as sonar ranging, could be considered to be a form of sensory augmentation (i.e., addition of information to an existing sensory channel). This usage is supported by the existence of transmodal perception—the observation that multiple human senses (even artificial ones) can mediate similar perceptual constructs based on underlying information that is essentially amodal, or not specific to any given sensory system (Epstein, 1985). We therefore suggest that, at least in multimodality systems, new nomenclature may be needed to independently specify (a) the source of the information (type of environmental sensor, or virtual model); (b) the type of human information display (visual, auditory, tactual, etc.); and finally (c) the role of the information (substitutive or augmentative), all of which may play a role in reality mediation.

The TVSS may be characterized as a humanistic intelligence system. It represents a symbiosis between instrumentation—for example, an artificial sensor array (TV camera)—computational equipment, and the human user. Consistent with the terminology of this issue, this is made possible by “instrumental sensory plasticity,” the capacity of the brain to reorganize when there is (a) functional demand, (b) the sensor technology to fill that demand, and (c) the training and psychosocial factors that support the functional demand. To constitute such systems, then, it is only necessary to present environmental information from an artificial sensor in a form of energy that can be mediated by the receptors at the human-machine interface, and for the brain, through a motor system (e.g., a head-mounted camera under the motor control of the neck muscles), to determine the origin of the information.

2. HUMAN-MACHINE INTERFACE: THE SKIN

A simple example of a sensory substitution system is a blind person navigating with a long cane, who perceives a step, a curb, a foot, and a puddle of water, but during this task is not aware of the sensations in the hand (where the biological sensors are located), or of moving the arm and hand holding the cane. Rather, he perceives elements in his environment as mental images derived from tactile information originating from the tip of the cane. This can now be extended into other domains with modern technology and the availability of artificial sensory receptors, such as (a) a miniature TV camera for blind persons, (b) a MEMS-technology accelerometer for providing substitute vestibular information for persons with bilateral vestibular loss, (c) touch and shear-force sensors to provide information for spinal cord injured persons, (d) an instrumented condom for replacing lost sex sensation, or (e) a sensate robotic hand (Bach-y-Rita, 1999).

In our first sensory substitution project, we developed tactile vision substitution systems (TVSS) to deliver visual information to the brain via arrays of stimulators in contact with the skin of one of several parts of the body (abdomen, back,
thigh). Optical images picked up by a TV camera were transduced into a form of
energy (vibratory or direct electrical stimulation) that could be mediated by the
skin receptors. In these sensory substitution systems, the visual information
reaches the perceptual levels for analysis and interpretation via somatosensory
pathways and structures. After sufficient training with the TVSS, our subjects re-
ported experiencing the image in space, instead of on the skin (see, e.g., Figure 1).
They learn to make perceptual judgments using visual means of analysis, such as
perspective, parallax, looming and zooming, and depth judgments (Bach-y-Rita,
Collins, Saunders, White, & Scadden, 1969; cf., Bach-y-Rita, 1972, 1989, 1995,
1996, 1999; Bach-y-Rita, Kaczmarek, & Meier, 1998; Bach-y-Rita, Kaczmarek, Ty-
ler, & Garcia-Lara, 1998; Bach-y-Rita, Webster, Tompkins, & Crabb, 1987;
Kaczmarek & Bach-y-Rita, 1995; White, Saunders, Scadden, Bach-y-Rita, & Collins, 1970).

Although the TVSS systems have only had between 100 and 1032 point arrays, the low resolution has been sufficient to perform complex perception and “eye”-hand coordination tasks. These have included facial recognition and accurate judgment of the speed and direction of a rolling ball, with over 95% accuracy in batting; inspection-assembly tasks were performed on an electronics company assembly line with a 100 point vibrotactile array clipped to the work-bench against which the blind worker leaned, and the image from a camera (substituting for the ocular piece of a dissection microscope) was delivered to the human-machine interface (Bach-y-Rita, 1995, pp. 187–193).

Figure 1. Child reproducing perceived image of a teacher’s hand as displayed on a modified Optacon. The tactile image is picked up with one finger statically placed on the 6 × 24 vibrotactile array. The LED monitor in the foreground is a visual representation of the active pattern on the tactile display.
3. TACTILE “COLORS”
In the TVSS studies cited above, the stimulus arrays presented only black-white
information, without gray scale. However, the tongue electrotactile system does
present gray-scaled pattern information, and multimodal and multidimensional
stimulation is possible. Simultaneously, we have also modeled the electrotactile
stimulation parameter space to determine how we might elicit tactile “colors.”
Aiello (1998a, 1998b) has identified six stimulus parameters: the current level, the
pulse width, the interval between pulses, the number of pulses in a burst, the
burst interval, and the frame rate. All six parameters in the waveforms can, in
principle, be varied independently within certain ranges, and may elicit poten-
tially distinct responses. For example, in a study of electrical stimulation of the
skin of the abdomen, Aiello (1998a) suggested that the best way to encode inten-
sity information independent of other percept qualities with a multidimensional
stimulus waveform was through modulation of the energy delivered by the stim-
ulus. In that case, the energy was varied in such a way that the displacement in
the parameter space, corresponding to a given transition between energy levels,
was minimal (gradient mode of stimulation). Although the gradient mode of
stimulation requires a real-time fulfillment of mathematical constraints among all
the parameters, its implementation could be included within a microelectronic
package for signal treatment.
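As a rough sketch of the parameter space just described, the energy of a burst and a gradient-mode step can be written as follows. The function names and the resistive-load energy model are ours, for illustration only; they are not the published implementation.

```python
import math

def burst_energy(current_ma, pulse_width_us, pulses_per_burst, load_ohms=1000.0):
    """Approximate energy per burst (microjoules), assuming rectangular
    current pulses into a purely resistive load."""
    i = current_ma * 1e-3
    t = pulse_width_us * 1e-6
    return (i ** 2) * load_ohms * t * pulses_per_burst * 1e6

def gradient_step(params, target_uj, candidates, tol=0.05):
    """Gradient mode (sketch): among candidate settings whose energy is
    within tol of the target, return the one closest to the current
    setting in parameter space (minimal displacement)."""
    best, best_dist = None, float("inf")
    for cand in candidates:
        if abs(burst_energy(*cand) - target_uj) <= tol * target_uj:
            dist = math.dist(params, cand)  # Euclidean displacement
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```

For example, starting from (2.0 mA, 40 µs, 3 pulses) and targeting a higher energy, the gradient step prefers a small current change over a numerically large pulse-width change. A practical implementation would normalize the parameters before computing displacement, since the raw units (mA, µs, pulse counts) are not commensurable, and would enforce the mathematical constraints among all six parameters in real time.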
4. A TONGUE-BASED MACHINE INTERFACE
The skin systems we have used previously have allowed us to demonstrate the
principle, but have had practical problems (Bach-y-Rita, Kaczmarek, Tyler, et al.,
1998; Kaczmarek & Bach-y-Rita, 1995). The tongue interface overcomes many of these problems: the presence of an electrolytic solution, saliva, assures good electrical contact. The results obtained with a small electrotactile array developed for a study of form perception
with a finger tip demonstrated that perception with electrical stimulation of the
tongue is somewhat better than with finger-tip electrotactile stimulation, and that the tongue requires only about 3% of the voltage (5–15 V), and much less current (0.4–2.0 mA), than the finger-tip. The electronic system has been described elsewhere (Bach-y-Rita, Kaczmarek, Tyler, et al., 1998).
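For orientation, the voltage comparison above implies a wide drive range at the fingertip; this is simple arithmetic on the stated figures:

```python
# If the tongue needs only ~3% of the fingertip voltage, the 5-15 V tongue
# range implies a much larger fingertip range.
tongue_volts = (5.0, 15.0)
fingertip_volts = tuple(v / 0.03 for v in tongue_volts)  # roughly 167-500 V
```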
The development of a practical human-machine interface, via the tongue
(Bach-y-Rita, Kaczmarek, & Meier, 1998; Bach-y-Rita, Kaczmarek, Tyler, et al.,
1998), presents the opportunity to progress from laboratory prototypes to useful
devices. The interface should permit the development of sensory systems that are practical and cosmetically acceptable; information presented on the two-dimensional display on the tongue array can reach the brain and, with a training program, can become part of a new sensory system.²
4.1. Tongue Display Unit
We have compared intensity-control methods (current vs. voltage control) for use on the fingertip and tongue (Kaczmarek & Tyler, 2000). Our results indicated that for the tongue, the latter (voltage control) has somewhat preferable stimulation qualities; future versions of the stimulator may be fabricated using MEMS technology. The present tongue display unit (TDU, see Figure 2) has output coupling capacitors in series with each electrode to guarantee zero dc current and minimize potential skin irritation. The output resistance is approximately 1 kΩ.
The design also employs switching circuitry to allow all electrodes that are not ac-
tive or “on image” to serve as the electrical ground for the array, affording a return
path for the stimulation current.
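The switching scheme can be sketched as follows; the function and variable names are ours, for illustration only:

```python
# Electrodes not "on image" are switched to serve as the distributed
# ("virtual") ground return for the stimulation current.  The 12 x 12
# size matches the array described in Section 4.2.
def partition_electrodes(frame):
    """frame: 12x12 nested list of 0/1 pattern values.
    Returns (driven, ground) lists of (row, col) electrode indices."""
    driven, ground = [], []
    for r, row in enumerate(frame):
        for c, on in enumerate(row):
            (driven if on else ground).append((r, c))
    return driven, ground

# Example: a diagonal-line pattern drives 12 electrodes; the remaining
# 132 electrodes form the current return path.
diagonal = [[1 if r == c else 0 for c in range(12)] for r in range(12)]
driven, ground = partition_electrodes(diagonal)
```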
4.2. Tongue Display
Electrotactile stimuli are delivered to the dorsum of the tongue via flexible elec-
trode arrays (Figure 3) placed in the mouth, with connection to the stimulator ap-
paratus (TDU) via a flat cable passing out of the mouth. The tongue electrode ar-
ray and cable are made of a thin (100 µm) strip of polyester material (Mylar®)
onto which a rectangular matrix of gold-plated copper circular electrodes has
been deposited by a photolithographic process similar to that used to make
printed circuit boards. In the “virtual ground” configuration shown, the elec-
trodes are separated by 2.34 mm (center to center). All exposed metallic surfaces
are gold plated for biocompatibility. Our present model has a 12 × 12 matrix of
electrodes, although other patterns are easily fabricated with changes in litho-
graphic artwork. Our studies have shown that this configuration is much more
rugged and reliable, and spatial resolution performance was comparable to that
for the same geometry with a concentric ground plane (Kaczmarek & Tyler, 2000).
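A quick check of the stated geometry (illustrative arithmetic, not from the paper):

```python
# Physical extent implied by 12 electrodes per side at 2.34 mm
# center-to-center spacing.
pitch_mm = 2.34
n_per_side = 12
span_mm = (n_per_side - 1) * pitch_mm  # distance between outermost centers
# span_mm is about 25.7 mm per side
```

This is consistent with the approximately 27 × 27 mm stimulator sheet mentioned in Section 5, once electrode diameter and edge margins are added.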
²It should be noted that while the tongue display described in this article is, strictly speaking, a display or output device, there is nothing to prevent the incorporation of pressure sensors or electrical impedance monitoring to sense the position and force of the tongue against the array, thereby creating a
bidirectional mouth-based interface. It should also be noted that the tongue display is not presently instrumented with such sensors (Savoy et al., 1998).
4.3. Waveform Parameters
The electrotactile stimulus consists of 40-µs pulses delivered sequentially to each
of the active electrodes in the pattern. Bursts of three pulses each are delivered at
a rate of 50 Hz with a 200 Hz pulse rate within a burst. This structure was shown
previously to yield strong, comfortable electrotactile percepts (Kaczmarek et al.,
1992). Positive pulses are used because they yield lower thresholds and a supe-
rior stimulus quality on the fingertips (Kaczmarek et al., 1994) and on the tongue
(Kaczmarek, unpublished pilot studies).
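The timing structure described above can be sketched as a schedule generator (function and parameter names are ours, for illustration):

```python
def pulse_onsets(duration_s=0.1, burst_rate_hz=50, pulses_per_burst=3,
                 pulse_rate_hz=200):
    """Onset times (in seconds) of the 40-us pulses: bursts of three
    pulses at a 50 Hz burst rate, with a 200 Hz pulse rate (5 ms
    spacing) within each burst."""
    onsets = []
    for b in range(round(duration_s * burst_rate_hz)):
        for k in range(pulses_per_burst):
            onsets.append(b / burst_rate_hz + k / pulse_rate_hz)
    return onsets

# 0.1 s at 50 bursts/s yields 5 bursts of 3 pulses each.
onsets = pulse_onsets()
```

Note that the per-electrode duty cycle is very low: 3 pulses × 50 bursts/s × 40 µs is only 6 ms of stimulation per second, or 0.6%.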
4.4. Perceived Pattern Intensity Compensation
In previous studies, we have determined that both the threshold of sensation and the useful range of intensity, as a function of location on the tongue, are significantly inhomogeneous. Specifically, the front and medial portions of the tongue have a relatively low threshold of sensation, whereas the thresholds for the rear and lateral regions of the stimulation area are as much as 32% higher (Tyler & Braun, 2000). We believe that this is due primarily to differences in tactile sensor density across the tongue. The perceived intensity of stimulation also varies as a function of location, in a pattern similar to that for threshold. To compensate for this sensory inhomogeneity, we have developed a set of algorithms that allows the user to individually adjust both the mean stimulus level and the range of available intensity (as a function of tactor location) on the tongue. The algorithms are based on a linear regression model of the experimental data.

Figure 2. TDU-1 with 144-point tongue array and laptop PC.
5. IMPLICATIONS FOR MEDIATED PERCEPTION
It now appears possible to develop tactile human-machine interface systems that
are practical and cosmetically acceptable. For blind persons, a miniature TV camera, the microelectronic package for signal treatment, the optical and zoom systems, and an FM transmitter to deliver the modified image wirelessly could be included in a glasses frame. For the mouth, an electrotactile display, a microelectronics package, a battery compartment, and the FM receiver will be built into a dental retainer. The stimulator array could be a sheet of electrotactile stimulators of approximately 27 × 27 mm. All of the components, including the array, could be a standard package that attaches to the molded retainer, with the components fitting into the molded spaces of standard dimensions. Although the present system uses 144 tactile stimulus electrodes, future systems may use more points within the same conceptual design.

Figure 3. Close-up of the 144-point (12 × 12) “virtual ground” electrotactile tongue array.
For all applications, the tongue display system may be essentially the same,
but the source of the information to be delivered to the brain through the hu-
man-machine interface would determine the sensor instrumentation for each ap-
plication. Thus, as examples, for hand amputees or quadriplegics, the source
would be sensors on the surface of the hand prosthesis; for astronauts, the source
would be sensors on the surface of the astronaut glove; for night vision, the
source would be an infra-red camera. Similarly, for blind persons the system
would employ a camera sensitive to the visible spectrum; and for pilots and race
car drivers whose primary goal is to avoid the retinal delay (much greater than
the signal transduction delay through the tactile system) in the reception of infor-
mation requiring very fast responses, the source would be built into devices at-
tached to the automobile or airplane. Robotics and underwater exploration sys-
tems would require other instrumentation configurations, each with wireless
transmission to the tongue display.
Examples of these potential applications include a microgravity sensor that
could provide vestibular information to an astronaut or a high performance pilot,
and robotic and minimally invasive surgery devices that include MEMS technol-
ogy sensors to provide touch, pressure, shear force, and temperature information
to the surgeon, so that a cannula being manipulated into the heart could be “felt”
as if it were the surgeon’s own finger. Present recreational activities can be ex-
panded: a video game might include dimensions of a simulated situation that are
not transmittable via the visual and auditory interfaces used by present video
games, and a moving picture might provide a wide range of information through
the tactile human-machine interface from artificial sensors. We have received
DARPA support to explore the feasibility of presenting navigational and other in-
formation to Navy SEALs under water, and comparable applications are being
explored at present.
For mediated reality systems using visible or infrared light sensing, the image
acquisition and processing can now be performed with advanced CMOS based
photoreceptor arrays that mimic some of the functions of the human eye. They of-
fer the attractive possibility to convert light into electrical charge and to collect and
further process the charge on the same chip. These “Vision Chips” permit compact, low-power designs that are particularly well suited to portable vision mediation systems. A prototype camera chip has been fabricated in the conventional 1.2 µm double-metal double-poly CMOS process. The chip features adaptive photoreceptors with logarithmic compression of the incident light
intensity. The logarithmic compression is achieved with a FET operating in the
sub-threshold region and the adaptation by a double feedback loop with different
gains and time constants. The double feedback system generates two different log-
arithmic response curves for static and dynamic illumination respectively follow-
ing the model of the human retina.
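A toy model of this adaptive logarithmic response, simplified to a single adaptation loop with illustrative constants (the chip itself uses a double feedback loop, and its actual parameters are not reproduced here):

```python
import math

def adaptive_log_response(intensities, dt=1e-3, i0=1.0,
                          tau_slow=0.5, k_adapt=0.7):
    """Logarithmic compression followed by a slow adaptation loop, so the
    response to changing illumination exceeds the settled static response."""
    adapt, out = 0.0, []
    for i in intensities:
        v = math.log(1.0 + i / i0)              # logarithmic compression
        adapt += (dt / tau_slow) * (v - adapt)  # slow adaptation state
        out.append(v - k_adapt * adapt)         # adapted output
    return out

# A step in illumination: the initial transient is larger than the
# adapted steady-state response, as in the retinal model.
step = adaptive_log_response([100.0] * 2000)
```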
The vision substitution system has been discussed here in the context of a hu-
manistic intelligence system. In some ways, the use of such systems differs from
the use of natural sensory systems. For example, we found that while experi-
enced blind TVSS subjects could perceive faces and printed images, they were
very disappointed when perception was not accompanied by qualia: A Playboy
centerfold carried no emotional message, and the face of a girl-friend or a wife
created an unpleasant response since it did not convey an affective message. We
consider this to be comparable to the lack of emotional contact of curse-words in
a language that has been learned as an adult. It is possible that the emotional
content could be developed over a long period of usage. On the other hand, a
blind infant using a vision substitution system smiles when he recognizes a toy
and reaches for it, and a blind 10-year-old child perceiving a flickering candle
flame by means of a TVSS is enchanted. These issues of qualia have been ex-
plored elsewhere (Bach-y-Rita, 2002).
The TVSS is one of a class of humanistic intelligence systems made possible by instrumental
sensory plasticity. Comparable systems can be developed for persons with other
sensory losses, such as deafness, tactile sensation loss (e.g., caused by Leprosy or
diabetes), and bilateral vestibular loss. Given that it is possible to provide information to the brain from artificial sensors, it should be possible to develop humanistic intelligence systems for not only
sensory loss, but also to augment and manipulate that sensory information to af-
ford the user a superior perceptual experience.
In comparison to biology, human-machine interface technology is in its early infancy. The brain is much more complex and efficient than any present or even foreseeable computer. Highly developed functions, such as the ability to identify and localize potential mates at great distances in moths, and the pattern perception and homing and complex motor and social behavior of Monarch butterflies, defy simulation on the most advanced computers (Bach-y-Rita, 1995), and all of that is managed in a tiny speck of neural tissue. Technology is only beginning to gain access to biological-like capabilities.
The instrumental aspects of humanistic intelligence devices based on instru-
mental sensory plasticity provide only information acquisition, but training and
practical use will lead to function comparable to our biological systems. Al-
though the technology of the devices is crude in comparison to the biological sys-
tems, the brain is a very noise-tolerant processor. It can put up with a messy im-
age on the retina (especially under adverse conditions such as a foggy night), and
extract precise information by processing methods that we are just beginning to
understand. From this ensemble of data, it can obtain a complex experience, including qualia. It is the enormous plasticity of the brain that allows us to develop
humanistic intelligence devices.
REFERENCES

Aiello, G. L. (1998a). Multidimensional electrocutaneous stimulation. IEEE Transactions on Rehabilitation Engineering, 6, 1–7.
Aiello, G. L. (1998b). Tactile colors in artificial sensory communication. In Proceedings of the
1998 International Symposium on Information Theory & its Applications (pp. 82–86). Mexico
City, Mexico: National Polytechnic Institute of Mexico.
Ajdukovic, D. (1984). The relationship between electrode areas and sensory qualities in electrical human tongue stimulation. Acta Oto-Laryngologica, 98, 152–157.
Bach-y-Rita, P. (1972). Brain mechanisms in sensory substitution. New York: Academic Press.
Bach-y-Rita, P. (1989). Physiological considerations in sensory enhancement and substitution. Europa Medicophysica, 2, 107–128.
Bach-y-Rita, P. (1995). Nonsynaptic diffusion neurotransmission and late brain reorganization.
New York: Demos-Vermande.
Bach-y-Rita, P. (1996). Conservation of space and energy in the brain. Restorative Neurology
and Neuroscience, 10, 1–3.
Bach-y-Rita, P. (1999). Theoretical aspects of sensory substitution and of neurotransmit-
ter-related reorganization in spinal cord injury. Spinal Cord, 37, 465–474.
Bach-y-Rita, P. (2000). Conceptual issues relevant to present and future neurologic rehabili-
tation. In H. Levin & J. Grafman (Eds.), Neuroplasticity and reorganization of function after
brain injury (pp. 357–379). New York: Oxford University Press.
Bach-y-Rita, P. (2002). Sensory substitution and qualia. In A. Noe & E. Thompson (Eds.), Vision and mind (pp. 497–514). Cambridge, MA: MIT Press.
Bach-y-Rita, P., Collins, C. C., Saunders, F., White, B., & Scadden, L. (1969). Vision substitution by tactile image projection. Nature, 221, 963–964.
Bach-y-Rita, P., Kaczmarek, K., & Meier, K. (1998). The tongue as a man-machine interface: A
wireless communication system. In Proceedings of the 1998 International Symposium on In-
formation Theory & Its Applications (pp. 79–81). Mexico City, Mexico: National Polytechnic
Institute of Mexico.
Bach-y-Rita, P., Kaczmarek, K., Tyler, M., & Garcia-Lara, J. (1998). Form perception with a 49-point electrotactile stimulus array on the tongue. Journal of Rehabilitation Research and Development, 35, 427–430.
Bach-y-Rita, P., Webster, J., Tompkins, W., & Crabb, T. (1987). Sensory substitution for space
gloves and for space robots. In G. Rodriques (Ed.), Proceedings of the Workshop on Space
Robots (pp. 51–57). Pasadena, CA: Jet Propulsion Laboratories.
Collins, C. C. (1985). On mobility aids for the blind. In D. H. Warren & E. R. Strelow (Eds.), Electronic spatial sensing for the blind (pp. 35–64). Dordrecht, The Netherlands: Martinus Nijhoff.
Epstein, W. (1985). Amodal information and transmodal perception. In D. H. Warren & E. R. Strelow (Eds.), Electronic spatial sensing for the blind (pp. 421–430). Dordrecht, The Netherlands: Martinus Nijhoff.
Kaczmarek, K. A. (1995). Sensory augmentation and substitution. In J. D. Bronzino (Ed.),
CRC handbook of biomedical engineering (pp. 2100–2109). Boca Raton, FL: CRC Press.
Kaczmarek, K. A., & Bach-y-Rita, P. (1995). Tactile displays. In W. Barfield & T. Furness, III
(Eds.), Virtual environments and advanced interface design (pp. 349–414). Oxford, England:
Oxford University Press.
Kaczmarek, K. A., & Tyler, M. E. (2000). Effect of electrode geometry and intensity control method on comfort of electrotactile stimulation on the tongue. Proceedings of the American Society of Mechanical Engineers, Dynamic Systems and Control Division, 1239–1243.
Kaczmarek, K. A., Webster, J. G., & Radwin, R. G. (1992). Maximal dynamic range electrotactile stimulation waveforms. IEEE Transactions on Biomedical Engineering, 39(7).
Mann, S. (1997). VibraVest/ThinkTank: Existential technology of synthetic synesthesia for
the visually challenged. Paper presented at the Eighth International Symposium on Elec-
tronic Arts, Art Institute of Chicago.
Mann, S. (1998). Humanistic computing: “WearComp” as a new framework and application for intelligent signal processing. Proceedings of the IEEE, 86(11), 2123–2151.
Szeto, A. Y. J., & Riso, R. R. (1990). Sensory feedback using electrical stimulation of the tactile sense. In R. V. Smith & J. H. Leslie, Jr. (Eds.), Rehabilitation engineering (pp. 29–78). Boca
Raton, FL: CRC Press.
Tyler, M. E., & Braun, J. G. (2000). Spatial mapping of electrotactile sensation threshold and range on the tongue. Manuscript submitted for publication.
White, B. W., Saunders, F. A., Scadden, L., Bach-y-Rita, P., & Collins, C. C. (1970). Seeing with the skin. Perception & Psychophysics, 7, 23–27.