The brain as a flexible task machine: implications for visual rehabilitation using noninvasive vs. invasive approaches.
ABSTRACT: The emerging view of the brain as highly flexible, task-based rather than sensory-based, raises the prospects for visual rehabilitation, long considered unachievable, given adequate training in teaching the brain how to see. Recent advances in rehabilitation approaches, both noninvasive, such as sensory substitution devices (SSDs), which present visual information using sound or touch, and invasive, such as visual prostheses, may potentially be used to achieve this goal, each alone and, most preferably, together.
Visual impairments and these solutions are being used as a model for answering fundamental questions, ranging from basic cognitive neuroscience, showing that several key visual brain areas are actually highly flexible, modality-independent and, as was recently shown, even visual-experience-independent task machines, to technological and behavioral developments allowing blind persons to 'see' using SSDs and other approaches.
SSDs can potentially be used as a research tool for assessing the brain's functional organization; as an aid for the blind in daily visual tasks; to visually train the brain prior to invasive procedures, taking advantage of the 'visual' cortex's flexibility and task specialization even in the absence of vision; and to augment postsurgery functional vision using a unique SSD-prosthesis hybrid. Taken together, the reviewed results suggest a brighter future for visual neurorehabilitation.
ABSTRACT: We have recently demonstrated using fMRI that a region within the human lateral occipital complex (LOC) is activated by objects when either seen or touched. We term this cortical region LOtv, for the lateral occipital tactile-visual region. We report here that LOtv voxels tend to be located in sub-regions of LOC that show preference for graspable visual objects over faces or houses. We further examine the nature of object representation in LOtv by studying its response to stimuli in three modalities: auditory, somatosensory and visual. If objects activate LOtv irrespective of the modality used, the activation is likely to reflect a highly abstract representation. In contrast, activation specific to vision and touch may reflect common and exclusive attributes shared by these senses. We show here that while object activation is robust in both the visual and the somatosensory modalities, auditory signals do not evoke substantial responses in this region. The lack of auditory activation in LOtv cannot be explained by differences in task performance or by an ineffective auditory stimulation. Unlike vision and touch, auditory information contributes little to the recovery of the precise shape of objects. We therefore suggest that LOtv is involved in recovering the geometrical shape of objects. Cerebral Cortex 12/2002; 12(11):1202-12.
ABSTRACT: This study compares the 'tactile-visual' acuity of the tongue for 15 early blind participants with that of 24 age-matched and sex-matched sighted controls. Snellen's tumbling E test was used to assess 'visual' acuity using the tongue display unit, a sensory substitution device that converts a visual stimulus grabbed by a camera into electro-tactile pulses delivered to the tongue via an electrode grid. No overall significant difference was found in thresholds between early blind (1/206) and sighted control (1/237) participants. We found, however, a larger proportion of early blind participants in the two highest visual acuity categories (1/150 and 1/90). These results extend earlier findings that it is possible to measure visual acuity in blind individuals using the tongue. Moreover, our data demonstrate that a subgroup of early blind participants is more efficient than controls at perceiving visual information conveyed through the tongue. Neuroreport 01/2008; 18(18):1901-4.
ABSTRACT: To determine to what extent subjects implanted with the Argus II retinal prosthesis can improve performance compared with residual native vision in a spatial-motor task. High-contrast square stimuli (5.85 cm sides) were displayed in random locations on a 19″ (48.3 cm) touch screen monitor located 12″ (30.5 cm) in front of the subject. Subjects were instructed to locate and touch the square centre with the system on and then off (40 trials each). The coordinates of the square centre and the location touched were recorded. Ninety-six percent (26/27) of subjects showed a significant improvement in accuracy and 93% (25/27) showed a significant improvement in repeatability with the system on compared with off (p<0.05, Student t test). A group of five subjects that had both accuracy and repeatability values <250 pixels (7.4 cm) with the system off (ie, using only their residual vision) was significantly more accurate and repeatable than the remainder of the cohort (p<0.01). Of this group, four subjects showed a significant improvement in both accuracy and repeatability with the system on. In a study on the largest cohort of visual prosthesis recipients to date, we found that artificial vision augments information from existing vision in a spatial-motor task. Clinical trials registry no. NCT00407602. The British Journal of Ophthalmology 09/2010; 95(4):539-43.
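The two outcome measures in this task — accuracy (how far touches land from the target centre) and repeatability (how tightly the touches cluster) — can be sketched in a few lines. This is an illustrative reconstruction under assumed definitions, not the study's analysis code, and the trial coordinates below are invented:

```python
import numpy as np

def accuracy(targets, touches):
    """Mean Euclidean distance (pixels) between each target centre
    and the location actually touched -- lower is better."""
    t, x = np.asarray(targets, float), np.asarray(touches, float)
    return float(np.mean(np.linalg.norm(x - t, axis=1)))

def repeatability(targets, touches):
    """Mean distance of the touch errors from their own centroid,
    i.e. the spread of responses once any constant bias is removed."""
    err = np.asarray(touches, float) - np.asarray(targets, float)
    return float(np.mean(np.linalg.norm(err - err.mean(axis=0), axis=1)))

# Toy data: 5 trials, touches offset by a constant bias plus small noise.
targets = np.array([[100, 100], [400, 300], [250, 500], [600, 120], [320, 240]])
touches = targets + np.array([30, -20]) + np.array([[2, 1], [-1, 3], [0, -2], [1, 0], [-2, -2]])
print(accuracy(targets, touches))       # dominated by the constant ~36-pixel bias
print(repeatability(targets, touches))  # small: responses are consistent
```

Splitting constant bias from spread is one plausible way to make the two measures dissociate, as they do in the reported data.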
Current Biology 21, 363–368, March 8, 2011. © 2011 Elsevier Ltd. All rights reserved. DOI 10.1016/j.cub.2011.01.040
A Ventral Visual Stream Reading Center
Independent of Visual Experience
Lior Reich,1 Marcin Szwed,4,5,6,7 Laurent Cohen,4,5,8
and Amir Amedi1,2,3,*
1Department of Medical Neurobiology, Institute for Medical
Research Israel-Canada, Faculty of Medicine
2The Edmond and Lily Safra Center for Brain Sciences
3Cognitive Science Program
The Hebrew University of Jerusalem, Jerusalem 91220, Israel
4Université Pierre et Marie Curie–Paris 6, Faculté de Médecine
Pitié-Salpêtrière, IFR 70, 75006 Paris, France
5INSERM, ICM Research Center, UMRS 975,
75634 Paris, France
6INSERM, Cognitive Neuroimaging Unit U992
7NeuroSpin Center, Commissariat a ` l’E´nergie Atomique
IFR 49, 91191 Gif-sur-Yvette, France
8Department of Neurology, Assistance Publique–Hôpitaux
de Paris, Groupe Hospitalier Pitié-Salpêtrière,
75651 Paris Cedex 13, France
The visual word form area (VWFA) is a ventral stream visual
area that develops expertise for visual reading [1–3]. It is
activated across writing systems and scripts [4, 5] and
encodes letter strings irrespective of case, font, or location
in the visual field, with striking anatomical reproducibility
across individuals. In the blind, comparable reading
expertise can be achieved using Braille. This study investi-
gated which area plays the role of the VWFA in the blind.
One would expect this area to be at either parietal or bilateral
occipital cortex, reflecting the tactile nature of the task and
crossmodal plasticity, respectively [7, 8]. However, accord-
ing to the metamodal theory, which suggests that brain
areas are responsive to a specific representation or compu-
tation regardless of their input sensory modality, we pre-
dicted recruitment of the left-hemispheric VWFA, identically
to the sighted. Using functional magnetic resonance
imaging, we show that activation during Braille reading in
blind individuals peaks in the VWFA, with striking anatom-
ical consistency within and between blind and sighted.
Furthermore, the VWFA is reading selective when con-
trasted to high-level language and low-level sensory
controls. Thus, we propose that the VWFA is a metamodal
reading area that develops specialization for reading regard-
less of visual experience.
Activation of the Visual Word Form Area during Braille
Words Reading in the Congenitally Blind
In order to investigate whether the visual word form area
(VWFA) is activated while reading words regardless of sensory
modality and visual experience, we used functional magnetic
resonance imaging (fMRI) to image the neural activity in eight
congenitally blind subjects while reading via touch using
Braille. Before statistical analysis, standard preprocessing
procedures were performed (see the Supplemental
Information available online for detailed experimental proce-
dures). For the main contrast of Braille words reading versus
nonsense Braille, data were analyzed on several levels using
various approaches: (1) region of interest (ROI) analysis in
the sighteds’ VWFA, (2) whole-brain group analysis, (3) proba-
bilistic mapping showing the consistent activations across
individuals, and (4) distribution plot of blind and sighted indi-
viduals’ peaks, based on single-subject analysis.
We first looked at the blinds’ pattern of activation in the
VWFA ROI, as reported originally in sighted subjects
(Talairach coordinates [TC] −42, −57, −6). The result
was clear cut: we found a highly significant preference for
Braille words over nonsense Braille stimuli in the canonical
left-hemispheric VWFA (p < 10⁻⁷, t = 9.270; Figure 1A).
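The logic of this ROI test — sample each subject's response amplitude in the sighteds' VWFA and contrast the two conditions across subjects — can be sketched as follows. The beta values are invented for illustration, and the paired t-test stands in for the GLM statistics used in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject beta estimates sampled from the VWFA ROI
# (8 congenitally blind subjects, two conditions); invented numbers.
beta_words    = np.array([1.0, 1.3, 0.9, 1.2, 1.1, 0.8, 1.4, 1.0])  # Braille words
beta_nonsense = np.array([0.1, 0.4, 0.2, 0.3, 0.0, 0.2, 0.3, 0.1])  # nonsense Braille

# Paired test of the contrast "Braille words > nonsense Braille"
t, p = stats.ttest_rel(beta_words, beta_nonsense)
print(f"t({len(beta_words) - 1}) = {t:.2f}, p = {p:.2g}")
```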
We next investigated whether the VWFA is the main peak of
activation or whether it is just one of many brain areas more
responsive to Braille words than to nonsense Braille. To this
end, we performed a whole-brain analysis of this contrast,
masked by voxels that were significantly activated by Braille
words versus baseline (thus discarding areas showing negli-
gible activation or even deactivation to Braille words but a
larger deactivation to nonsense Braille). We found robust acti-
vation in the entire left ventral occipitotemporal cortex all the
way to V1 (Figure 1B; see [13–15]). Critically, the blind group’s
peak of activation (i.e., the most significant cluster across the
entire brain) was located specifically in the occipitotemporal
sulcus, at coordinates practically identical to those reported
in sighted (Figures 1B and 1C; blind TC −38, −60, −8; sighted
TC −42, −57, −6; the difference between the two groups’
peaks was within 1–2 functional voxels in all axes). Thus, the
VWFA is the area most selective to reading, independent of
the modality in which words are presented.
Anatomical Selectivity and Reproducibility of the Blinds’
VWFA across Individual Subjects
The sighteds’ VWFA shows striking anatomical reproducibility
across individual subjects. Is it also reproducible in
the blind? To answer this, we created a probabilistic map (Fig-
ure 2A) showing the overlap and reproducibility between all
individual blind subjects for the main contrast. The coordi-
nates of the most consistent area across the entire brain,
activated in 100% of our blind subjects, were again virtually
identical to the sighteds’ VWFA peak (blind TC −39, −59,
−7; sighted TC −42, −57, −6).
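A probabilistic map of this kind is simply the voxelwise fraction of subjects whose thresholded single-subject map was significant. A minimal sketch with toy binary volumes standing in for real statistical maps (shapes and values are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 8

# One small binary volume per subject: True where that subject's
# contrast passed threshold (random toy "activations" here).
maps = rng.random((n_subjects, 4, 4, 4)) > 0.7
maps[:, 1, 2, 3] = True   # one voxel made significant in every subject

# Probabilistic map = across-subject mean = fraction of subjects
# activating each voxel.
prob_map = maps.mean(axis=0)

# Voxels consistent across 100% of subjects:
full_overlap = np.argwhere(prob_map == 1.0)
print(full_overlap)
```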
We further explored this reproducibility by plotting all indi-
vidual subjects’ peaks for the main contrast, sampled using
the same criterion reported in the sighted (the individual
subject peak closest to the group analysis peak). The plot
clearly showed that all individual peaks were very closely
packed around the occipitotemporal sulcus (Figure 2B,
marked in red circles). Interestingly, the variance in the peak
locations among the blind was very small in all three axes
and was similar to that of the sighted, with a trend for even
smaller variance in the blind in the y and z axes (sighted data
from the original study; standard deviation: blind x = 3.4, y = 3.6, z = 3.3;
sighted x = 3.4, y = 5.4, z = 5.8).
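The per-axis spread reported here is just the standard deviation of the single-subject peak coordinates along x, y, and z; a sketch with hypothetical Talairach peaks chosen to mimic the reported scale:

```python
import numpy as np

# Hypothetical single-subject peak coordinates (Talairach, mm) for
# eight blind subjects; invented values scattered around the group peak.
peaks = np.array([
    [-38, -60,  -8], [-42, -57,  -5], [-35, -62, -10], [-40, -55,  -6],
    [-37, -63,  -9], [-41, -58,  -4], [-34, -59, -11], [-39, -61,  -7],
])

# Per-axis standard deviation of the peak locations (x, y, z):
print(peaks.std(axis=0, ddof=1))
```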
To further explore the consistency between the blind and
sighted populations, we plotted together the peaks of all
individuals from both groups (sighted data from the original
study; Figure 2B). The left panel represents all subjects without a group
tag, demonstrating that the groups cannot be distinguished by
simple examination of the peaks’ distribution; the right panel
includes a group tag for each individual. For the purpose of
illustration, we conducted a k-means clustering analysis,
which in our case was designed to partition n = 24 observa-
tions (16 sighted, 8 blind) into k = 2 clusters, so as to minimize
the within-cluster sum of squares. Both clusters show
a mixture of peaks of both blind and sighted individuals
rather than a distinct anatomical cluster for each population
(Figure 2C). To statistically test the contribution of the group
factor to the variance, we conducted a multivariate analysis
of variance (MANOVA) with three dependent variables, one
for each axis. The populations were statistically indistin-
guishable in the y (p > 0.1, F < 3) and z (p > 0.05, F < 3.5)
axes, whereas in the x axis there was a quantitatively small
effect (4 mm difference) that clearly reached significance (p <
0.005, F > 9). Note that the difference between the average
peak (based on the single-subject peaks) of the two groups
was very small (less than 2 functional voxels in all axes;
4 mm in the x and z; 3 mm in the y). Note that both the k-means
and the MANOVA yielded this very small difference between
the blind and sighted individuals’ VWFA locations, in spite of
additional external factors that are likely to increase the vari-
ance between the groups (e.g., the use of different scanners).
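The clustering step can be sketched with a plain Lloyd's-algorithm k-means on the 24 three-dimensional peaks. The coordinates below are simulated around the reported group means — they are not the study's data — and the final loop simply tabulates how the two spatial clusters mix the two populations:

```python
import numpy as np

def kmeans(points, k=2, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternately assign each point to its
    nearest centre and move each centre to its cluster mean."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

rng = np.random.default_rng(42)
blind   = rng.normal([-38, -60, -8], 3.5, size=(8, 3))   # simulated blind peaks
sighted = rng.normal([-42, -57, -6], 4.5, size=(16, 3))  # simulated sighted peaks
pts = np.vstack([blind, sighted])

labels, centres = kmeans(pts, k=2)
group = np.array([0] * 8 + [1] * 16)   # 0 = blind, 1 = sighted
for j in (0, 1):
    # counts of (blind, sighted) members falling in spatial cluster j
    print(j, np.bincount(group[labels == j], minlength=2))
```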
Finally, another characteristic of the sighteds’ VWFA is its
anatomical invariance to reading across the left and right
visual fields. In line with this, we found consistent left-lat-
eralized VWFA activation in all blind subjects, even though
they read Braille using different hands (see Table S1).
Functional Selectivity of the VWFA to Braille Reading
Previous studies in the blind showed that the entire visual
cortex, peaking in V1, is taken over by language-related
semantic functions, e.g., verb generation (VG), a task
that entails understanding a heard noun word and covertly
generating a corresponding verb. Therefore, as a supple-
mental analysis, we studied whether the activation in the
VWFA ROI during Braille reading was significantly larger than
during VG, each relative to its low-level control condition
(nonsense Braille and verb generation control [VGc], respec-
tively). Namely, we tested the interaction between sensory
modality (written versus heard) and cognitive processing
(perceptual versus language-related). The activation for Braille
words was indeed significantly greater (Figure 2D), suggesting that
the VWFA is specific to reading and that its activation during
Braille reading cannot be reduced to modality-independent
general language processing.
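The interaction contrast itself — (Braille words − nonsense Braille) − (VG − VGc), tested against zero across subjects — can be illustrated on per-subject ROI betas; all numbers below are invented:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject betas (n = 8) in the VWFA ROI; invented values.
bw  = np.array([1.1, 0.9, 1.4, 1.2, 1.0, 1.3, 0.8, 1.2])  # Braille words
nb  = np.array([0.2, 0.3, 0.1, 0.4, 0.0, 0.3, 0.2, 0.1])  # nonsense Braille
vg  = np.array([0.5, 0.4, 0.6, 0.5, 0.3, 0.6, 0.4, 0.5])  # verb generation
vgc = np.array([0.3, 0.2, 0.4, 0.3, 0.2, 0.4, 0.3, 0.3])  # VG control

# Interaction of modality (written vs. heard) by processing level:
interaction = (bw - nb) - (vg - vgc)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"t = {t:.2f}, p = {p:.2g}")
```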
The VWFA as a Metamodal Area
The brain is classically divided into unimodal areas, which
process information from one specific sensory modality, and
higher-order multimodal integration areas (see reviews). The
metamodal theory of brain function suggests that brain areas,
including those commonly considered unimodal (e.g., the
‘‘visual’’ VWFA), are essentially characterized by the represen-
tation or computation that they support or the task that they
perform rather than by their main input sense. Support for
this theory has come from findings showing that an area in
the ventral visual stream, the lateral occipital tactile-visual
area (LOtv), is responsive to objects’ shape regardless of the
input sensory modality and/or visual experience [10, 22–24].
Similarly, the middle occipital gyrus has been shown to
preserve its specialization for spatial processing in the early
blind. Recently, it has been demonstrated that the animate-inanimate organi-
zation of the ventral visual cortex prevails independently of
input modality and visual experience. In the present study,
Figure 1. Visual Word Form Area Is the Peak of
Braille Words Reading Activation across the
Entire Brain of the Congenitally Blind
(A) The parameter estimates of blind subjects’
activations for Braille words and nonsense Braille
conditions, sampled from sighteds’ visual word
form area (VWFA) region of interest (ROI). During
Braille word epochs, subjects were instructed to
covertly read abstract Braille words. In the
nonsense Braille condition, subjects swept their
reading finger over a surface homogeneously
covered by a repeated full six-dot Braille sign
(which is not part of the Braille alphabet) and
were instructed to maintain the same sweep
speed as in reading Braille words. The activation
shows a highly significant preference for Braille
words (p < 10⁻⁷, t = 9.270). Error bars represent
standard error of the mean.
(B and C) Statistical parametric map calculated
using an across-subjects (n = 8) hierarchical random
effects general linear model for the contrast
of Braille words versus nonsense Braille, pre-
sented on an inflated brain (B) and on brain
sections (C). Activation was found in the ‘‘visual’’
ventral stream, with the peak of activation in the
VWFA. We used a statistical threshold criterion
of p < 0.05 corrected for multiple comparisons
across the entire brain (for more details, see
Supplemental Experimental Procedures).
we put the metamodal theory to a critical test. All of our
subjects were congenitally blind and hence had no visual
experience during development or familiarity with visual
reading. Nevertheless, we showed selective activations to
Braille words at the VWFA ROI (Figure 1A; Figure 2D; the
VWFA was actually the most significant area across the entire
brain; see Figures 1B and 1C), high anatomical reproducibility
of the VWFA within and between blind and sighted subjects
(Figures 2A–2C), and left lateralization regardless of the
reading hand (Table S1). Thus, the main functional properties
of the VWFA as identified in the sighted are independent
of the sensory modality of reading, and even more
surprisingly do not require any visual experience. To the best
of our judgment, this provides the strongest support so far
for the metamodal theory. Hence, the VWFA should also be
referred to as the tactile word form area, or more generally
as the (metamodal) word form area.
The metamodality of the VWFA (for simplicity, we maintain
this name) is also compatible with theories of brain function
that emphasize the predictive coding of sensory input. For
instance, it has been shown that the activation of the fusiform face
area, an equivalent of the VWFA specialized for face
perception, depends on subjects’ expectations and on the
mismatch between those expectations and the actual sensory
input, but not on whether stimuli actually depict faces or
houses. Similarly, the VWFA might predict the sensory conse-
quences of words. The metamodality of the VWFA can explain
its ability to apply top-down predictions to both visual and
tactile word forms.
Written-Word Processing Chain along the Ventral Visual Stream
A written-word processing chain can be sketched along the left
ventral occipitotemporal cortex, integrating the current findings with
previous literature. Braille reading has been shown to involve
higher-order regions, probably reflecting a combination of the
various components of reading [13–15, 30]. For instance,
previous studies focused on the functional relevance of the
occipital pole for single-Braille-letter identification or the
recruitment of V1 for somatosensory processing or
used a task that combined Braille reading with higher-order
language processing. However, this is the first study
testing directly the role of the VWFA for word form processing,
showing that Braille words not only significantly activate the
VWFA but peak in the VWFA.
Figure 2. VWFA Is Reading Selective and Shows
Astonishing Anatomical Consistency within and between the Blind and Sighted
(A) Probabilistic map for the contrast of Braille
words versus nonsense Braille (same contrast
as in the group-level analysis in Figure 1), based
on the statistical parametric maps of all individual
blind subjects independently (p < 0.05 cor-
rected). The map shows the overlapping clusters
across a determined percent of subjects. The
most consistently activated voxels (activated in
100% of blind subjects) are in the VWFA.
(B) Plot of individual peak activations, demon-
strating the spatial reproducibility of the VWFA
in blind (current study) and sighted (data from the original
study) subjects. At left, all subjects’ peaks (both
blind and sighted) are represented by blue
squares. Note the overlap of the two groups of
subjects. At right, there is a group tag (blind or
sighted; red circles and green triangles, respec-
tively) for each individual.
(C) k-means clustering of the 24 individual peaks
into k = 2 clusters. Red and green represent blind
and sighted individuals, respectively; squares
and Xs represent the two resulting clusters;
a black star marks the center of each cluster.
Both clusters contain peaks of both blind and
sighted. This analysis and the multivariate anal-
ysis of variance (see Results; for more details,
see Supplemental Experimental Procedures)
further support the anatomical consistency
between blind and sighted.
(D) Parameter estimates of blind subjects’ activa-
tions for Braille words, nonsense Braille, verb
generation (VG), and verb generation control
(VGc) conditions, sampled from sighteds’ VWFA
ROI. In VG, subjects heard a noun and covertly
generated a corresponding verb. In VGc,
subjects heard noise sounds and performed
a one-back task. Each noise matched a noun
from the VG condition in duration, average ampli-
tude, and temporal envelope by multiplying the
epoch’s spectrum by the noun’s temporal envelope. The interaction contrast of (Braille words − nonsense Braille) − (verb generation − verb generation
control) was highly significant, suggesting that the VWFA in the blind is most selective to reading. Error bars represent standard error of the mean.
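The matched-noise construction described in the caption might be implemented along the following lines; this is an assumption-laden sketch (Hilbert-transform envelope, RMS matching) rather than the paper's actual stimulus code, and the "noun" here is just an amplitude-modulated tone:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_matched_noise(speech, seed=0):
    """White noise with the same duration, temporal envelope, and RMS
    amplitude as `speech` (a 1-D waveform)."""
    env = np.abs(hilbert(speech))                  # instantaneous amplitude
    noise = np.random.default_rng(seed).standard_normal(len(speech))
    noise *= env                                   # impose the speech envelope
    noise *= np.sqrt(np.mean(speech**2) / np.mean(noise**2))  # match RMS
    return noise

# Toy "noun": a 220 Hz tone under a Hanning envelope, 0.5 s at 8 kHz.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) * np.hanning(len(t))
noise = envelope_matched_noise(speech)
print(len(noise) == len(speech))
```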
Is the Visual Word Form Area Visual?
Another related study, carried out by Büchel and colleagues,
focused specifically on the semantic component of
reading by contrasting meaningful words versus meaningless
letter strings in both blind and sighted. The peak of activation
in both groups was anterior to the VWFA by about 2 cm
(Büchel’s peak TC −36, −40, −20; blinds’ VWFA peak TC −38,
−60, −8). Interestingly, previous studies in the sighted have
suggested a visual anterior-posterior word processing chain
along the left ventral occipitotemporal cortex, with preference
for semantics anteriorly and word form posteriorly [31–33].
The present findings suggest that both ends of this
processing chain are actually metamodal: the posterior
reading-specific VWFA and the anterior associative semantic
areas. This supports the notion that the same anatomical
organization and reading mechanisms are largely shared by
both blind and sighted populations, further supporting the
metamodal view of the VWFA.
How Does Tactile Information Reach the VWFA,
and How Does VWFA Project Back?
The activation of the VWFA by tactile reading raises two main
questions regarding the routes through which somatosensory
information reaches the VWFA and how the VWFA projects
back to predict or modulate the somatosensory input (see
above). The related connectivity literature in humans is sparse
and not decisive. Previous studies suggest at least three
potential bottom-up pathways:
(1) A thalamocortical pathway, involving rerouting of the
information between thalamic nuclei, as in the blind
mole rat.
(2) Corticocortical connections between somatosensory
cortex and V1, as supported by recent primate studies
[7, 35–37]. Some of these connections, which generate
multisensory responses in the ‘‘unisensory’’ primary
sensory cortex, might exist in the normally developed
brain [7, 35] and could be enhanced or unmasked in blind
individuals [9, 38]. In the blind, the constant flow of
tactile information might strengthen such cross-
modal connections using Hebbian mechanisms.
According to these two bottom-up options, tactile informa-
tion would be relayed in V1 before being processed in the
VWFA. One may speculate that V1 computes simple geomet-
rical features of Braille letters, comparable to its role in pro-
cessing line orientation and edge detection during vision.
This is supported by studies showing causal involvement of
V1 in single-letter identification of Braille signs [14, 38]. From
V1, information might continue to flow in the ventral ‘‘visual’’
stream up to the VWFA.
(3) The third bottom-up option is direct corticocortical
connections between high-order somatosensory areas
and VWFA. If such connections exist, they would be
comparable to the connections reported for metamodal
shape processing between the intraparietal sulcus and
the LOtv.
Regarding the backward-predictive or modulatory projec-
tions from the VWFA on the sensory input, there is less relevant
direct evidence, so one can only speculate. However, it is clear
that feedback projections are abundant and impor-
tant in the primate brain. For instance, there are 10–20 times
more feedback projections from primary visual cortex to the
lower-level visual thalamus (V1 to lateral geniculate nucleus)
than there are corresponding feed-forward bottom-up connec-
tions. This is true also for higher-order ‘‘visual’’ areas (e.g.,
between V4 and V1). Feedback projections have also been
demonstrated anatomically between visual areas and somato-
sensory cortex. Similarly, a recent study in humans has
shown bidirectional functional connectivity between LOtv in
the ventral stream and somatosensory areas. It is clear
that additional anatomical and functional connectivity studies,
as well as time-resolved techniques, are needed to establish
which of the above routes and mechanisms actually prevail.
Implications for the Origin of the VWFA
Reading is a recent invention (visual reading was invented
about 5400 years ago, and Braille has been in use for less
than 200 years), too short a period to have exerted selective
pressure for the evolution, in the biological sense, of a dedi-
cated brain module. Thus, reading relies on existing brain
structures and functions. Several hypotheses have been put
forward to account for the inherent biases predisposing the
VWFA to be consistently recruited for reading.
One possibility is that these biases are visual in nature. The
area that eventually harbors the VWFA would originally
perform a specific type of computation particularly suitable
to the encoding of written material. Such computation might
be based on viewpoint-invariant line junctions, shape features
that are particularly useful for reading. Another possible
visual bias includes a preference of the lateral fusiform cortex
for foveal rather than peripheral stimuli, because words
are read in the center of the visual field. However, the metamo-
dal nature of the VWFA, demonstrated here, runs counter to
any such purely visual-based hypotheses.
Another possibility is that the VWFA performs a general
language function. However, we found a highly significant
preference for Braille words over VG (relative to their controls;
Figure 2D) in the VWFA in the blind. Furthermore, the tight and
reproducible anatomical localization of the VWFA during
Braille reading (group general linear model, Figures 1B and
1C; probabilistic map, Figure 2A; single subjects’ peaks, Fig-
ure 2B) is in contrast with the widespread activation found
for language-related tasks in the blind’s visual cortex
[20, 38]. This pattern rather suggests a robust and specific
involvement of the VWFA in reading.
A third explanation is that the VWFA binds ‘‘simple features
into more elaborate shape descriptions’’ and then links
these descriptions to higher-order stimulus properties such
as their associated sound and meaning. This might be accom-
plished thanks to its particularly direct connection to perisyl-
vian language areas compared to other parts of the ventral
visual stream. This view is the most compatible with our
results after some adaptation to the metamodal framework:
in the case of Braille readers, this function would not be exclu-
sively limited to vision but could also include the tactile
modality. This is in line with the more general claim of the
distributed domain-specific hypothesis [26, 47, 48], according
to which domain-specific organization within a given region is
determined not only by the characteristics of its processing
but also by the spatial pattern of anatomical and functional
connectivity. Such connectivity determines how information
in that region relates to salient information that is computed
elsewhere. In our case, despite the integration of sensory
information from the tactile rather than visual modality, the
functional connectivity of the VWFA to language areas still
dictates development toward processing the same object
domain (reading). However, the VWFA does not necessarily
extract information from words in a classical bottom-up
manner. An alternative possibility, which also relies on its
connections to both sensory and language areas, is that it
predicts the tactile or visual form of stimuli that have linguistic
content, as described above. Such predictions would benefit
from the proximity of the VWFA to language areas, which
would generate top-down priors on the basis of semantic
knowledge.
In conclusion, we propose that the VWFA is a multisensory
integration area that possibly binds simple features into
more elaborate shape descriptions. Its specific anatomical
location and its strong connectivity to language areas enable
it to bridge high-level perceptual word representation and
language-related components of reading. It is therefore the
most suitable region to be taken over during reading acquisi-
tion, even when reading is acquired via touch without prior
visual experience.
Supplemental Information
Supplemental Information includes one table and Supplemental
Experimental Procedures and can be found with this article online.
We wish to thank S. Dehaene and E. Zohary for invaluable input to the work
presented in the manuscript. L.R. is supported by the Samuel and Lottie Ru-
din Foundation. M.S. and A.A. are supported by the International Human
Frontier Science Program Organization (HFSPO). A.A.’s research is also
supported by the Israel Science Foundation (grant number 1530/08); a Euro-
pean Union Marie Curie International Reintegration Grant (MIRG-CT-2007-
205357); the Edmond and Lily Safra Center for Brain Sciences; and the
Alon, Sieratzki, and Moscona funds.
Received: November 17, 2010
Revised: January 7, 2011
Accepted: January 14, 2011
Published online: February 17, 2011
1. McCandliss, B.D., Cohen, L., and Dehaene, S. (2003). The visual word
form area: Expertise for reading in the fusiform gyrus. Trends Cogn.
Sci. (Regul. Ed.) 7, 293–299.
2. Shaywitz, S.E., and Shaywitz, B.A. (2008). Paying attention to reading:
The neurobiology of reading and dyslexia. Dev. Psychopathol. 20,
3. Dehaene, S., Pegado, F., Braga, L.W., Ventura, P., Nunes Filho, G.,
Jobert, A., Dehaene-Lambertz, G., Kolinsky, R., Morais, J., and
Cohen, L. (2010). How learning to read changes the cortical networks
for vision and language. Science 330, 1359–1364.
4. Bolger, D.J., Perfetti, C.A., and Schneider, W. (2005). Cross-cultural
effect on the brain revisited: Universal structures plus writing system
variation. Hum. Brain Mapp. 25, 92–104.
5. Qiao, E., Vinckier, F., Szwed, M., Naccache, L., Valabrègue, R.,
Dehaene, S., and Cohen, L. (2010). Unconsciously deciphering hand-
writing: Subliminal invariance for handwritten words in the visual word
form area. Neuroimage 49, 1786–1799.
6. Cohen, L., Lehéricy, S., Chochon, F., Lemer, C., Rivaud, S., and
Dehaene, S. (2002). Language-specific tuning of visual cortex?
Functional properties of the visual word form area. Brain 125, 1054–
7. Noppeney, U. (2007). The effects of visual deprivation on functional and
structural organization of the human brain. Neurosci. Biobehav. Rev. 31,
8. Amedi, A., Merabet, L.B., Bermpohl, F., and Pascual-Leone, A. (2005).
The occipital cortex in the blind: Lessons about plasticity and vision.
Curr. Dir. Psychol. Sci. 14, 306–311.
9. Pascual-Leone, A., and Hamilton, R. (2001). The metamodal organiza-
tion of the brain. Prog. Brain Res. 134, 427–445.
10. Amedi, A., Raz, N., Azulay, H., Malach, R., and Zohary, E. (2010). Cortical
activity during tactile exploration of objects in blind and sighted
humans. Restor. Neurol. Neurosci. 28, 143–156.
11. Cohen, L., Dehaene, S., Naccache, L., Lehéricy, S., Dehaene-Lambertz,
G., Hénaff, M.A., and Michel, F. (2000). The visual word form area:
Spatial and temporal characterization of an initial stage of reading in
normal subjects and posterior split-brain patients. Brain 123, 291–307.
12. Talairach, J., and Tournoux, P. (1988). Co-planar Stereotaxic Atlas of the
Human Brain (New York: Thieme).
13. Sadato, N., Pascual-Leone, A., Grafman, J., Ibañez, V., Deiber, M.P.,
Dold, G., and Hallett, M. (1996). Activation of the primary visual cortex
by Braille reading in blind subjects. Nature 380, 526–528.
14. Cohen, L.G., Celnik, P., Pascual-Leone, A., Corwell, B., Falz, L.,
Dambrosia, J., Honda, M., Sadato, N., Gerloff, C., Catalá, M.D., and
Hallett, M. (1997). Functional relevance of cross-modal plasticity in blind
humans. Nature 389, 180–183.
15. Büchel, C., Price, C., and Friston, K. (1998). A multimodal language
region in the ventral visual pathway. Nature 394, 274–277.
16. Cohen, L., Jobert, A., Le Bihan, D., and Dehaene, S. (2004). Distinct
unimodal and multimodal regions for word processing in the left
temporal cortex. Neuroimage 23, 1256–1270.
17. MacKay, D.J.C. (2003). Information Theory, Inference, and Learning
Algorithms (Cambridge: Cambridge University Press).
18. Hair, J.F., Tatham, R.L., Anderson, R.E., and Black, W. (1998).
Multivariate Data Analysis, Fifth Edition (New York: Prentice Hall).
19. Noppeney, U., Friston, K.J., and Price, C.J. (2003). Effects of visual
deprivation on the organization of the semantic system. Brain 126, 1620–1627.
20. Burton, H., Diamond, J.B., and McDermott, K.B. (2003). Dissociating
cortical regions activated by semantic and phonological tasks: A
fMRI study in blind and sighted people. J. Neurophysiol. 90, 1965–1982.
21. Calvert, G.A., and Thesen, T. (2004). Multisensory integration:
Methodological approaches and emerging principles in the human
brain. J. Physiol. Paris 98, 191–205.
22. Amedi, A., Malach, R., Hendler, T., Peled, S., and Zohary, E. (2001).
Visuo-haptic object-related activation in the ventral visual pathway.
Nat. Neurosci. 4, 324–330.
23. Lacey, S., Tal, N., Amedi, A., and Sathian, K. (2009). A putative model of
multisensory object representation. Brain Topogr. 21, 269–274.
24. James, T.W., James, K.H., Humphrey, G.K., and Goodale, M.A. (2006).
Do visual and tactile object representations share the same neural
substrate? In Touch and Blindness: Psychology and Neuroscience,
M.A. Heller and S. Ballesteros, eds. (Mahwah, NJ: Lawrence Erlbaum
Associates), pp. 139–155.
25. Renier, L.A., Anurova, I., De Volder, A.G., Carlson, S., VanMeter, J., and
Rauschecker, J.P. (2010). Preserved functional specialization for spatial
processing in the middle occipital gyrus of the early blind. Neuron 68, 138–148.
26. Mahon, B.Z., Anzellotti, S., Schwarzbach, J., Zampini, M., and
Caramazza, A. (2009). Category-specific organization in the human
brain does not require visual experience. Neuron 63, 397–405.
27. Ma, W.J., and Pouget, A. (2008). Linking neurons to behavior in multi-
sensory perception: A computational review. Brain Res. 1242, 4–12.
28. Friston, K. (2003). Learning and inference in the brain. Neural Netw. 16, 1325–1352.
29. Egner, T., Monti, J.M., and Summerfield, C. (2010). Expectation and
surprise determine neural population responses in the ventral visual
stream. J. Neurosci. 30, 16601–16608.
30. Burton, H. (2003). Visual cortex activity in early and late blind people.
J. Neurosci. 23, 4005–4011.
31. Moore, C.J., and Price, C.J. (1999). Three distinct ventral occipitotem-
poral regions for reading and object naming. Neuroimage 10, 181–192.
32. Vinckier, F., Dehaene, S., Jobert, A., Dubus, J.P., Sigman, M., and
Cohen, L. (2007). Hierarchical coding of letter strings in the ventral
stream: Dissecting the inner organization of the visual word-form
system. Neuron 55, 143–156.
33. Binder, J.R., Medler, D.A., Westbury, C.F., Liebenthal, E., and
Buchanan, L. (2006). Tuning of the human left fusiform gyrus to sublex-
ical orthographic structure. Neuroimage 33, 739–748.
34. Bronchti, G., Heil, P., Sadka, R., Hess, A., Scheich, H., and Wollberg, Z.
(2002). Auditory activation of “visual” cortical areas in the blind mole rat
(Spalax ehrenbergi). Eur. J. Neurosci. 16, 311–329.
Is the Visual Word Form Area Visual?
35. Kayser, C., and Logothetis, N.K. (2007). Do early sensory cortices inte-
grate cross-modal information? Brain Struct. Funct. 212, 121–132.
36. Fujii, T., Tanabe, H.C., Kochiyama, T., and Sadato, N. (2009). An investigation
of cross-modal plasticity of effective connectivity in the blind
by dynamic causal modeling of functional MRI data. Neurosci. Res. 65, 175–186.
37. Bavelier, D., and Neville, H.J. (2002). Cross-modal plasticity: Where and
how? Nat. Rev. Neurosci. 3, 443–452.
38. Pascual-Leone, A., Amedi, A., Fregni, F., and Merabet, L.B. (2005). The
plastic human brain cortex. Annu. Rev. Neurosci. 28, 377–401.
39. Deshpande, G., Hu, X., Stilla, R., and Sathian, K. (2008). Effective
connectivity during haptic perception: A study using Granger causality
analysis of functional magnetic resonance imaging data. Neuroimage 40, 1807–1814.
40. Salin, P.A., and Bullier, J. (1995). Corticocortical connections in the
visual system: Structure and function. Physiol. Rev. 75, 107–154.
41. Felleman, D.J., and Van Essen, D.C. (1991). Distributed hierarchical
processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47.
42. Szwed, M., Cohen, L., Qiao, E., and Dehaene, S. (2009). The role of
invariant line junctions in object and visual word recognition. Vision
Res. 49, 718–725.
43. Hasson, U., Levy, I., Behrmann, M., Hendler, T., and Malach, R. (2002).
Eccentricity bias as an organizing principle for human high-order object
areas. Neuron 34, 479–490.
44. Price, C.J., and Devlin, J.T. (2003). The myth of the visual word form
area. Neuroimage 19, 473–481.
45. Starrfelt, R., and Gerlach, C. (2007). The visual what for area: Words and
pictures in the left fusiform gyrus. Neuroimage 35, 334–342.
46. van der Mark, S., Klaver, P., Bucher, K., Maurer, U., Schulz, E., Brem, S.,
Martin, E., and Brandeis, D. (2011). The left occipitotemporal system in
reading: Disruption of focal fMRI connectivity to left inferior frontal and
inferior parietal language areas in children with dyslexia. Neuroimage
47. Mahon, B.Z., and Caramazza, A. (2009). Concepts and categories: A
cognitive neuropsychological perspective. Annu. Rev. Psychol. 60, 27–51.
48. Peelen, M.V., and Caramazza, A. (2010). What body parts reveal about
the organization of the brain. Neuron 68, 331–333.
49. Friston, K.J., Holmes, A.P., Price, C.J., Büchel, C., and Worsley, K.J.
(1999). Multisubject fMRI studies and conjunction analyses.
Neuroimage 10, 385–396.
Current Biology Vol 21 No 5