Downloaded By: [University of Aberdeen] At: 11:00 19 May 2007
RESEARCH PAPER
Are vertical meridian effects due to audio-visual interference? A new
confirmation with deaf subjects
MARTA OLIVETTI BELARDINELLI¹,², VALERIO SANTANGELO¹, FABIANO BOTTA¹ & STEFANO FEDERICI²,³
¹Department of Psychology, University of Rome ‘‘La Sapienza’’, Rome, ²ECONA, Interuniversity Center for Research on Cognitive Processing in Natural and Artificial Systems, Rome, and ³Department of Human and Educational Sciences, University of Perugia, Perugia, Italy
Accepted July 2006
Abstract
Purpose. Specific increases of reaction times (RTs) were found in normal subjects, when endogenous spatial cues and
targets were separated by the vertical visual meridian (VM) or by the vertical auditory (AM) meridian, when targets were
either visual or auditory. The aim of this study was to assess if this effect could be attributed to longer RTs needed to shift
activation between the hemispheres, or rather to different spatial maps underlying visual and auditory attention.
Method. We tested the VM effect in deaf subjects. If the shifting of activation from one hemisphere to the other causes the
increase in RTs, then no differences between normal and sensory disabled people should take place, as the incoming
perceptual information in the residual modality uses the same neural pathways while crossing the vertical meridian.
Conversely, if the vertical meridian effects are related to the spatial representation systems underlying endogenous orienting
mechanisms, then the lack of the auditory perceptual system in deaf people may have determined different organization
processes in the brain circuits, strongly affecting the orienting mechanisms of spatial attention.
Results. Compared with a control group of hearing subjects, we found no evidence of the VM effect in deaf subjects.
Conclusions. This finding, jointly with those of a previous experiment which showed no AM effect on blind subjects (Olivetti
Belardinelli & Santangelo 2005) supports the idea of different spatial maps underlying visual and auditory attention, and
suggests that their co-existence may induce interference effects in space processing, giving rise to the anisotropic
representation of visual and auditory spaces, observed in normal subjects.
Keywords: Endogenous orienting, deaf subjects, visual meridian effect, visual and auditory attention, separated-but-linked
hypothesis
Introduction
Differences in the spatial representation of the visual
field have been a topic of interest in cognitive
neuroscience for many years. In particular, the study
of visual field asymmetries (e.g., differences in
processing visual stimuli coming from the right vs.
left hemifield) has proved to be an efficient way to
shed light upon the functional organization of the
human brain. Several studies have shown an
asymmetrical representation of space in relation to
the vertical (either visual or auditory) meridians [1,2].
The vertical visual meridian (VM), which refers
to the position of eyes, can be defined as an
imaginary axis lying on fixation. The vertical auditory
meridian (AM), which refers to the position of the
ears, can be defined as an imaginary axis aligned to
the centre of the head (see Figure 1). A dissociation
between the VM and the AM has been recently
demonstrated [3] as concerns endogenous orienting
mechanisms of attention (i.e., when you voluntarily
direct your attentional resources over a source of
interest; in comparison with exogenous orienting
mechanisms of attention, i.e., when abrupt stimuli
reflexively capture your attention, e.g., [4]). In the
case of auditory targets, reaction times (RTs) were
slower on trials in which cues and targets were
located on the opposite sides of the AM, rather than
Correspondence: Marta Olivetti Belardinelli, Department of Psychology, University of Rome ‘‘La Sapienza’’, via dei Marsi 78, 00185 Roma, Italy.
E-mail: marta.olivetti@uniroma1.it
Disability and Rehabilitation, May 2007; 29(10): 797–804
ISSN 0963-8288 print/ISSN 1464-5165 online © 2007 Informa UK Ltd.
DOI: 10.1080/09638280600919780
when cues and targ ets were positioned opposite to
the VM, or were not separated by any meridian. The
AM effect for auditory stimuli was observed when
targets were cued by either endogenous visual or
endogenous auditory cues (i.e., digits from 1 to 4
that indicated the most probable location of the
incoming targets). Similarly, participants showed a
VM effect when they had to respond to visual targets,
preceded by endogenous visual cues. These results
seem to suggest that visual and auditory orienting are
subserved by different neural systems, at least as far
as the endogenous orienting of spatial attention is
concerned. Given that the typical patterns of cross-
modal validity effects were found (i.e., crossmodal
audio-visual links), the results were interpreted as
being compatible with the ‘‘separated-but-linked’’
hypothesis [5]. At the same time, these findings
provided further basic specifications of that model, in
the sense that ‘‘the ‘separable’ part of the hypothesis
might consist of the use of different spatial maps by
the two attention systems, with different and specific
features’’ [3, p. 961].
To provide convincing evidence for this explana-
tion, the following alternative hypothesis should be
excluded. According to this hypothesis, vertical
meridian effects might be attributed to longer RTs,
needed to shift activation from the hemisphere where
the cue is delivered to the one where the target is
delivered, when cue and target are separated by
either a vertical visual or auditory meridian. If this is
the case, the vertical meridian effects should be found
also in subjects deprived of one of the two senses here
considered, i.e., vision or hearing. In the present study,
we maintained this rationale in order to assess
endogenous orienting of spatial attention in people
with only one of the two sensory modalities of space
processing (i.e., blind or deaf people). If the shifting of
activation from one hemisphere to the other causes the
increase in RTs, then no differences between normal
and sensory disabled people should take place.
Consequently, the VM effect in deaf and the AM
effect in blind people should be detected, as the
incoming perceptual information in the residual
modality uses the same neural pathways, from the
peripheral to the central nervous system, while cross-
ing the vertical meridian. Conversely, if the vertical
meridian effects are related to the spatial representa-
tion systems underlying endogenous orienting mech-
anisms, then the lack of either the visual (in blind
people) or the auditory (in deaf people) perceptual
system may have determined different organization
processes in the brain circuits, strongly affecting the
orienting mechanisms of spatial attention. In particu-
lar, neither a VM effect in deaf people nor an AM
effect in blind people should be expected, since in both
cases only one spatial representation system does exist.
Figure 1. Sensory interference between visual and auditory modalities: (a) superimposition of visual and auditory fields in normal people in
case of subjects staring to the right (incongruence between visual and auditory meridians); (b) superimposition of visual and auditory fields in
normal people in case of subjects staring straight ahead (congruence between visual and auditory meridians); (c) superimposition of visual
and auditory fields in normal people in case of subjects staring to the left (incongruence between visual and auditory meridians); (d) visual
field in blind people; (e) auditory field in deaf people.
798 M. Olivetti Belardinelli et al.
The AM effect in blind subjects was recently
investigated [6] using auditory cues and auditory
targets. The AM effect was not observed in blind
people, and this was interpreted as a first evidence of
different spatial maps, underlying endogenous or-
ienting of spatial attention. Given that the main
difference between normal and blind subjects is the
absence of visual functions in the latter group, the
authors argued that the existence of both AM and
VM effects in normal subjects could be due to the
co-existence of the visual and auditory modalities,
resulting in interference effects. More specifically, two
different types of sensory interference between
auditory and visual modalities might be taken into
account.
The former consists of a functional interference and
it is strictly connected to the meridian effects. Visual
and auditory meridians can be considered as the axis
of symmetry of visual and auditory sensory systems,
respectively. Usually there is a tendency of the two
systems to act synergically in space (see Figure 1b).
Gaze shifting from left to right is usually accom-
panied by analogous movements of the head in that
direction, and vice versa. Thus, the two meridians
often overlap, even though this overlap should not
be taken for granted. Consider situations in which
one gazes in the absence of head movements, or
conversely, when the head moves while one keeps
staring at the same point or object in the surrounding
environment. The possibility of the meridians'
separation leaves open the question of whether the
interference between visual and auditory systems
could originate from the potential or actual (depend-
ing on the specific interaction with the environment)
asymmetry of the two axes. The separation and the
lack of overlap result in an asymmetrical representa-
tion of space in the area interposed between the two
axes, where the incoming stimuli can be coded
starting from different spatial codes and frames. In
fact, this area corresponds to one visual hemi-field
(e.g., right; see Figure 1a) and to the opposite
auditory hemi-field (e.g., left; see Figure 1c). The
attempt to establish where that area is located may
produce a sensory conflict, i.e., ‘‘Is it in the right or
in the left hemi-space?’ Given that the above-
mentioned area lies between the meridians, the
general slowing down of RTs observed when the
visual or the auditory meridian is crossed may
result from conflicting information coming from the
two systems. Obviously, this situation does not occur
in sensory disabled people who only employ a single
spatial map (see Figure 1d, 1e).
The latter type of interference is structural, and
related to cerebral plasticity and to neural reorgani-
zation processes, which in sensory-impaired subjects
may account for different mechanisms of allocation
and dislocation of the attentional focus in space, as
reported by a large number of behavioral, ERPs and
fMRI studies. As concerns deaf people, evidence of
different patterns of neural activation in tasks
involving visuo-spatial attention was found in com-
parison to hearing subjects [7–13]. In particular,
evidence was reported of 5 to 6 times higher occipital
activation (hyperactivation) in the medial temporal
(MT)/medial-superior temporal (MST) cortex of the
left hemisphere, as well as an increase of posterior
parietal cortex sensibility [10,11]. Crucially, in deaf
people compared to hearing people, an enhanced
recruitment of the posterior superior temporal sulcus
(STS) was found [11]. For the first time this finding
established that this polymodal area is modified
following early sensory deprivation. As suggested by
Neville and Lawson [8,9], since deaf people may
not have acquired verbal speech or auditory language
skills, but sign language (i.e., an inherently spatial
and visuo-motor language), those regions of the left
hemisphere which would normally mediate verbal
functions, may play a key role in visual-spatial
processing. The existence of cerebral plasticity and
neural reorganization processes appears particularly
true for congenitally deaf subjects, as those in our
experimental group, in which the development of the
cognitive system is associated from birth with the
absence of auditory functions. Similarly, as far as
blind people are concerned, an extensive literature
has shown compensation effects and functional
reorganization, occurring when blindness is acquired
at early stages of development [14–19]. In particu-
lar, plasticity changes in blind people appear to
indicate progressive recruitment of parietal and
occipital regions, providing evidence for cross-modal
sensory reorganization produced by the lack of vision
[14,19]. Thus, deafferented posterior visual areas in
blind individuals seem to be recruited to carry out
auditory [16] as well as other somatosensory func-
tions [20].
To sum up, the investigation of deaf subjects
analogous to that already performed on blind
subjects (with visual cues and visual targets) should
not reveal a VM effect, unlike normal subjects, who
are expected to show one. This would indicate that
(i) vertical meridian effects
cannot be attributed to longer RTs needed to shift
activation between the two hemispheres, but rather
to the existence of different visual and auditory
spatial maps used by the two attention systems; and
(ii) the existence of both AM and VM effects in
normal subjects could be due to the co-existence of
the visual and auditory modalities, inducing inter-
ference effects. This study aims to examine these
predictions.
More generally, the present study aims to show
functional (or structural) reorganization processes in
disabled people. As a consequence, our data might
Attentional orienting in sensory deprived subjects 799
crucially strengthen the view that cognitive processes
in sensory disabled people are not a consequence of
any kind of compensatory dysfunction, but rather of
a different functional normality. From a rehabilitative
(or practical) perspective, this result would be helpful
in order to prevail over the (never really overcome)
medical model of disability, according to which
disabled people are just bodies or brains to be fixed.
Method
Participants. A total of 18 congenitally deaf subjects
and 18 control subjects voluntarily participated in the
experiment. All participants were naive as to the
purposes of the study and reported normal or
corrected-to-normal vision. Control subjects re-
ported normal hearing as well. Two deaf subjects
(one because of too many eye movements and the
other because of too many errors, i.e., >40%) and
one control subject (because of too many eye
movements) were excluded from the analysis. The
final sample consisted of 16 experimental partici-
pants (8 males, mean age 19.8, range 18–22) and 17
control group participants (8 males, mean age 23.2,
range 20–26).
Stimuli. Stimuli were presented on a 15-inch
computer monitor, placed about 50 cm from the
participant. The fixation point was a small cross
(0.6° × 0.6°), presented at the geometrical center of
the screen. Each trial began with the presentation at
the centre of the monitor of an informative cue: a
digit from 1 to 4, indicating four consecutive
locations in the fronta l space. The cues indicated
the possible location of the incoming targets: the
digit ‘‘1’’, corresponding to the leftmost location and
the digit ‘‘4’’, corresponding to the rightmost
location. Each cue was followed by a target,
consisting of a black rectangle 0.6° × 0.8° in size.
Target stimuli could appear in four possible loca-
tions, 7° apart (center to center), arranged in a row
along the horizontal meridian, two on the left and
two on the right of the geometrical center on the
screen (see Figure 2). Participants were informed
about the possible locations of the target stimuli, but
a visual demarcation of locations was not provided
during the experiment.
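All stimulus sizes above are given in degrees of visual angle, so their physical extent on the screen follows from the viewing distance by simple trigonometry. The sketch below is our own illustrative helper (the function name `visual_angle_to_cm` is not from the original study), assuming a stimulus centred on the line of sight at the roughly 50 cm viewing distance reported above:

```python
import math

def visual_angle_to_cm(angle_deg: float, viewing_distance_cm: float = 50.0) -> float:
    """Convert a visual angle (degrees) to on-screen size (cm),
    assuming the stimulus is centred on the line of sight."""
    return 2 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2)

# The 7-degree centre-to-centre spacing between target locations
# corresponds to roughly 6.1 cm on screen at 50 cm viewing distance.
print(round(visual_angle_to_cm(7), 1))
```

At this distance the small fixation cross occupies only about half a centimetre, which is why precise chin-rest alignment (described in the Procedure below) matters.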
Procedure. Participants were required to maintain
their gaze on the fixation point, while their head
position was held aligned to the center of the screen
by an adjustable chin rest. Eye movements were
monitored by the experimenter, who was positioned
45° in front of the participant. A number of eye
movements led to a warning at the end of each block.
Two warnings led to the exclusion of the participant
from the following statistical analysis. After a mean
delay of 1000 ms (range 800–1200 ms), the visual
target was presented for 100 ms at one of the four
locations. Participants were required to press a button
on a response device, placed 25 cm in front of them,
by using the index finger of both hands, as quickly
and accurately as possible in response to each target,
regardless of its location. They were also informed
that the cue suggested the most likely location of the
incoming target. Therefore the best strategy to
achieve a fast response was to allocate their attention
to the cued location. The cue remained on the screen
until a response was given or for 1500 ms after target
offset, if no response occurred. The visual cue
indicated the location of the incoming target correctly
(valid trials) in 75% of the trials, while on the
remaining trials it indicated one of three different
locations with respect to the incoming target (invalid
trials), with the same probability of occurrence.
Invalid trials could be either cross (when cue and
target were separated by the VM) or no-cross (when
cue and target occurred within the same visual
hemifield). The experiment comprised six blocks of
160 trials each. Trials were randomized within each
block according to the percentages of validity. The
order of block presentation was balanced. Partici-
pants were given a practice block of trials at the
beginning of the experiment and were allowed to rest
for a few minutes between experimental blocks.
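The cueing scheme just described can be sketched as follows. This is a minimal Python illustration, not the authors' actual experiment code; it assumes, as in the design above, that the vertical meridian separates locations 1–2 from locations 3–4, and that invalid targets fall uniformly on the three uncued locations:

```python
import random

# Locations 1-2 lie left of the vertical meridian, 3-4 lie right of it.
def classify_trial(cue: int, target: int) -> str:
    """Label a cue-target pair as valid, cross, or no-cross."""
    if cue == target:
        return "valid"
    same_hemifield = (cue <= 2) == (target <= 2)
    return "no-cross" if same_hemifield else "cross"

def make_block(n_trials: int = 160, p_valid: float = 0.75, seed: int = 0):
    """Generate (cue, target) pairs for one block; invalid targets
    are drawn uniformly from the three uncued locations."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cue = rng.randint(1, 4)
        if rng.random() < p_valid:
            target = cue
        else:
            target = rng.choice([loc for loc in range(1, 5) if loc != cue])
        trials.append((cue, target))
    return trials
```

For example, `classify_trial(2, 3)` is a cross trial (cue and target straddle the meridian), while `classify_trial(1, 2)` is no-cross.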
Design. Two conditions were produced by the
between-subject factor group (deaf and control
Figure 2. Schematic representation of (A) the display and (B) the experimental procedure used in the present study. See the text for details.
subjects). Three conditions were produced by the
within-subjects factor type of trial (valid, cross and
no-cross). Mean reaction times and percentages of
correct responses were computed for each experi-
mental condition, after removing trials on which the
response occurred less than 100 ms (premature
response) or more than 800 ms (missing response)
after target onset.
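The trimming rule described above can be sketched as a small helper. Names and data layout are our own (the original analysis software is not specified); the cut-offs match those stated in the Design section:

```python
def condition_means(trials, rt_min=100.0, rt_max=800.0):
    """Mean RT (ms) per condition after discarding premature (<100 ms)
    and missing (>800 ms) responses.
    `trials` is a list of (condition, rt_ms) pairs."""
    kept = {}
    for condition, rt in trials:
        if rt_min <= rt <= rt_max:  # keep only responses inside the window
            kept.setdefault(condition, []).append(rt)
    return {c: sum(rts) / len(rts) for c, rts in kept.items()}

data = [("valid", 250.0), ("valid", 270.0), ("cross", 90.0), ("cross", 310.0)]
print(condition_means(data))  # the 90 ms trial is dropped as premature
```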
Results
Errors produced during the experiment were <5%
across conditions, and were not further analysed.
The 2 × 3 ANOVA on reaction times with the factors
group (deaf and control) and type of trial (valid,
cross, no-cross) revealed a significant main effect for
group [F(1, 31) = 10.145, p = 0.003], with slower
RTs for deaf subjects (a mean of 315 ms) than for
controls (278 ms). A significant main effect was also
found for the type of trial [F(2, 62) = 19.436,
p = 0.001], with faster RTs for valid trials (280 ms)
than for no-cross ones (299 ms), which in turn were
faster than cross trials (310 ms). Crucially, the
interaction group × type of trial was also signifi-
cant [F(2, 62) = 3.366, p = 0.041]. As revealed by
post-hoc comparisons (Duncan’s test), the control
group RTs were significantly slower in case of cross
trials than in case of no-cross trials (p = 0.003;
visual meridian effect), which in turn were slower
than the valid trials (p = 0.009). Conversely in the
deaf group, RTs on cross trials did not statistically
differ from no-cross trials (p = 0.898; no visual
meridian effect); both cross and no-cross trials were
different (slower RTs: endogenous orienting effect)
from valid trials (p = 0.007 and p = 0.006, respec-
tively; see Table I).
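The meridian effect reported here is simply the cross minus no-cross RT difference. Using the group means from Table I, this can be checked with a few lines of arithmetic (our own illustrative snippet, with the values hard-coded from the table):

```python
# Mean RTs (ms) per condition, taken from Table I.
means = {
    "deaf":    {"valid": 301.56, "cross": 320.88, "no_cross": 321.76},
    "control": {"valid": 257.98, "cross": 298.11, "no_cross": 276.43},
}

def meridian_effect(group: str) -> float:
    """Cross minus no-cross mean RT; a positive value indicates a VM cost."""
    g = means[group]
    return round(g["cross"] - g["no_cross"], 2)

print(meridian_effect("control"))  # clearly positive: VM effect present
print(meridian_effect("deaf"))     # near zero: no VM effect
```

The control group shows a cost of about 22 ms for crossing the meridian, while the deaf group's difference is under 1 ms, mirroring the significant and non-significant post-hoc comparisons above.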
Discussion
Two different spatial maps
The results clearly indicated endogenous orienting
effects in both deaf and control subjects. Indeed,
performance on valid trials was significantly better
than performance on invalid (both cross and no-
cross) ones. Crucially, results showed a VM effect in
control subjects but not in the deaf subjects. This is
not compatible with the hypothesis that longer RTs
are due to an activation shift from one hemisphere to
the other, when cues and targets are delivered on
the opposite side of the vertical meridian. Rather the
present findings appear to be compatible with the
hypothesis of different spatial maps, underlying
visual and auditory attention. Crucially, if the RTs
increase is determined by the shift of activation from
one hemisphere to the other, there is no reason why
such an increase should take place only in normal
and not in sensory disabled people. Indeed, the
incoming perceptual information in the residual
modality should use the same neural pathways,
from the peripheral to the central nervous system,
while crossing the vertical meridian. Our results
appear rather to point out different mechanisms of
endogenous orienting, related to the vertical mer-
idian, in the deaf when compared with hearing
people.
This result agrees with an analogous result recently
reported by Colmenero et al. [13] relating to
exogenous orienting mechanisms. Colmenero et al.
found significant RT differences comparing deaf and
normal subjects in a reflexive spatial attention task.
Obviously, given the difference between the mechan-
isms under investigation (endogenous vs. exogen-
ous), no further comparison between the two studies
can be achieved. However, it is worth noting for the
following discussion that the general functioning of
the spatial attention system (both reflexive and
voluntary components) appears to be substantially
different in the case of sensory disability.
Integrating data on deaf and blind subjects
The central aim of this study was to assess the
existence of the VM effect in deaf subjects. Results
indicated an absence of this effect, while a typical
endogenous orienting effect was found. Taking into
account the analogous result obtained with blind
subjects, i.e., the absence of an AM effect [6], this
result appears to be consistent with an underlying
(either structural or functional) interference between
different auditory and visual spatial representations.
Indeed, when as a consequence of a sensory
disability affecting either auditory or visual processes,
a single spatial representation remains intact, the
conflict between auditory and visual spatial frames
does not appear.
In other words, we argue that the interaction
between visual and auditory systems may play a key
role in determining vertical meridian effects. In fact,
the co-existence of both visual and auditory systems,
processing different kinds of spatial information, may
be the cause of a sensory interference, determining
two distinct meridian effects.
Returning to the two different types of interfer-
ence, that is the structural and the functional one
Table I. Mean reaction times (ms) and SE according to type of
trial in deaf and control subjects.
Valid Cross No-Cross
Deaf 301.56 ± 46.73 320.88 ± 44.93 321.76 ± 40.76
Control 257.98 ± 30.92 298.11 ± 25.97 276.43 ± 30.24
mentioned above in the Introduction, it is
worth noting that both of them fit with the empirical
framework resulting from our experiments. As
concerns the functional interference, it may have its
origins in the incomplete overlap of different (visual
or auditory) spatial representations, subserving visual
and auditory endogenous orienting mechanisms of
attention, since in some cases (but not in sensory
disabilities, as in blindness or deafness, where only a
single modality spatial coding remains) the overlap
may be conflicting.
As concerns the structural interference view, the
absence of the VM effect in deaf people and of the
AM effect in blind peop le might originate, instead,
from the lack of structural interference, related to the
loss of auditory or visual functions, respectively.
Many indicators of an alternative structural organi-
zation of visuo-spatial attention, produced by neural
plasticity phenomena, were found in deaf and blind
subjects. Crucially, given that posterior (parietal and
occipital) areas have been associated in normal
subjects with crucial spatial attention functions, like
the engagement of attentional focus in the visual field
[21–24], these changes in deaf and blind people
(i.e., their enhanced activation when compared with
not sensory disabled people) may account for an
increase in resources available to allocate attention in
space. Therefore, the interference effect between
visual and auditory modalities might be absent,
because the areas normally involved with acoustic
or visual elaboration are recruited for visual (in deaf)
or auditory (in blind) functions.
Although further research is necessary to provide
definitive evidence for either the structural or the
functional view (or an integration of the two), the
interference effects between audio-visual spatial
maps seem to be an intriguing way to shed light on
endogenous orienting mechanisms in both sensory
disabled and not disabled people. This investigation
supports the separated-but-linked model, confirming
the existence of connected visual and auditory spatial
attention modules, using distinct spatial maps and
codes for the endogenous orienting of attention.
Although we used quite comparable paradigms on
blind and deaf subjects, we are conscious that from a
purely methodological point of view a comparison
between the results obtained in totally different
studies from two groups with different disabilities
may be highly speculative. Notwithstanding, we
think that the theoretical framework here outlined
can crucially contribute to the debate on spatial
attention.
A new perspective on disability
We would like to dedicate some words to the
impact of the general result emerging from our
experiments on the contemporary understanding
of disability. According to the definition of disability
achieved by the 54th World Health Assembly [25],
disability should be considered more in terms of a
modality of individual functioning than in terms of
lack of abilities, since disability is an infinitely various
but universal feature of the human condition [26].
This implies a revolutionary approach which, hope-
fully, can substantially contribute to the improve-
ment of quality of life of disabled people.
Our results support such a new perspective on
disability. Indeed, with the present study, we are
contributing, together with a wide literature [7 20],
to show different mechanisms and processes at work
in sensory disabled persons. Crucially, a sensory
disabled person cannot be regarded as a whole
entity merely missing some parts. The lack of one or
more functions, indeed, has important consequences
for the remaining structures, generally producing in
sensory-challenged individuals deep reorganization
processes across large brain areas, which not only
compensate for their handicap, but often bring them
to develop a specific human functioning in the other
sensory systems.
Although recent research is just starting to
comprehend how large and complex such reorgani-
zation processes are, it stands to reason that it is no
longer possible to use any kind of reductionism for
the study of disability.
Consequences for rehabilitation
The results of our research confirm the disability
model adopted in the present work, while discon-
firming all rehabilitative models that exclusively
focus on the individual’s deficit and consider the
aid of new technologies such as a ‘‘technical fix for
broken bodies’’ [27]. Such rehabilitative models,
ascribable to a medical model of disability and
advocated by able-bodied experts, tend to conceive
all rehabilitative intervention as oriented to ‘‘normal-
ize’’ individuals, and restore functioning and struc-
tures jeopardized or compromised by impairment.
These models underestimate both personal factors of
disabled people and environmental ones.
From a historical point of view, able-bodied people
have always expected disabled people to be ‘normal’,
considering themselves as the ‘right’ parameter of
normality. As a consequence, deaf people have been
deprived of their own sign language and forced to
learn lip-reading, at great psychological cost. Analo-
gously,
. . . before the Second World War children with ‘low
vision’ who attended the so-called ‘sight-saving schools’,
were prevented from using their sight to the extent of
being forced to wear harnesses that prevented them from
leaning forward to read or write [. . .] and those in
schools for blind children had paper bags put over their
heads to stop them from looking at braille [. . .].
Children with limited sight automatically tend to use
their vision to the full, and it is now known that
preventing them from doing so has an adverse effect on
their later visual functioning. A delicate balance should
be struck. It is very common, when considering the
cognitive functioning of disabled people and their
coping strategies, to assume that the problems lie within
themselves and that the social and physical world is fixed
[28, p. 30].
Our research, investigating some features of the
complex neural plasticity phenomenon (related to
the endogenous orienting of spatial attention), has
pointed out the extraordinary functional capacity of
the so-called ‘‘residuals’’ to achieve a functional
normality. Moreover, it contributes to show that the
consequences of human self-rehabilitative abilities
determine such specific mental features that cannot
be simply considered in terms of compensative or
augmentative functions [29].
This is clearly in line with the dynamic model of
auto-organization, according to which the cognitive
system optimizes its processes in order to take the
best advantage of its capabilities while minimizing
the effort needed to achieve a specific goal. In
deaf and blind people, a difference in abilities,
behaviors and cognitive processing, which causes a
personal and specific construction of reality, emerges
from a neural reorganization. Subsequently, what we
observe in the behavior of deaf and blind people is
not a compensa tory dysfunction, but a different
functional normality, which only in a reductive way
may be labelled as augmentative or compensative
(with respect to seeing and hearing people).
As a consequence, we suggest that all rehabilitative and technological models should consider the individual in his or her complexity. Specifically, we are thinking of multidimensional models that support participation as a goal of rehabilitation [29]. Indeed, this is the only way to ensure that subject diversity is understood as a richness of perspectives, social participation, and empowerment of personal abilities. The importance of educational science emerges robustly within this framework [30]. Education should indeed focus on the actual resources of disabled people, showing them how many methods exist for creating new (and efficient) ways of interacting with the external environment in social contexts.
References
1. Carrasco M, Talgar CP, Cameron EL. Characterizing visual performance fields: Effects of transient covert attention, spatial frequency, eccentricity, task and set size. Spat Vis 2001;15:61–75.
2. Talgar CP, Carrasco M. Vertical meridian asymmetry in spatial resolution: Visual and attentional factors. Psychon Bull Rev 2002;9:714–722.
3. Ferlazzo F, Couyoumdjian A, Padovani T, Olivetti Belardinelli M. Head-centered meridian effect on auditory spatial attention orienting. Q J Exp Psychol 2002;55:937–963.
4. Santangelo V, Olivetti Belardinelli M, Spence C. The suppression of reflexive visual and auditory orienting when attention is otherwise engaged. J Exp Psychol Human, in press.
5. Spence C, Driver J. Audiovisual links in endogenous covert spatial attention. J Exp Psychol Human 1996;22:1005–1030.
6. Olivetti Belardinelli M, Santangelo V. The head-centered meridian effect: Auditory attention orienting in conditions of impaired visual spatial information. Disabil Rehabil 2005;27:761–768.
7. Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. I. Normal hearing adults. Brain Res 1987a;405:253–267.
8. Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. II. Congenitally deaf adults. Brain Res 1987b;405:268–283.
9. Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task. III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Res 1987c;405:284–294.
10. Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, et al. Visual attention to the periphery is enhanced in congenitally deaf individuals. J Neurosci 2000;20:1–6.
11. Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H, Liu G. Impact of early deafness and exposure to sign language on cerebral organization for motion processing. J Neurosci 2001;21:8931–8942.
12. Neville HJ, Bavelier D. Human brain plasticity: Evidence from sensory deprivation and altered language experience. Prog Brain Res 2002;138:177–188.
13. Colmenero JM, Catena A, Fuentes LJ, Ramos MM. Mechanisms of visuospatial orienting in deafness. Eur J Cogn Psychol 2004;16:791–805.
14. Hummel F, Gerloff C, Leonard GC. Cross-modal plasticity and deafferentation. Cogn Process 2004;5:152–158.
15. Röder B, Rösler F, Neville HJ. Event-related potentials during auditory language processing in congenitally blind and sighted people. Neuropsychologia 2000;38:1482–1502.
16. Leclerc C, Saint-Amour D, Lavoie ME, Lassonde M, Lepore F. Brain function reorganization in early blind humans revealed by auditory event-related potentials. Neuroreport 2000;11:545–550.
17. Röder B, Rösler F, Neville HJ. Effects of interstimulus interval on auditory event-related potentials in congenitally blind and normally sighted humans. Neurosci Lett 1999;264:53–56.
18. Röder B, Teder-Sälejärvi W, Sterr A, Rösler F, Hillyard SA, Neville HJ. Improved auditory spatial tuning in blind humans. Nature 1999;400:162–165.
19. Liotti M, Ryder K, Woldorff MG. Auditory attention in the congenitally blind: Where, when and what gets reorganized? Neuroreport 1998;9:1007–1012.
20. Röder B, Rösler F, Henninghausen E, Näcker F. Event-related potentials during auditory and somatosensory discrimination in sighted and blind human subjects. Cognitive Brain Res 1996;4:77–93.
21. Nobre AC, Sebestyen GN, Gitelman DR, Mesulam MM, Frackowiak RSJ, Frith CD. Functional localization of the system for visuo-spatial attention using positron emission tomography. Brain 1997;120:515–533.
22. Corbetta M, Kincade JM, Ollinger JM, McAvoy MP, Shulman GL. Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nat Neurosci 2000;3:292–297.
23. Downar J, Crawley AP, Mikulis DJ, Davis KD. A multimodal cortical network for the detection of changes in the sensory environment. Nat Neurosci 2000;3:277–283.
24. Hopfinger JB, Buonocore MH, Mangun GR. The neural mechanisms of top-down attentional control. Nat Neurosci 2000;3:284–291.
25. WHO. International classification of functioning, disability and health. Geneva: World Health Organization; 2001.
26. Zola IK. Toward the necessary universalizing of a disability policy. Milbank Q 1989;67:401–428.
27. Roulstone A. Researching a disabling society: The case of employment and new technology. In: Shakespeare T, editor. The disability reader: Social science perspectives. London: Cassell; 1998. pp 110–128.
28. Finkelstein V, French S. Towards a psychology of disability. In: Swain J, Finkelstein V, French S, Oliver M, editors. Disabling barriers – Enabling environments. London: Sage; 1993. pp 26–33.
29. van den Heuvel WJA. Rehabilitation: Interdisciplinarity in practice. In: Craddock GM, McCormack LP, Reilly RB, Knops HTP, editors. Assistive technology – shaping the future. Amsterdam: IOS Press; 2003. pp 71–75.
30. Battro AM. Metà cervello è abbastanza. La neuroscienza di un bambino senza emisfero destro. Trento: Erickson; 2002.