Jordi Navarra
  • PhD
  • Professor (Associate) at University of Barcelona

About

Publications: 58
Reads: 14,245
Citations: 2,368
Current institution
University of Barcelona
Current position
  • Professor (Associate)
Additional affiliations
December 2008 - present
Parc Sanitari Sant Joan de Déu
Position
  • Principal Investigator
September 2006 - September 2008
University of Oxford
Position
  • Postdoctoral Researcher
January 2001 - December 2005
University of Barcelona
Position
  • PhD Student

Publications (58)
Preprint
Full-text available
Crossmodal correspondences between auditory pitch and spatial elevation have been demonstrated extensively in adults. High- and low-pitched sounds tend to be mapped onto upper and lower spatial positions, respectively. We hypothesised that this crossmodal link could be influenced by the development of spatial and linguistic abilities during childho...
Article
Full-text available
The brain is able to extract associative regularities between stimuli and readjust the perceived timing of correlated sensory signals. In order to elucidate whether these two mechanisms interact with each other or not, we exposed participants to two different visual stimuli (a circle and a triangle) that appeared continuously and unpredictably, for...
Article
Since current neuropsychological assessments are not sensitive to subtle deficits that may be present in cognitively normal subjects with amyloid-β positivity, more accurate and efficient measures are needed. Our aim was to investigate the presence of subtle motor deficits in this population and their relationship with cerebrospinal fluid (CSF) a...
Article
Full-text available
Although the perceptual association between verticality and pitch has been widely studied, the link between loudness and verticality is not fully understood yet. While loud and quiet sounds are assumed to be equally associated crossmodally with spatial elevation, there are perceptual differences between the two types of sounds that may suggest the...
Article
In this article (submitted as a Proposal), we investigate the use of ice-cream as a potentially effective vehicle for the delivery of nutrition/energy to the elderly in hospital and in old people's facilities. Currently, there appears to be a general belief that ice-cream is an unhealthy product and so is not commonly considered in this capacity. T...
Poster
Full-text available
Chemotherapy treatments often induce flavour disorders (e.g., dysgeusia) and loss of appetite. This study explored the effects of multisensory modifications through meal presentations and gamification during the food intake of child and adolescent cancer patients undergoing chemotherapy. Meal trays with a fast food appearance, including separately...
Article
Full-text available
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on...
Poster
Full-text available
In the present study, we examined the electrophysiological correlates of the crossmodal correspondence between pitch and spatial elevation, aiming to explore how automatic this correspondence is. We expected to find some event-related potentials (ERPs) that would be sensitive to an incongruence between a spatial position and a sound of a certain...
Article
Full-text available
Higher frequency and louder sounds are associated with higher positions whereas lower frequency and quieter sounds are associated with lower locations. In English, "high" and "low" are used to label pitch, loudness, and spatial verticality. By contrast, different words are preferentially used, in Catalan and Spanish, for pitch (high: "agut/agudo";...
Article
Introduction: Phenylketonuria (PKU) is a rare metabolic disease that causes slight-to-severe neurological symptoms. Slow performance has been observed in PKU but the influence of high-order (i.e., not purely motor) deficits and of temporary variations of the phenylalanine (Phe) level on this slowness has not been fully corroborated as yet. Respons...
Article
Full-text available
High-pitched sounds generate larger neural responses than low-pitched sounds. We investigated whether this neural difference has implications, at a cognitive level, for the "vertical" representation of pitch. Participants performed a speeded detection of visual targets that could appear at one of four different spatial positions. Rising or falling...
Article
The nonverbal learning disability (NLD) is a neurological dysfunction that affects cognitive functions predominantly related to the right hemisphere such as spatial and abstract reasoning. Previous evidence in healthy adults suggests that acoustic pitch (i.e., the relative difference in frequency between sounds) is, under certain conditions, encode...
Poster
Full-text available
Over the past decades, numerous experimental studies have demonstrated the existence of crossmodal correspondences between certain acoustic features (e.g. pitch) and visuospatial elevation (e.g. up or down). This phenomenon is particularly observed in the tendency to associate high and low frequency sounds with higher and lower positions in the spa...
Poster
Full-text available
Sounds that are high in pitch and loud in intensity are associated with upper spatial positions. The opposite appears to be true for low and quiet sounds and lower positions in space. In English, the words "high" and "low" define pitch, loudness and spatial elevation. In contrast, in Spanish and Catalan, the words "agudo/agut" and "grave/greu" are us...
Presentation
Numerous studies suggest that the processing of pitch variations (e.g., melodic "ups and downs") can generate vertical spatial representations. However, the conditions that facilitate the emergence of these 'crossmodal' representations and their possible consequences still remain elusive. We collected EEG and behavioural data from healthy participa...
Poster
Full-text available
Little is known about the possible crossmodal association between loudness and verticality. This lack of research contrasts with the extensive use of a direct translation of loudness (e.g., high vs. low sounds) into the vertical plane (low vs. high spatial positions, respectively) in disciplines such as ergonomics, design and sound engineering. We...
Poster
Full-text available
Crossmodal correspondences between acoustic pitch and spatial features (e.g., height) have been demonstrated extensively in adults. High- and low-pitched sounds tend to be mapped into vertical coordinates, up and down, respectively. We have hypothesized that this pitch-height link may be influenced by the development, during childhood, of spatial a...
Article
Individuals with preclinical Alzheimer's disease (Pre-AD) present nonimpaired cognition, as measured by standard neuropsychological tests. However, detecting subtle difficulties in cognitive functions may be necessary for an early diagnosis and intervention. A new computer-based visuomotor coordination task (VMC) was developed to investigate the po...
Presentation
Previous literature has revealed the presence of several crossmodal correspondences between pitch and other dimensions such as spatial elevation, size or colour. For instance, high-pitched tones are associated with high locations in space, small sizes and bright colours. Even though previous studies have suggested that high-pitched sounds generate...
Article
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in auditio...
Poster
Full-text available
We investigated the possible relationship between loudness and spatial elevation. Participants were presented with two auditory stimuli that differed only in terms of intensity (82 dB vs. 56 dB). In an initial learning phase, participants associated each of these two stimuli with two different visual stimuli. In the experimental phase, one of the two...
Poster
Full-text available
The existence of the perceptual correspondence between pitch and size has been confirmed in studies with adults (Bien, ten Oever, Goebel, & Sack, 2012): higher-pitched sounds are judged to correspond to, among other characteristics, smaller visual images than lower-pitched sounds. Are cross-modality correspondences an innate aspect of perception? We exam...
Article
Full-text available
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exp...
Article
Full-text available
Adults as well as infants have the capacity to discriminate languages based on visual speech alone. Here, we investigated whether adults' ability to discriminate languages based on visual speech cues is influenced by the age of language acquisition. Adult participants who had all learned English (as a first or second language) but did not speak Fre...
Article
Full-text available
Classical experiments have shown that sensory neurons carry information about ongoing decisions during random-dot motion discrimination tasks [1,2]. These conclusions are based on the receiver-operating-characteristic (ROC) applied to single-cell recordings, which assumes the presence of a second hypothetical "anti-neuron". Furthermore, ROC analysi...
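
To make the concern above concrete, here is a minimal sketch (hypothetical simulated data and an assumed Gaussian firing-rate model, not the authors' code) of the classical ROC measure the abstract refers to: a cell's responses to its preferred motion direction are compared against its responses to the opposite direction, which stand in for the hypothetical "anti-neuron".

import numpy as np

rng = np.random.default_rng(0)
# Simulated firing rates (spikes/s); distributions and parameters are assumptions.
preferred = rng.normal(12.0, 3.0, size=500)   # trials with motion in the preferred direction
antineuron = rng.normal(9.0, 3.0, size=500)   # same cell, opposite direction (the "anti-neuron")

# ROC area via the Mann-Whitney identity: the probability that a randomly drawn
# preferred-direction rate exceeds a randomly drawn anti-neuron rate.
auc = (preferred[:, None] > antineuron[None, :]).mean()
print(f"ROC area (neuron vs. anti-neuron): {auc:.3f}")  # 0.5 = chance, 1.0 = perfect
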
Poster
Full-text available
Previous studies suggest the existence of facilitatory effects between, for example, responding upwards/downwards while hearing a high/low-pitched tone, respectively (e.g., Rusconi et al., 2006; Occelli, Spence & Zampini, 2009). Neuroimaging research has started to reveal the activation of parietal areas (e.g., the intraparietal sulcus, IPS) during...
Article
The present study explored the effects of short-term experience with audiovisual asynchronous stimuli in 6-month-old infants. Results revealed that, in contrast with adults (usually showing temporal recalibration under similar circumstances), a brief exposure to asynchrony increased infants' perceptual sensitivity to audiovisual synchrony.
Chapter
Full-text available
This chapter considers the contribution of multisensory processes to the development of speech perception. Evidence for matching and integration of audiovisual speech information within the first few months of life suggests an early preparedness for extracting multisensory relations in spoken language. Nonetheless, it is currently not known what re...
Conference Paper
Previous studies suggest the existence of facilitatory effects between, for example, responding upwards/downwards while hearing a high/low-pitched tone, respectively (e.g., Occelli et al., 2009; Rusconi et al., 2006). Neuroimaging research has started to reveal the activation of parietal areas (e.g., the intraparietal sulcus, IPS) during the perfo...
Article
Full-text available
The human brain exhibits a highly adaptive ability to reduce natural asynchronies between visual and auditory signals. Even though this mechanism robustly modulates the subsequent perception of sounds and visual stimuli, it is still unclear how such a temporal realignment is attained. In the present study, we investigated whether or not temporal ad...
Conference Paper
We investigated whether perceiving predictable ‘ups and downs’ in acoustic pitch (as can be heard in musical melodies) can influence the spatial processing of visual stimuli as a consequence of a ‘spatial recoding’ of sound (see Foster and Zatorre, 2010; Rusconi et al., 2006). Event-related potentials (ERPs) were recorded while participants perform...
Article
Studies in adults reveal that a short-term exposure to asynchronous audiovisual signals induces temporal realignment between these signals (Di Luca et al., 2009; Fujisaki et al., 2004; Navarra et al., 2009; Vroomen et al., 2004). In contrast with this evidence in adults, Lewkowicz (2010) observed that infants increased their sensitivity to AV async...
Article
Various physical circumstances (for instance, the fact that light and sound do not travel at the same speed) and/or physiological factors (such as the fact that auditory signals are initially processed more rapidly than visual signals) give rise to small asynchronies between sensory signals pertaining to a specific multisensory event. Considering t...
Article
To what extent does our prior experience with the correspondence between audiovisual stimuli influence how we subsequently bind them? We addressed this question by testing English and Spanish speakers (having little prior experience of Spanish and English, respectively) on a crossmodal simultaneity judgment (SJ) task with English or Spanish spoken...
Article
Currently, one of the most controversial topics in the study of multisensory integration in humans (and in its implementation in the development of new technologies for human communication systems) concerns the question of whether or not attention is needed during (or can modulate) the integration of sensory signals that are presented in differ...
Article
The brain adapts to asynchronous audiovisual signals by reducing the subjective temporal lag between them. However, it is currently unclear which sensory signal (visual or auditory) shifts toward the other. According to the idea that the auditory system codes temporal information more precisely than the visual system, one should expect to find some...
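
As an illustration of how such a shift is typically quantified (a sketch with made-up numbers and an assumed cumulative-Gaussian psychometric model, not data or code from the article), the point of subjective simultaneity (PSS) can be estimated from temporal order judgments collected at several stimulus onset asynchronies (SOAs):

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])   # ms; negative = sound first
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])  # hypothetical proportions

def psychometric(soa, pss, sigma):
    # Probability of a "visual first" response as a function of SOA.
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 100.0])
print(f"PSS = {pss:.1f} ms")  # a PSS shift after adaptation indicates which signal realigned
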
Article
The temporal perception of simple auditory and visual stimuli can be modulated by exposure to asynchronous audiovisual speech. For instance, research using the temporal order judgment (TOJ) task has shown that exposure to temporally misaligned audiovisual speech signals can induce temporal adaptation that will influence the TOJs of other (simpler)...
Article
Full-text available
One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that a...
Article
Full-text available
We investigated the consequences of monitoring an asynchronous audiovisual speech stream on the temporal perception of simultaneously presented vowel-consonant-vowel (VCV) audiovisual speech video clips. Participants made temporal order judgments (TOJs) regarding whether the speech-sound or the visual-speech gesture occurred first, for video clips...
Article
Full-text available
This study shows that 4- and 6-month-old infants can discriminate languages (English from French) just from viewing silently presented articulations. By the age of 8 months, only bilingual (French-English) infants succeed at this task. These findings reveal a surprisingly early preparedness for visual language discrimination and highlight infants'...
Article
Full-text available
The goal of this study was to explore the ability to discriminate languages using the visual correlates of speech (i.e., speech-reading). Participants were presented with silent video clips of an actor pronouncing two sentences (in Catalan and/or Spanish) and were asked to judge whether the sentences were in the same language or in different langua...
Article
Previous research has revealed the existence of perceptual mechanisms that compensate for slight temporal asynchronies between auditory and visual signals. We investigated whether temporal recalibration would also occur between auditory and tactile stimuli. Participants were exposed to streams of brief auditory and tactile stimuli presented in sync...
Article
Full-text available
We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, such as in the case of Spanish native speakers regarding the Catalan sounds /ɛ/ an...
Article
Full-text available
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish). Participan...
Article
We examined whether monitoring asynchronous audiovisual speech induces a general temporal recalibration of auditory and visual sensory processing. Participants monitored a videotape featuring a speaker pronouncing a list of words (Experiments 1 and 3) or a hand playing a musical pattern on a piano (Experiment 2). The auditory and visual channels we...
Article
Full-text available
One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for...
Article
The McGurk effect is usually presented as an example of fast, automatic, multisensory integration. We report a series of experiments designed to directly assess these claims. We used a syllabic version of the speeded classification paradigm, whereby response latencies to the first (target) syllable of spoken word-like stimuli are slowed down when t...
