Article

Brain mechanisms for processing co-speech gesture: A cross-language study of spatial demonstratives

Authors: Stevens & Zhang

Abstract

This electrophysiological study investigated the relationship between language and nonverbal socio-spatial context in the use of demonstratives in speech communication. Adult participants from an English language group and a Japanese language group were asked to make congruency judgments for simultaneous presentations of an audio demonstrative phrase in their native language and a picture that included two human figures as speaker and hearer, as well as a referent object in different spatial arrangements. The demonstratives ("this" and "that" in English; "ko," "so," and "a" in Japanese) were varied across the visual scenes to produce expected and unexpected combinations for referring to an object based on its relative spatial distance to the speaker and hearer. Half of the trials included an accompanying pointing gesture in the picture, and the other half did not. Behavioral data showed robust congruency effects, with longer reaction times for incongruent trials in both subject groups irrespective of the presence or absence of the pointing gesture. Both subject groups also showed a significant N400-like congruency effect in the event-related potential responses for the gesture trials, a finding predicted from previous work (Stevens & Zhang, 2013). In the no-gesture trials, the English data alone showed a P600 congruency effect preceded by a negative deflection. These results provide evidence for shared brain mechanisms for processing demonstrative expression congruency, as well as language-specific neural sensitivity to encoding the co-expressivity of gesture and speech. [See full text at http://zhanglab.wdfiles.com/local--files/publications/Stevens_Zhang_JNL2014.pdf ]


... To improve the signal-to-noise ratio of the data, nine electrode regions were defined for analysis, organized from anterior to posterior and left to right (Figure 3). Similar channel groupings were used in previous studies (Chen et al., 2011; Schneider et al., 2008; Stevens & Zhang, 2014; Zhang et al., 2011). ...
... Results for reaction time demonstrated that participants took longer to respond to incongruent stimuli than congruent stimuli, irrespective of whether the condition was phonetic or prosodic. A similar effect has been observed in previous studies (Kamiyama et al., 2013; Stevens & Zhang, 2014). In the experiment by Kamiyama et al., participants with and without musical experience judged congruent face-music pairs more quickly than incongruent pairs. ...
... Consistent with our hypotheses, we observed both an N400 response and a late positive response to incongruent audiovisual stimuli. Our results were in line with the results of previous studies evaluating responses to various congruent and incongruent stimuli (Kamiyama et al., 2013; Schirmer & Kotz, 2013; Stevens & Zhang, 2014). In the current experiment, the phonetic condition elicited a larger N400 component than the prosodic condition. ...
... The left posterior (LP) region included channel sites P7, P5, P3, PO7, PO3, and O1; the middle posterior (MP) region included P1, PZ, P2, POZ, and OZ; and the right posterior (RP) region included P8, P6, P4, PO8, PO4, and O2. Similar channel groupings were used in previous studies (Chen et al., 2011; Schneider et al., 2008; Stevens & Zhang, 2014; Zhang et al., 2011). Based on visual inspection and evidence from previous literature, two time windows were selected for analysis: an early time window from 250-450 ms (N400 component; Aguado et al., 2013; Kamiyama et al., 2013; Kotz & Paulmann, 2011) and a late time window from 700-1000 ms (late positive response; Chen et al., 2011; Paulmann, Bleichner, & Kotz, 2013; Kotz & Paulmann, 2011). These latencies were measured relative to the onset of the auditory stimulus, which occurred at 400 ms. ...
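The region-and-window analysis described in these excerpts is easy to make concrete. Below is a minimal Python sketch using MNE-Python, assuming epoched data with 10-10 channel names and "congruent"/"incongruent" event labels; the file name, event labels, and epochs object are illustrative placeholders, not materials from the cited studies.

```python
import mne

# Hypothetical epoched data; channel names follow the 10-10 system.
epochs = mne.read_epochs("audiovisual-epo.fif")  # placeholder file name

# Posterior channel groupings as listed in the excerpt above.
regions = {
    "LP": ["P7", "P5", "P3", "PO7", "PO3", "O1"],
    "MP": ["P1", "Pz", "P2", "POz", "Oz"],
    "RP": ["P8", "P6", "P4", "PO8", "PO4", "O2"],
}

# Analysis windows relative to the auditory onset, which occurred
# 400 ms into the trial in the study described above.
audio_onset = 0.400
windows = {"N400": (0.250, 0.450), "late positive": (0.700, 1.000)}

for cond in ("congruent", "incongruent"):  # assumed event labels
    evoked = epochs[cond].average()
    for region, chans in regions.items():
        picks = mne.pick_channels(evoked.ch_names, include=chans)
        for label, (t0, t1) in windows.items():
            # Average over the region's channels and the time window
            # to get one mean amplitude per condition/region/window.
            seg = evoked.copy().crop(audio_onset + t0, audio_onset + t1)
            amp = seg.data[picks].mean() * 1e6  # volts -> microvolts
            print(f"{cond:12s} {region} {label}: {amp:6.2f} µV")
```

Collapsing channels into regions before running statistics is what improves the signal-to-noise ratio: averaging n neighboring electrodes attenuates uncorrelated noise by roughly a factor of sqrt(n) while preserving the shared regional signal.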
Thesis
Full-text available
The present study utilized a cross-modal priming paradigm to investigate dimensional information processing in speech. Primes were facial expressions that varied in two dimensions: affect (happy, neutral, or angry) and mouth shape (corresponding to either /a/ or /i/ vowels). Targets were CVC words that varied by prosody and vowel identity. In both the phonetic and prosodic conditions, adult participants responded to congruence or incongruence of the visual-auditory stimuli. Behavioral results showed a congruency effect in percent correct and reaction time measures. Two ERP responses, the N400 and late positive response, were identified for the effect with systematic between-condition differences. Localization and time-frequency analyses indicated different cortical networks for selective processing of phonetic and emotional information in the words. Overall, the results suggest that cortical processing of phonetic and emotional information involves distinct neural systems, which has important implications for further investigation of language processing deficits in clinical populations.
... This egocentric account is intuitively appealing and still influential (e.g., Diessel, 2014; Stevens and Zhang, 2014). In the current paper, we question this account from both the production and the comprehension side, and discuss recent accumulating observational, experimental, and neuroscientific evidence that suggests an alternative social and multimodal view of demonstrative reference. ...
... Participants' linguistic judgments were in line with the egocentric view of demonstrative reference. However, analysis of their EEGs suggested that they took into account whether speaker and hearer both gazed at the referent or not (Stevens and Zhang, 2013) and whether the speaker produced a pointing gesture to the referent or not (Stevens and Zhang, 2014). Thus, a measure tapping into linguistic intuitions (the judgment task) was found to be in line with the egocentric view whereas a measure reflecting online processing (EEG) found an influence of social factors such as the presence of shared gaze. ...
... Our research question: Will CI children receive additional benefits from the use of co-speech hand movement gestures for lexical tone pitch contours during training? [9,10]. In our study, producing hand gestures to indicate the lexical tone contours serves as a more elaborative encoding strategy that may help perception-action binding to promote speech learning [11]. ...
Article
Prelingually deafened Mandarin-speaking children with cochlear implants (CI) encounter a significant challenge in perceiving lexical tones accurately due to their limited language experience and the insufficient pitch information provided by CI devices. To facilitate their tonal perception, we examined the role of pitch gestures in a training program, which had been reported to be beneficial for foreign learners of Chinese in acquiring Mandarin lexical tones. In the current study, 18 prelingually deafened preschoolers with CI were recruited in Shanghai. They were randomly assigned to two groups. The experimental group was trained with audio and pitch gestures, and the control group was trained with audio only. Three tone-identification tests were conducted before, in the middle of, and after the eight training sessions. Although the two groups performed identically before the training, the experimental group demonstrated significantly better performance than the control group after the training sessions, especially in noise conditions. The results thus showed that multimodal training with pitch gestures improved the tone recognition ability of CI children more than auditory training alone. Our findings offer further evidence that learning to perceive lexical tones can be facilitated by multimodal cues and provide important implications for optimizing rehabilitation training after implantation.
... It is perhaps not surprising that the relative location of a referent may influence demonstrative form, as the speaker often has to identify the location of a referent anyway when deciding to produce a pointing gesture to guide the addressee's visual attention in a desired direction. This idea suggests that demonstrative form may vary as a function of whether the speaker includes a pointing gesture in their multimodal referential utterance or not, which is confirmed by recent observations (Bohnemeyer, 2018; Brown & Levinson, 2018; Cooperrider, 2016; Cutfield, 2018; Margetts, 2018; Meira, 2018; Stevens & Zhang, 2014; Terrill, 2018; Wilkins, 2018). Hence, it may be the case that the same factor (e.g., the relative location of the referent) simultaneously influences whether a speaker produces a pointing gesture or not, and which specific demonstrative form they will use (cf. ...
Article
Full-text available
Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, here we introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker's pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
... The activation of each brain region was obtained by averaging the temporal activities of the electrodes within that region. Grouping of electrodes can also improve the signal-to-noise ratio (Diamond & Zhang, 2016; Zhang et al., 2011) and has been widely used in EEG studies (Chen et al., 2011; Diamond & Zhang, 2016; Elchlepp et al., 2016; Giertuga et al., 2017; Martinovic et al., 2014; Stevens & Zhang, 2014; Zhang et al., 2011). These nine collapsed brain regions were used in all further analyses instead of the 61 individual channels. ...
Article
This study examined hypnotizability-related modulation of the cortical network following expected and unexpected nociceptive stimulation. The electroencephalogram (EEG) was recorded in 9 high (highs) and 8 low (lows) hypnotizable participants receiving nociceptive stimulation with (W1) and without (noW) a visual warning preceding the stimulation by 1 second. W1 and noW were compared to baseline conditions to assess the presence of any later effect, and to each other to assess the effects of expectation. The studied EEG variables measured local and global features of cortical connectivity. With respect to lows, highs exhibited few differences between experimental conditions. The hypnotizability-related differences in the later processing of nociceptive information could be relevant to the development of pain-related individual traits. The present findings suggest a lower impact of nociceptive stimulation in highs than in lows.
... It is perhaps not surprising that the relative location of a referent may influence demonstrative form, as the speaker often has to identify the location of a referent anyway when deciding to produce a pointing gesture to guide the addressee's visual attention in a desired direction. This idea suggests that demonstrative form may vary as a function of whether the speaker includes a pointing gesture in their multimodal referential utterance or not, which is confirmed by recent observations (Bohnemeyer, 2018; Brown & Levinson, 2018; Cooperrider, 2016; Cutfield, 2018; Margetts, 2018; Meira, 2018; Stevens & Zhang, 2014; Terrill, 2018; Wilkins, 2018). Rather than the presence or absence of a concurrent pointing gesture being a factor influencing the speaker's choice of demonstrative form, it may be the case that similar factors (e.g., the relative location of the referent) simultaneously influence whether a speaker produces a pointing gesture or not, and which specific demonstrative form they will use (cf. ...
Preprint
Full-text available
Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, we here introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker’s pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
... For instance, enriched speech input (e.g., formant exaggeration), which heightens cortical activation in the infant brain (Zhang et al., 2011), has been found to be effective in reshaping the neural circuitry in the adult brain to promote second language learning (Zhang et al., 2009). (4) NLNC relies on the perception-production link to facilitate speech communication, and this process takes place by integrating the multimodal information in a structured context of social interaction (Imada et al., 2006; Kuhl, 2007; Ferjan Ramirez and Kuhl, 2017; Stevens and Zhang, 2014). ...
Article
A current topic in auditory neurophysiology is how brainstem sensory coding contributes to higher-level perceptual, linguistic and cognitive skills. This cross-language study was designed to compare frequency following responses (FFRs) for lexical tones in tonal (Mandarin Chinese) and non-tonal (English) language users and test the correlational strength between FFRs and behavior as a function of language experience. The behavioral measures were obtained in the Garner paradigm to assess how lexical tones might interfere with vowel category and duration judgement. The FFR results replicated previous findings about between-group differences, showing enhanced pitch tracking responses in the Chinese subjects. The behavioral data from the two subject groups showed that lexical tone variation in the vowel stimuli significantly interfered with vowel identification with a greater effect in the Chinese group. Moreover, the FFRs for lexical tone contours were significantly correlated with the behavioral interference only in the Chinese group. This pattern of language-specific association between speech perception and brainstem-level neural phase-locking of linguistic pitch information provides evidence for a possible native language neural commitment at the subcortical level, highlighting the role of experience-dependent brainstem tuning in influencing subsequent linguistic processing in the adult brain.
... A cognitive neuroscience approach may also be employed in the future to investigate whether different brain mechanisms are engaged by English and Chinese speakers when they engage in co-speech gesture processing. This is particularly relevant in light of recent research revealing language-specific co-speech gesture processing in the brain (e.g., Özyürek et al., 2007; Stevens and Zhang, 2014). ...
Article
Full-text available
Spatial metaphors are used to represent and reason about time. Such metaphors are typically arranged along the sagittal axis in most languages. For example, in English, "The future lies ahead of us" and "We look back on our past." This is less straightforward for Chinese. Specifically, both the past and the future can be either behind or ahead. The present study aims to explore these cross-linguistic differences by priming auditory targets (e.g., tomorrow) with either a congruent (i.e., pointing forwards) or incongruent (i.e., pointing backwards) gesture. Two groups of college-age young adult participants (English and Chinese speakers) made temporal classifications of words after watching a gestural prime. If speakers represent time along the sagittal axis, they should respond faster when the auditory target is preceded by a gesture indicating a congruent vs. incongruent spatial location. Results showed that English speakers responded faster to congruent gesture-word pairs than to incongruent pairs, mirroring the spatio-temporal metaphors commonly recruited to talk about time in their native language. However, no such congruency effect was found for Chinese speakers. These findings suggest that while the spatio-temporal metaphors commonly recruited to talk about time help to structure the mental timelines of English speakers, the varying ways in which time is represented along the sagittal axis in Chinese may lead to a more variable mental timeline. In addition, our findings demonstrate that gestures may not only be a means of accessing concrete concepts in the mind, as shown in previous studies, but may be used to access abstract ones as well.
... (c) Neural commitment is subject to continual shaping and reshaping by experience. Enriched exposure (including high stimulus variability and talker variability, exaggerated speech, and audiovisual training) not only provides enhanced stimulation to the infant brain but also can induce substantial plasticity in the adult brain for second language learning, producing hemispheric reallocation of resources for enhanced phonetic sensitivity and more efficient linguistic processing (Zhang et al., 2000). (d) Neural commitment involves the binding of perception and action systems to facilitate speech communication, and this process depends on social/affective learning early in life (Imada et al., 2006; Kuhl, 2007; Stevens and Zhang, 2014). These claims are consistent with the developmental framework that views language acquisition as an adaptive computational process to extract the abstract speech categories and higher-order linguistic structures. ...
Article
Full-text available
The present study investigated how syllable structure differences between the first language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, places of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1-L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins forces with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native-language interference and refined theoretical models to specify the links between perception and production in second language acquisition. (Open Access Web Link at: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01801/full)
Article
Aims and Objectives This study explored the extent to which bilingual language exposure and practice might alter the way in which bilingual first-generation adult speakers use deictic demonstratives in their first language (Spanish) after immersion in a new language environment (Norwegian). Fully developed L1 systems are expected to be stable and less susceptible to change or restructuring than child systems. In addition, core domains of a language such as deictic demonstrative reference are hypothesized to be more robust. Design Participants were tested with the Spanish version of the memory game. They completed an ethnolinguistic background questionnaire with questions targeting demographic data, experience with language, and daily routines in language use. Data and analyses Demonstrative use was analysed using binomial multilevel modelling, allowing residual variance to be partitioned into a between-participant component and a within-participant component. Findings Results demonstrate a shift in the demonstrative system of Spanish native speakers who have resided in Norway for a median of 6.5 years. This shift is reflected in extensive use of the semantically underspecified item ese at the expense of the form aquel. The latter form is less frequent and highly context-dependent in corpora of the modern language. It can be hypothesized that first-generation speakers are faster in converging on a simplified system of deictic reference than the native speaker group tested in Spain, but this development parallels tendencies observed in the monolingual variety of the language. This faster shift may well be influenced and catalysed by bilingual language practice. Originality This article addresses a gap in research on deictic terms under conditions of language attrition. It documents a restructuring of the deictic system in first-generation speakers of Spanish residing in another country. The results suggest that marking peri-personal space is a core feature of deictic systems across languages, also preserved under deictic system shift.
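For readers unfamiliar with the analysis named in this abstract, the sketch below shows what a binomial multilevel model of demonstrative choice can look like in Python with statsmodels; the data file, column names, and predictors are hypothetical stand-ins, not the study's actual variables. The per-participant random intercept is what partitions residual variance into between- and within-participant components.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical trial-level data: one row per referential choice.
# Assumed columns: 'ese' (1 = "ese" used, 0 = "aquel" used),
# 'group' (Norway vs. Spain), 'distance', and 'participant'.
df = pd.read_csv("memory_game_trials.csv")  # placeholder file name

model = BinomialBayesMixedGLM.from_formula(
    "ese ~ group + distance",                           # fixed effects
    vc_formulas={"participant": "0 + C(participant)"},  # random intercepts
    data=df,
)
fit = model.fit_vb()  # variational Bayes estimation
print(fit.summary())
```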
Article
In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers' use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
Article
Full-text available
Languages around the world differ in terms of the number of adnominal and pronominal demonstratives they require, as well as the factors that impact on their felicitous use. Given this cross-linguistic variation in deictic demonstrative terms, and the features that determine their felicitous use, an open question is how this is accommodated within bilingual cognition and language. In particular, we were interested in the extent to which bilingual language exposure and practice might alter the way in which a bilingual uses deictic demonstratives in their first language. Recent research on language attrition suggests that L2 learning selectively affects aspects of the native language, with some domains of language competence being more vulnerable than others. If demonstratives are basic, and acquired relatively early, they should be less susceptible to change and attrition. This was the hypothesis we went on to test in the current study. We tested two groups of native Spanish speakers, a control group living in Spain and an experimental group living in Norway, using the (Spatial) Memory game paradigm. Contrary to our expectations, the results indicate a significant difference between the two groups in the use of deictic terms, indicative of a change in the preferred number of terms used. This suggests that deictic referential systems may change over time under pressure from bilingual language exposure.
Article
Full-text available
Most of the research done on spatial demonstratives (words such as this, here and that, there) has focused on the production, not the interpretation, of these words. In addition, emphasis has largely been on demonstrative pronouns, leaving demonstrative adverbs with relatively little research attention. The present study explores the interpretation of both demonstrative pronouns and demonstrative adverbs in Estonian—a Finno-Ugric language with two dialect-specific demonstrative pronoun systems. In the South-Estonian (SE) dialectal region, two demonstrative pronouns, see—"this" and too—"that", are used. In the North-Estonian (NE) region, only one, see—"this/that", is used. The aim of this study is twofold. First, we test whether the distance and the visual salience of a referent have an effect on the interpretation of demonstratives. Second, we explore whether there is a difference in the interpretation of demonstratives between native speakers from SE and NE. We used an interpretation experiment with 30 participants per group (total n = 60) and compared the SE and NE group responses. The results clearly show that the distance of the referent has an effect on how demonstratives are interpreted across the two groups, while the effect of visual salience is inconclusive. There is also a difference in the interpretation of demonstratives between the two dialectal groups. When using Estonian influenced by the SE dialect, the NE speakers rely on demonstrative adverbs in interpreting referential utterances that include demonstrative pronoun and adverb combinations, whereas the SE speakers also take into account the semantics of the demonstrative pronouns. We show that, in addition to the already known difference in production, there is also a difference in the interpretation of demonstratives between the two groups. In addition, our findings support the recognition that languages with distance-neutral demonstrative pronouns reinforce the spatial meaning of a referring utterance by adding demonstrative adverbs. Not only is the interpretation of demonstrative pronouns affected, but the interpretation of demonstrative adverbs as well. The latter shows the importance of studying adverbs as well, not just pronouns, and contributes to further knowledge of how demonstratives function.
Article
Full-text available
Language and Theory of Mind come together in communication, but their relationship has been intensely contested. I hypothesize that pragmatic markers connect language and Theory of Mind and enable their co-development and co-evolution through a positive feedback loop, whereby the development of one skill boosts the development of the other. I propose to test this hypothesis by investigating two types of pragmatic markers: demonstratives (e.g., ‘this’ vs. ‘that’ in English) and articles (e.g., ‘a’ vs. ‘the’). Pragmatic markers are closed-class words that encode non-representational information that is unavailable to consciousness, but accessed automatically in processing. These markers have been associated with implicit Theory of Mind because they are used to establish joint attention (e.g., ‘I prefer that one’) and mark shared knowledge (e.g., ‘We bought the house’ vs. ‘We bought a house’). Here I develop a theoretical account of how joint attention (as driven by the use of demonstratives) is the basis for children’s later tracking of common ground (as marked by definite articles). The developmental path from joint attention to common ground parallels language change, with demonstrative forms giving rise to definite articles. This parallel opens the possibility of modelling the emergence of Theory of Mind in human development in tandem with its routinization across language communities and generations of speakers. I therefore propose that, in order to understand the relationship between language and Theory of Mind, we should study pragmatics at three parallel timescales: during language acquisition, language use, and language change.
Article
Full-text available
Previous studies have shown that young children often fail to comprehend demonstratives correctly when they are uttered by a speaker whose perspective is different from the children's own, and instead tend to interpret them with respect to their own perspective (e.g., Webb and Abrahamson in J Child Lang 3(3):349–367, 1976; Clark and Sengul in J Child Lang 5(3):457–475, 1978). In the current study, we examined children's comprehension of demonstratives in English (this and that) and Mandarin Chinese (zhe and na) in order to test the hypothesis that children's non-adult-like demonstrative comprehension is related to their still-developing non-linguistic cognitive abilities supporting perspective-taking, including Theory of Mind and Executive Function. Testing 3- to 6-year-old children on a set of demonstrative comprehension tasks and assessments of Theory of Mind and Executive Function, our findings revealed that children's successful demonstrative comprehension is related to their development of Theory of Mind and Executive Function in both language groups. These findings suggest that the development of deictic expressions like demonstratives may be related to the development of non-linguistic cognitive abilities, regardless of the language that the children are acquiring.
Article
Full-text available
Spatial demonstratives - terms including this and that - are among the most common words across all languages. Yet, there are considerable differences between languages in how demonstratives carve up space and the object characteristics they can refer to, challenging the idea that the mapping between spatial demonstratives and the vision and action systems is universal. In seven experiments we show direct parallels between spatial demonstrative usage in English and (non-linguistic) memory for object location, indicating close connections between the language of space and non-linguistic spatial representation. Spatial demonstrative choice in English and immediate memory for object location are affected by a range of parameters - distance, ownership, visibility and familiarity - that are lexicalized in the demonstrative systems of some other languages. The results support a common set of constraints on language used to talk about space and on (non-linguistic) spatial representation itself. Differences in demonstrative systems across languages may emerge from basic distinctions in the representation and memory for object location. In turn, these distinctions offer a building block from which non-spatial uses of demonstratives can develop.
Article
Full-text available
Lexical-semantic processing impairments in aphasic patients with left hemisphere lesions and non-aphasic patients with right hemisphere lesions were investigated by recording event-related brain potentials (ERPs) while subjects listened to auditorily presented word pairs. The word pairs consisted of unrelated words, or words that were related in meaning. The related words were either associatively related, e.g. 'bread-butter', or were members of the same semantic category without being associatively related, e.g. 'church-villa'. The latter relationships are assumed to be more distant than the former ones. The most relevant ERP component in this study is the N400. In elderly control subjects, the N400 amplitude to associatively and semantically related word targets is reduced relative to the N400 elicited by unrelated targets. Compared with this normal N400 effect, the different patient groups showed the following pattern of results: aphasic patients with only minor comprehension deficits (high comprehenders) showed N400 effects of a similar size as the control subjects. In aphasic patients with more severe comprehension deficits (low comprehenders) a clear reduction in the N400 effects was obtained, both for the associative and the semantic word pairs. The patients with right hemisphere lesions showed a normal N400 effect for the associatively related targets, but a trend towards a reduced N400 effect for the semantically related word pairs. A dissociation between the N400 results in the word pair paradigm and P300 results in a classical tone oddball task indicated that the N400 effects were not an aspecific consequence of brain lesion, but were related to the nature of the language comprehension impairment. The conclusions drawn from the ERP results are that comprehension deficits in the aphasic patients are due to an impairment in integrating individual word meanings into an overall meaning representation. Right hemisphere patients are more specifically impaired in the processing of semantically more distant relationships, suggesting the involvement of the right hemisphere in semantically coarse coding.
Article
Full-text available
This study examined longitudinally how infants' display of gestural and verbal deictic means to indicate targets is related to a certain target topology and to a specific pattern of maternal attention. Eight Spanish 1- and 2-year-olds and their mothers were observed every three months during one year while performing routine activities. Results showed that the younger children usually pointed alone or combined with a vocalization to objects placed within the boundaries of the visual field. The older children usually pointed combined with a content word or with a deictic word, or said a deictic word alone, to indicate objects under manipulation, mainly placed in the close peripersonal space. Across ages, mothers supported the child's use of deictic means by looking at the object as well as at the child's face. Findings are discussed in terms of the functions that gestural and verbal deixis may serve for early verbal development and, specifically, for the grounding of reference.
Article
Full-text available
Gesture, or visible bodily action that is seen as intimately involved in the activity of speaking, has long fascinated scholars and laymen alike. Written by a leading authority on the subject, this 2004 study provides a comprehensive treatment of gesture and its use in interaction, drawing on the analysis of everyday conversations to demonstrate its varied role in the construction of utterances. Adam Kendon accompanies his analyses with an extended discussion of the history of the study of gesture - a topic not dealt with in any previous publication - as well as exploring the relationship between gesture and sign language, and how the use of gesture varies according to cultural and language differences. Set to become the definitive account of the topic, Gesture will be invaluable to all those interested in human communication. Its publication marks a major development, both in semiotics and in the emerging field of gesture studies.
Article
Full-text available
This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
Article
Full-text available
As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and those who believe that gesticulation is not communicative, but rather facilitates speaker-internal word-finding processes. I review a number of key studies relevant to this controversy and conclude that the available empirical evidence supports the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures that are produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to general gesture behavior. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) in testing the capabilities of people with aphasia.
Article
Full-text available
Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.
Article
Full-text available
Early word learning in infants relies on statistical, prosodic, and social cues that support speech segmentation and the attachment of meaning to words. It is debated whether such early word knowledge represents mere associations between sound patterns and visual object features, or reflects referential understanding of words. By measuring an event-related brain potential component known as the N400, we demonstrated that 9-month-old infants can detect the mismatch between an object appearing from behind an occluder and a preceding label with which their mother introduces it. Differential N400 amplitudes have been shown to reflect semantic priming in adults, and its absence in infants has been interpreted as a sign of associative word learning. By setting up a live communicative situation for referring to objects, we demonstrated that a similar priming effect also occurs in young infants. This finding may indicate that word meaning is referential from the outset of word learning and that referential expectation drives, rather than results from, vocabulary acquisition in humans.
Article
Full-text available
Monitoring is an aspect of executive control that entails the detection of errors and the triggering of corrective actions when there is a mismatch between competing responses or representations. In the language domain, research of monitoring has mainly focused on errors made during language production. However, in language perception, for example while reading or listening, errors occur as well and people are able to detect them. A hypothesis that was developed to account for these errors is the monitoring hypothesis for language perception. According to this account, when a strong expectation conflicts with what is actually observed, a reanalysis is triggered to check the input for processing errors reflected by the P600 component. In contrast to what has been commonly assumed, the P600 is thought to reflect a general reanalysis and not a syntactic reanalysis. In this review, we will describe the different studies that led to this hypothesis and try to extend it beyond the language domain.
Article
Full-text available
A close coupling of perception and action processes is assumed to play an important role in basic capabilities of social interaction, such as guiding attention and observation of others' behavior, coordinating the form and functions of behavior, or grounding the understanding of others' behavior in one's own experiences. In the attempt to endow artificial embodied agents with similar abilities, we present a probabilistic model for the integration of perception and generation of hand-arm gestures via a hierarchy of shared motor representations, allowing for combined bottom-up and top-down processing. Results from human-agent interactions are reported demonstrating the model's performance in learning, observation, imitation, and generation of gestures.
Article
Full-text available
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of formant exaggeration. ERP waveform analysis showed significantly enhanced N250 for formant exaggeration, which was more prominent in the right hemisphere than the left. Time-frequency analysis indicated increased neural synchronization for processing formant-exaggerated speech in the delta band at frontal-central-parietal electrode sites as well as in the theta band at frontal-central sites. Minimum norm estimates further revealed a bilateral temporal-parietal-frontal neural network in the infant brain sensitive to formant exaggeration. Collectively, these results provide the first evidence that formant expansion in infant-directed speech enhances neural activities for phonetic encoding and language learning.
Article
Full-text available
This experiment explored the effect of semantic expectancy on the processing of grammatical gender, and vice versa, in German using event-related potentials (ERPs). Subjects were presented with correct sentences and sentences containing an article-noun gender agreement violation. The cloze probability of the nouns was either high or low. ERPs were measured on the nouns. The low-cloze nouns evoked a larger N400 than the high-cloze nouns. Gender violations elicited a left-anterior negativity (LAN, 300-600 msec) for all nouns. An additional P600 component was found only in high-cloze nouns. The N400 was independent of the gender mismatch variable; the LAN was independent of the semantic variable, whereas an interaction of the two variables was found in the P600. This finding indicates that syntactic and semantic processes are autonomous during an early processing stage, whereas these information types interact during a later processing phase.
Article
Full-text available
Using data from more than ten years of research, David McNeill shows that gestures do not simply form a part of what is said and meant but have an impact on thought itself. Hand and Mind persuasively argues that because gestures directly transfer mental images to visible forms, conveying ideas that language cannot always express, we must examine language and gesture together to unveil the operations of the mind.
Article
Full-text available
Microsaccades are very small, involuntary flicks in eye position that occur on average once or twice per second during attempted visual fixation. Microsaccades give rise to EMG eye muscle spikes that can distort the spectrum of the scalp EEG and mimic increases in gamma band power. Here we demonstrate that microsaccades are also accompanied by genuine and sizeable cortical activity, manifested in the EEG. In three experiments, high-resolution eye movements were co-recorded with the EEG: during sustained fixation of checkerboard and face stimuli and in a standard visual oddball task that required the counting of target stimuli. Results show that microsaccades as small as 0.15 degrees generate a field potential over occipital cortex and midcentral scalp sites 100-140 ms after movement onset, which resembles the visual lambda response evoked by larger voluntary saccades. This challenges the standard assumption of human brain imaging studies that saccade-related brain activity is precluded by fixation, even when fully complied with. Instead, additional cortical potentials from microsaccades were present in 86% of the oddball task trials and of similar amplitude as the visual response to stimulus onset. Furthermore, microsaccade probability varied systematically according to the proportion of target stimuli in the oddball task, causing modulations of late stimulus-locked event-related potential (ERP) components. Microsaccades present an unrecognized source of visual brain signal that is of interest for vision research and may have influenced the data of many ERP and neuroimaging studies.
Article
Full-text available
In the language domain, most studies of error monitoring have been devoted to language production. However, in language perception, errors are made as well and we are able to detect them. According to the monitoring theory of language perception, a strong conflict between what is expected and what is observed triggers reanalysis to check for possible perceptual errors, a process reflected by the P600. This is at variance with the dominant view that the P600 reflects syntactic reanalysis or repair, after syntactic violations or ambiguity. In the present study, the prediction of the monitoring theory of language perception was tested, that only a strong conflict between expectancies triggers reanalysis to check for possible perceptual errors, reflected by the P600. Therefore, we manipulated plausibility, and hypothesized that when a critical noun is mildly implausible in the given sentence (e.g., "The eye consisting of among other things a pupil, iris, and eyebrow ..."), a mild conflict arises between the expected and unexpected event; integration difficulties arise due to the unexpectedness but they are resolved successfully, thereby eliciting an N400 effect. When the noun is deeply implausible however (e.g., "The eye consisting of among other things a pupil, iris, and sticker ..."), a strong conflict arises; integration fails and reanalysis is triggered, eliciting a P600 effect. Our hypothesis was confirmed; only when the conflict between the expected and unexpected event is strong enough, reanalysis is triggered.
Article
Full-text available
Measuring event-related potentials (ERPs) has been fundamental to our understanding of how language is encoded in the brain. One particular ERP response, the N400 response, has been especially influential as an index of lexical and semantic processing. However, there remains a lack of consensus on the interpretation of this component. Resolving this issue has important consequences for neural models of language comprehension. Here we show that evidence bearing on where the N400 response is generated provides key insights into what it reflects. A neuroanatomical model of semantic processing is used as a guide to interpret the pattern of activated regions in functional MRI, magnetoencephalography and intracranial recordings that are associated with contextual semantic manipulations that lead to N400 effects.
Book
All languages have demonstratives, but their form, meaning and use vary tremendously across the languages of the world. This book presents the first large-scale analysis of demonstratives from a cross-linguistic and diachronic perspective. It is based on a representative sample of 85 languages. The first part of the book analyzes demonstratives from a synchronic point of view, examining their morphological structures, semantic features, syntactic functions, and pragmatic uses in spoken and written discourse. The second part concentrates on diachronic issues, in particular on the development of demonstratives into grammatical markers. Across languages demonstratives provide a frequent historical source for definite articles, relative and third person pronouns, nonverbal copulas, sentence connectives, directional preverbs, focus markers, expletives, and many other grammatical markers. The book describes the different mechanisms by which demonstratives grammaticalize and argues that the evolution of grammatical markers from demonstratives is crucially distinct from other cases of grammaticalization.
Chapter
This chapter claims, from the viewpoint of cognitive and evolutionary neuroscience, that language originated with a system of manual gestures. It reviews a broad range of data, including studies of language and communicative abilities in apes, the skeletal remains and artefacts in the archaeological record, and the language abilities of hearing, deaf, and language-impaired human populations. Whereas nonhuman primates tend to gesture only when others are looking, their vocalisations are not necessarily directed at others-perhaps because of differences in voluntary control over gestures and vocalisations. One of the first steps in language evolution may have been the advent of bipedalism, which would have allowed the hands to be used for gestures instead of locomotion. There could have been a gradual evolution of a capacity for grammar, although language remained primarily gestural until relatively late in our evolutionary history. The shift from visual gestures to vocal ones would have been gradual, and largely autonomous speech likely arose following a genetic mutation between 100,000 and 50,000 years ago.
Article
The P600 component in event-related potential research has been hypothesised to be associated with syntactic reanalysis processes. We, however, propose that the P600 is not restricted to reanalysis processes, but reflects difficulty with syntactic integration processes in general. First we discuss this integration hypothesis in terms of a sentence processing model proposed elsewhere. Next, in Experiment 1, we show that the P600 is elicited in grammatical, non-garden-path sentences in which integration is more difficult (i.e., "who" questions) relative to a control sentence ("whether" questions). This effect is replicated in Experiment 2. Furthermore, we directly compare the effect of difficult integration in grammatical sentences to the effect of agreement violations. The results suggest that the positivity elicited in "who" questions and the P600 effect elicited by agreement violations have partly overlapping neural generators. This supports the hypothesis that similar cognitive processes, i.e., integration, are involved both in first-pass analysis of "who" questions and in dealing with ungrammaticalities (reanalysis).
Article
This chapter explores how gestures contribute to comprehension, how gesturing affects speech, and what can be learned from studying conversational gestures. The primary function of conversational hand gestures is to aid in the formulation of speech. Gestures can convey nonsemantic information. The study of speech and gestures overlaps with the study of person perception and attribution processes. The significance of gestures can be ambiguous, which affects the meanings and consequences attributed to the observed gestures. A typology of gestures comprises adaptors, symbolic gestures, and conversational gestures. Different types of conversational gestures can be distinguished, namely motor movements and lexical movements. Conversational hand gestures have been assumed to convey semantic information. Several studies that attempt to assess the kinds of information conversational gestures convey to naive observers, and the extent to which gestures enhance the communicativeness of spoken messages, are described in the chapter.
Article
How linguistic expressions are contextually constrained is of vital importance to our understanding of language as a formal representational system and a vehicle of social communication. This study collected behavioral and event-related potential (ERP) data to investigate neural processing of two entity-referring spatial demonstrative expressions, this one and that one, in different contexts involving the speaker, the hearer and the referred-to object. Stimulus presentation varied distance and gaze conditions with either semantically congruent or incongruent audiovisual pairings. Behavioral responses showed that distance determined the demonstrative form only in joint gaze conditions. The ERP data for the joint gaze conditions further indicated significant congruent vs. incongruent differences in the post-stimulus window of 525–725 ms for the hearer-associated spatial context. Standardized Low Resolution Brain Electromagnetic Tomography (sLORETA) showed left temporal and bilateral parietal activations for the effect. The results provide the first neural evidence that the use of spatial demonstratives in English is obligatorily influenced by two factors: (1) shared gaze of speaker and hearer, and (2) the relative distance of the object to the speaker and hearer. These findings have important implications for cognitive-linguistic theories and studies on language development and social discourse.
Article
This note provides a statistical-graphical method for the evaluation of the statistical significance of difference potentials from a group of subjects, and for the comparison of difference potentials between two groups. A table of the lengths of statistically significant intervals for various sampling interval lengths, numbers of subjects, and autocorrelation parameters is presented.
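The note's core idea, point-wise tests on the group difference waveform that only count when they persist over a minimum run of consecutive samples, can be sketched as follows. The run-length threshold below is purely illustrative: the appropriate value comes from the paper's table and depends on the sampling interval, the number of subjects, and the autocorrelation parameter.

```python
import numpy as np
from scipy import stats

def significant_intervals(diff_waves, alpha=0.05, min_run=11):
    """Run point-wise one-sample t-tests on a (subjects x samples) array
    of difference potentials and return (start, end) sample indices of
    runs of at least `min_run` consecutive significant samples.
    `min_run=11` is an illustrative threshold, not a tabled value."""
    _, p = stats.ttest_1samp(diff_waves, popmean=0.0, axis=0)
    sig = p < alpha
    intervals, start = [], None
    for i, flag in enumerate(sig):
        if flag and start is None:
            start = i                      # a run of significance begins
        elif not flag and start is not None:
            if i - start >= min_run:       # run long enough to report
                intervals.append((start, i - 1))
            start = None
    if start is not None and len(sig) - start >= min_run:
        intervals.append((start, len(sig) - 1))
    return intervals

# Demo with random data: 16 subjects, 300 samples per difference wave.
rng = np.random.default_rng(0)
print(significant_intervals(rng.normal(size=(16, 300))))
```

Requiring a run of consecutive significant samples controls the false positives that point-wise testing of autocorrelated ERP data would otherwise produce.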
Article
The present paper studies how, in deictic expressions, the temporal interdependency of speech and gesture is realized in the course of motor planning and execution. Two theoretical positions were compared. On the “interactive” view the temporal parameters of speech and gesture are claimed to be the result of feedback between the two systems throughout the phases of motor planning and execution. The alternative “ballistic” view, however, predicts that the two systems are independent during the phase of motor execution, the temporal parameters having been preestablished in the planning phase. In four experiments subjects were requested to indicate which of an array of referent lights was momentarily illuminated. This was done by pointing to the light and/or by using a deictic expression (this/that light). The temporal and spatial course of the pointing movement was automatically registered by means of a Selspot opto-electronic system. By analyzing the moments of gesture initiation and apex, and relating them to the moments of speech onset, it was possible to show that, for deictic expressions, the ballistic view is very nearly correct.
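As a rough illustration of how such temporal landmarks might be extracted from tracker data, consider the sketch below. The velocity threshold, the use of a one-dimensional extension signal, and the function name are all assumptions for demonstration, not the registration or analysis procedure of the original study.

    import numpy as np

    def gesture_landmarks(extension, fs):
        """Estimate gesture initiation and apex times from a 1-D trajectory.

        extension : (n_samples,) fingertip distance from the rest position
        fs        : sampling rate in Hz
        """
        vel = np.gradient(extension) * fs                 # velocity in units/s
        moving = np.abs(vel) > 0.05 * np.abs(vel).max()   # assumed threshold
        onset_idx = int(np.argmax(moving))    # first above-threshold sample
        apex_idx = int(np.argmax(extension))  # point of maximal extension
        return onset_idx / fs, apex_idx / fs  # times in seconds

Subtracting speech onset from these landmark times yields the kind of temporal measures on which the ballistic and interactive views make different predictions.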
Article
Suppose a speaker gestures toward four flowers and asks a listener, “How would you describe the color of this flower?” How does the listener infer which of the four flowers is being referred to? It is proposed that he selects the one he judges to be most salient with respect to the speaker's and his common ground—their mutual knowledge, beliefs, and suppositions. In a field experiment, it was found that listeners would accept demonstrative references (like this flower) with more than one potential referent. Three further experiments showed that listeners select referents based on estimates of their mutual beliefs about perceptual salience, the speaker's goals, and the speaker's presuppositions and assertions. Common ground, it is argued, is necessary in general for understanding demonstrative reference.
Article
The development of comprehension and production of spatial deictic terms “this/that”, “here/there”, “my/your”, and “in front of/behind” was investigated in the context of a hide-and-seek game. The first three contrasts are produced according to the speaker's perspective, so comprehension requires a nonegocentric viewpoint. The contrast “in front of/behind” is produced relative to the hearer, i.e., production is nonegocentric. The subjects were 39 children, ranging in age from 2.5 to 4.5 years, and 18 college undergraduates. The 2.5-year-old children were best at those contrasts which do not require a shift in perspective. The 3- and 4-year-old children were adept at switching to the speaker's perspective for comprehension of the terms requiring this shift, i.e., were nonegocentric. Four-year-olds were also capable of nonegocentric production of “in front of/behind”.
Article
We recorded event-related brain potentials (ERPs) while participants read sentences, some of which contained an anomalous word. In the critical sentences (e.g., The meal was devouring…), the syntactic cues unambiguously signaled an Agent interpretation of the subject noun, whereas the semantic cues supported a Theme interpretation. An Agent interpretation would render the main verb semantically anomalous (as meals do not devour things). Conversely, the Theme interpretation would render the main verb syntactically anomalous (as the -ED form, not the -ING form, is syntactically appropriate for this interpretation). We report that the main verbs in such sentences elicit the P600 effect associated with syntactic anomalies, rather than the N400 effect associated with semantic anomalies. We conclude that, at least under certain conditions, semantic information is “in control” of how words are combined during sentence processing.
Article
People of all ages, cultures and backgrounds gesture when they speak. These hand movements are so natural and pervasive that researchers across many fields - from linguistics to psychology to neuroscience - have claimed that the two modalities form an integrated system of meaning during language production and comprehension. This special relationship has implications for a variety of research and applied domains. Gestures may provide unique insights into language and cognitive development, and also help clinicians identify, understand and even treat developmental disorders in childhood. In addition, research in education suggests that teachers can use gesture to become even more effective in several fundamental aspects of their profession, including communication, assessment of student knowledge, and the ability to instill a profound understanding of abstract concepts in traditionally difficult domains such as language and mathematics. This work converging from multiple perspectives will push researchers and practitioners alike to view hand gestures in a new and constructive way.
Article
This paper presents event-related brain potential (ERP) data from an experiment on syntactic processing. Subjects read individual sentences containing one of three different kinds of violations of the syntactic constraints of Dutch. The ERP results provide evidence for an electrophysiological response to syntactic processing that is qualitatively different from established ERP responses to semantic processing. We refer to this electrophysiological manifestation of parsing as the Syntactic Positive Shift (SPS). The SPS was observed in an experiment in which no task demands, other than to read the input, were imposed on the subjects. The pattern of responses to the different kinds of syntactic violations suggests that the SPS indicates the impossibility for the parser to assign the preferred structure to an incoming string of words, irrespective of the specific syntactic nature of this preferred structure. The implications of these findings for further research on parsing are discussed.
Article
In 1980, the N400 event-related potential was described in association with semantic anomalies within sentences. When, in 1992, a second waveform, the P600, was reported in association with syntactic anomalies and ambiguities, the story appeared to be complete: the brain respected a distinction between semantic and syntactic representations and processes. Subsequent studies showed that the P600 to syntactic anomalies and ambiguities was modulated by lexical and discourse factors. Most surprisingly, more than a decade after the P600 was first described, a series of studies reported that semantic verb-argument violations, in the absence of any violations or ambiguities of syntax, can evoke robust P600 effects and no N400 effects. These observations have raised fundamental questions about the relationship between semantic and syntactic processing in the brain. This paper provides a comprehensive review of the recent studies that have demonstrated P600s to semantic violations in light of several proposed triggers: semantic-thematic attraction, semantic associative relationships, animacy and semantic-thematic violations, plausibility, task, and context. I then discuss these findings in relation to a unifying theory that attempts to bring some of these factors together and to link the P600 produced by semantic verb-argument violations with the P600 evoked by unambiguous syntactic violations and syntactic ambiguities. I suggest that normal language comprehension proceeds along at least two competing neural processing streams: a semantic memory-based mechanism, and a combinatorial mechanism (or mechanisms) that assigns structure to a sentence primarily on the basis of morphosyntactic rules, but also on the basis of certain semantic-thematic constraints. I suggest that conflicts between the different representations that are output by these distinct but interactive streams lead to a continued combinatorial analysis that is reflected by the P600 effect. I discuss some of the implications of this non-syntactocentric, dynamic model of language processing for understanding individual differences, language processing disorders and the neuroanatomical circuitry engaged during language comprehension. Finally, I suggest that these two processing streams may generalize beyond the language system to real-world visual event comprehension.
Article
Human communication in a natural context implies the dynamic coordination of contextual clues, paralinguistic information, and literal as well as figurative language use. In the present study we constructed a paradigm with four types of video clips: literal and metaphorical expressions accompanied by congruent and incongruent gesture actions. Participants were instructed to classify the gesture accompanying the expression as congruent or incongruent by pressing two different keys while electrophysiological activity was being recorded. We compared behavioral measures and event-related potential (ERP) differences triggered by the gesture stroke onset. Accuracy data showed that incongruent metaphorical expressions were more difficult to classify. Reaction times were modulated by incongruent gestures, by metaphorical expressions, and by a gesture-expression interaction. No behavioral differences were found between the literal and metaphorical expressions when the gesture was congruent. Metaphorical expressions produced greater negativity in the N400-like and LPC-like (late positive complex) components. The N400-like modulation for metaphorical expressions showed a greater difference between congruent and incongruent categories over the left anterior region, compared with the literal expressions. More importantly, the literal congruent and the metaphorical congruent categories did not differ. Accuracy, reaction times, and ERPs provide convergent support for a greater contextual sensitivity of metaphorical expressions.
Article
This study employed behavioral and electrophysiological measures to examine selective listening of concurrent auditory stimuli. Stimuli consisted of four compound sounds, each created by mixing a pure tone with filtered noise bands at a signal-to-noise ratio of +15 dB. The pure tones and filtered noise bands each contained two levels of pitch. Two separate conditions were created; the background stimuli varied randomly or were held constant. In separate blocks, participants were asked to judge the pitch of tones or the pitch of filtered noise in the compound stimuli. Behavioral data consistently showed lower sensitivity and longer response times for classification of filtered noise when compared with classification of tones. However, differential effects were observed in the peak components of auditory event-related potentials (ERPs). Relative to tone classification, the P1 and N1 amplitudes were enhanced during the more difficult noise classification task in both test conditions, but the peak latencies were shorter for P1 and longer for N1 during noise classification. Moreover, a significant interaction between condition and task was seen for the P2. The results suggest that the essential ERP components for the same compound auditory stimuli are modulated by listeners' focus on specific aspects of information in the stimuli.
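As a concrete illustration of this kind of stimulus construction, the sketch below mixes a pure tone with band-pass-filtered noise at a +15 dB signal-to-noise ratio. All specific values here (sampling rate, tone frequency, noise band, duration, filter order) are assumptions for demonstration, not the parameters used in the study.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    fs = 44100                            # sampling rate in Hz (assumed)
    t = np.arange(int(fs * 0.5)) / fs     # 500 ms stimulus (assumed duration)

    tone = np.sin(2 * np.pi * 440.0 * t)  # pure tone; 440 Hz assumed

    # Band-limited noise: white noise through a 1-2 kHz band-pass (assumed band)
    sos = butter(4, [1000, 2000], btype="bandpass", fs=fs, output="sos")
    noise = sosfilt(sos, np.random.randn(t.size))

    # Scale the noise so the tone sits +15 dB above it in RMS power
    noise *= rms(tone) / (rms(noise) * 10 ** (15.0 / 20))

    compound = tone + noise
    compound /= np.abs(compound).max()    # normalize to avoid clipping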
Article
Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated-systems hypothesis, which explains two ways in which gesture and speech are integrated--through mutual and obligatory interactions--in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: "chop"; gesture: chop) than when they contained incongruent information (speech: "chop"; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: "chop"; gesture: cut) than for strong incongruities (speech: "chop"; gesture: twist). Crucial for the integrated-systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture's influence on speech was obligatory. The results confirm the integrated-systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
Article
The neurological correlates of pointing comprehension in adults and 8-month-old infants are explored. Both age groups demonstrate differential activation to congruent and incongruent pointing gestures over posterior temporal areas. The functional similarity of the adult N200 and the infant P400 component suggests that they might have a common source.
Article
In three experimental conditions, we tested matched children with and without autism (n = 15 per group) for their comprehension and use of first person plural ('we') and third person singular ('he') pronouns, and examined whether such linguistic functioning related to their social interaction. The groups were indistinguishable in their comprehension and use of 'we' pronouns, although within each group, such usage was correlated with ratings of interpersonal connectedness with the collaborator. On the other hand, participants with autism were less likely to use third person pronouns or to show patterns of eye gaze reflecting engagement with an interlocutor's stance vis-à-vis a third person. In these settings, atypical third person pronoun usage seemed to reflect limited communicative engagement, but first person pronouns were relatively spared.
Article
The processing of semantic and structural information concerning the relation between a verb and its arguments is investigated in German in two experiments: In Experiment 1 the verb precedes all its arguments, whereas in Experiment 2 all arguments precede the verb. In both experiments, participants read sentences containing a semantic violation concerning the thematic role, a violation of the number of arguments, or a violation of the grammatical type of the argument (direct versus indirect object) indicated by case marking. Event-related brain potentials (ERPs) were recorded during sentence reading. ERPs displayed different patterns for each of the violation types in the two experiments. The specific ERP patterns found for the different violation types indicate that the processes concerning the thematic role violation are primarily semantic in nature and that those concerning the grammatical type of argument are purely syntactic. Interestingly, processes concerning the number of arguments seem to trigger semantic processes followed by syntactic processes. The combined findings from the two experiments suggest that the parser uses verb-specific information to build up syntactic and thematic structures against which incoming arguments are checked and that argument-specific information can be used to build up syntactic and thematic structures against which the incoming verb has to be checked to allow lexical integration.
Article
We employed semi-structured tests to determine whether children with autism produce and comprehend deictic (person-centred) expressions such as 'this'/'that', 'here'/'there' and 'come'/'go', and whether they understand atypical non-verbal gestural deixis in the form of directed head-nods to indicate location. In Study 1, most participants spontaneously produced deictic terms, often in conjunction with pointing. Yet only among children with autism were there participants who referred to a location distal to themselves with the terms 'this' or 'here', or who made atypical points with unusual precision, often lining up with an eye. In Study 2, participants with autism were less accurate in responding to instructions involving contrastive deictic terms, and fewer responded accurately to indicative head nods.
Article
Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures (e.g., "converging" and "decreasing") was the same in amplitude, latency, and topography as that elicited by words primed with action gestures (e.g., "drive" and "lift"), and that for terminal words of sentences. Significance and conclusion: Findings provide a within-subject demonstration that the topographies of the gesture N400 effect for both action and mathematical words are indistinguishable from that of the standard language N400 effect. This suggests that mathematical function words are processed by the general language semantic system and do not appear to involve areas involved in other mathematical concepts (e.g., numerosity).
Article
In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event-related potentials (ERPs) were recorded while participants viewed video clips of an actor uttering metaphorical expressions and producing bodily gestures that were congruent or incongruent with the metaphorical meaning of such expressions. This modality of stimulus presentation allows a more ecological approach to meaning integration. When ERPs were calculated using the gesture stroke as the time-locking event, gesture incongruity with the metaphorical expression modulated the amplitude of the N400 and of the late positive complex (LPC). This suggests that gestural and speech information are combined online to make sense of the interlocutor's linguistic production at an early stage of metaphor comprehension. Our data favor the idea that meaning construction is globally integrative and highly context-sensitive.
Article
Spatial demonstratives (this/that) play a crucial role when indicating object locations using language. However, the relationship between the use of these proximal and distal linguistic descriptors and the near (peri-personal) versus far (extra-personal) perceptual space distinction is a source of controversy [Kemmerer, D. (1999). "Near" and "far" in language and perception. Cognition, 73, 35–63] and has hitherto been underinvestigated. Two experiments examined the influence of object distance from the speaker, tool use (participants pointed at objects with their finger/arm or with a stick), and interaction with objects (whether or not participants placed the objects themselves) on spatial demonstrative use (e.g., this/that red triangle) in English (this/that) and Spanish (este/ese/aquel). The results show that the use of demonstratives across the two languages is affected by distance from the speaker and by both tool use and interaction with objects. These results support the view that spatial demonstrative use corresponds with a basic distinction between near and far perceptual space.