Article

What iconic gesture fragments reveal about gesture-speech integration: when synchrony is lost, memory can help.

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
Journal of Cognitive Neuroscience, 03/2010; 23(7):1648-63. DOI: 10.1162/jocn.2010.21498

ABSTRACT The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To manipulate gesture-speech synchrony more precisely, we used gesture fragments instead of complete gestures, thereby avoiding the long temporal overlap that gestures usually have with their coexpressive speech. In a pretest, we therefore identified the minimal duration of an iconic gesture fragment needed to disambiguate a homonym (i.e., the disambiguation point). In three subsequent ERP experiments, we then investigated whether the gesture information available at the disambiguation point has immediate as well as delayed consequences for the processing of a temporarily ambiguous spoken sentence, and whether these gesture-speech integration processes are susceptible to temporal synchrony. Experiment 1, which used asynchronous stimuli as well as an explicit task, showed clear N400 effects at the homonym as well as at the target word presented further downstream, suggesting that asynchrony does not prevent integration under explicit task conditions. No such effects were found when asynchronous stimuli were presented with a shallower task (Experiment 2). Finally, when gesture fragment and homonym were synchronous, results similar to those of Experiment 1 were obtained, even under shallow task conditions (Experiment 3). We conclude that when iconic gesture fragments and speech are in synchrony, their interaction is more or less automatic. When they are not, more controlled, active memory processes are necessary to combine the gesture fragment and the speech context in a way that correctly disambiguates the homonym.

  • ABSTRACT: Understanding actions based on either language or action observation is presumed to involve the motor system, reflecting the engagement of an embodied conceptual network. We examined how linguistic and gestural information were integrated in a series of cross-domain priming studies. We varied the task demands across three experiments in which symbolic gestures served as primes for verbal targets. Primes were clips of symbolic gestures taken from a rich set of emblems. Participants responded by making a lexical decision to the target (Experiment 1), naming the target (Experiment 2), or performing a semantic relatedness judgment (Experiment 3). The magnitude of semantic priming was larger in the relatedness judgment and lexical decision tasks than in the naming task. Priming was also observed in a control task in which the primes were pictures of landscapes with conceptually related verbal targets. However, for these stimuli, the amount of priming was similar across the three tasks. We propose that action observation triggers an automatic, pre-lexical spread of activation, consistent with the idea that language-gesture integration occurs in an obligatory and automatic fashion.
    Psychological Research, 01/2013.
  • ABSTRACT: Language and action systems are functionally coupled in the brain, as demonstrated by converging evidence from functional magnetic resonance imaging (fMRI), electroencephalography (EEG), transcranial magnetic stimulation (TMS), and lesion studies. In particular, this coupling has been demonstrated using the action-sentence compatibility effect (ACE), in which motor activity and language interact. The ACE task requires participants to listen to sentences describing actions typically performed with an open hand (e.g., clapping), a closed hand (e.g., hammering), or without any hand action (neutral), and to press a large button with either an open or a closed hand position immediately upon comprehending each sentence. The ACE is defined as a longer reaction time (RT) in the action-sentence incompatible conditions than in the compatible conditions. Here we investigated direct motor-language coupling in two novel and uniquely informative ways: first, by measuring the behavioural ACE in patients with motor impairment (early Parkinson's disease, EPD), and second, by recording directly from the cortex (electrocorticography, ECoG) in epileptic patients. In Experiment 1, EPD participants with a preserved general cognitive repertoire showed a much diminished ACE relative to non-EPD volunteers. Moreover, a correlation between ACE performance and action-verb processing (kissing and dancing test, KDT) was observed. Direct cortical recordings (ECoG) in motor and language areas (Experiment 2) demonstrated simultaneous bidirectional effects: motor preparation affected language processing (N400 at the left inferior frontal gyrus and middle/superior temporal gyrus), and language processing affected activity in movement-related areas (motor potential at premotor cortex and M1). Our findings show that the ACE paradigm requires ongoing integration of preserved motor and language coupling (abolished in EPD) and engages motor and temporal cortices in a bidirectional way. In addition, both experiments suggest the presence of a motor-language network that is not restricted to somatotopically defined brain areas. These results open new pathways in the fields of motor diseases, theoretical approaches to language understanding, and models of action-perception coupling.
    Cortex, 03/2012.
  • ABSTRACT: In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and the visual modality depends on the same or on different brain networks is largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips of an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (-) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; in the visual control condition, the volunteers watched meaningless gestures (G-), while in the acoustic control condition a foreign language was presented (S-). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, including bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system would be to take too narrow a view. Rather, our results indicate that these regions constitute a supramodal semantic processing network.
    PLoS ONE, 01/2012; 7(11):e51207.
