Article

Embodied cognition: The interplay between automatic resonance and selection‐for‐action mechanisms


Abstract

Embodied cognition, the notion that cognitive processes develop from goal-directed interactions between organisms and their environment, has stressed the automaticity of perceptual and motor resonance mechanisms in other cognitive domains such as language. The present paper begins by reviewing the abundant empirical evidence for automatic resonance mechanisms between action and language, along with examples from other cognitive domains such as number processing. Special attention is given to the social implications of embodied cognition. Next, more recent evidence indicating the importance of the action context for the interaction between action and language is reviewed. Finally, a theoretical account is provided of how automatic and selective mechanisms can be incorporated into an embodied cognitive perspective. Copyright © 2009 John Wiley & Sons, Ltd.


... Researchers are convinced that the interplay between the external world and a person is a key determinant of social interactions. They assume that embodied experiences in social communication are synthesized and then transformed [21,22]. Through the lens of social embodiment [23], a branch of social psychology, it is assumed that a person's body perception in interpersonal communication is largely determined by sensory channels. ...
... The theories of social embodiment [22,24] offer empirical clues about how to simulate the inter-transmission between sensory channels and an intelligent model. For example, social embodiment studies have investigated how multiple nonverbal cues from humans shape a person's perception and emotional responses. ...
... A key theoretical framework highlighting social embodiment is the perceptual symbol system proposed by Barsalou, Niedenthal, Barbey and Ruppert [22]. The theory explains the transaction of concrete and abstract conceptual information through sensory-motor cues. ...
Article
Full-text available
The purpose of this paper is to review the scholarly work on social embodiment as it bears on the design of non-player characters in virtual reality (VR)-based social skill training for autistic children. Such training provides a naturalistic environment that allows autistic children to shape socially appropriate behaviors for the real world. To build this training environment, it is necessary to identify how to simulate its social components. In particular, the design of non-player characters (NPCs) is essential to the quality of the simulated social interactions during the training. Through this literature review, the study proposes multiple design themes that capture the nature of social embodiment in interactions with NPCs in VR-based social skill training.
... The present research builds on two key discoveries within the embodied cognition approach: resonance mechanisms between action and action-related language (hereafter 'action language') (Rueschemeyer, Lindemann, Van Elk, & Bekkering, 2009), and the finding that mirror neurons are active during the processing of action language (e.g. Hauk & Pulvermüller, 2003; Willems, Labruna, D'Esposito, Ivry, & Casasanto, 2011). ...
... When seeing or hearing language that implies an action, we use sensory and motor resources that help us understand the meaning of that language (Rueschemeyer et al., 2009). On hearing the word 'throw', for example, we simulate the feeling of a ball in the hand (sensory) as well as the making of a throwing movement (motor), and during the processing of this word the motor part of the brain actually becomes active (Rueschemeyer, Brass, & Friederici, 2007). ...
... It is therefore easy to imagine that action language and the execution of actions are interrelated and influence each other. Indeed, it is assumed that language and actions share the same semantic information and that this information is accessible to both domains (Rueschemeyer et al., 2009). If the information is retrieved by one domain, it can then be used more quickly in the other. ...
Research
Full-text available
Bachelor's thesis (in Dutch) on the difference between the mental simulations of right- and left-handed people.
... Finger counting thus enables a symbolic externalisation of cognitive content that can be regarded as a form of embodiment (so-called symbolic off-loading; Wilson, 2002). Whether this sensorimotor symbol system for numbers holds a privileged position compared with other non-bodily symbolic notations, as the embodied cognition hypothesis would suggest (Barsalou, 2008; Rueschemeyer et al., 2009), remains pure speculation given the current evidence. In contrast to this very weak form of embodiment as symbolic off-loading, embodiment in cognitive psychology and psycholinguistics is usually understood as an obligatory and automatic activation of sensorimotor codes during the reading of words and numbers, which establishes the crucial semantic reference and thus forms the basis of every associative representation (the so-called symbol grounding hypothesis; Barsalou, 2008). ...
... In summary, finger counting is of central importance for making numerical magnitudes tangible during child development, and this has positive effects on mathematical concepts acquired later. Whether finger-based representations additionally contribute to the experience of abstract numerical magnitudes and can thus serve as a foundation for the cognitive representation of numbers (Lindemann, Rueschemeyer & Bekkering, 2009) ...
... In addition, it could be that spatial effects during language processing depend on the nature of the task, and that spatial congruency effects are mainly elicited when subjects explicitly make a judgment about visual properties of the words' referents (Louwerse & Jeuniaux, 2008, 2010). Several studies have shown that spatial effects during language processing are strongly modulated by top-down influences, such as perceptual or motor imagery (Hoenig, Sim, Bochev, Herrnberger, & Kiefer, 2008; Louwerse & Jeuniaux, 2008, 2010; Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009). Accordingly, it could be that a spatial congruency effect for body parts is only observed when the spatial dimension is made task-relevant and it is clear to the subject what type of spatial body representation is relevant (i.e. ...
... Two key questions in discussions of embodied cognition are (1) whether activation in modality-specific brain areas is automatic and bottom-up or driven by contextual and top-down influences (Pulvermüller & Fadiga, 2010) and (2) to what extent activation in modality-specific brain areas is necessary for language understanding (Fischer & Zwaan, 2008; Mahon & Caramazza, 2008). The finding that both the spatial congruency effect and the distance effect are modulated by task requirements (semantic categorization vs. iconicity judgment) argues against the view that the activation of spatial information is automatic and necessary for language understanding (see also: Hoenig et al., 2008; Louwerse & Jeuniaux, 2008, 2010; Rueschemeyer et al., 2009). Only when participants made an iconicity judgment about the spatial properties of body semantics was a clear spatial congruency effect observed. ...
Article
The present study addressed the relation between body semantics (i.e. semantic knowledge about the human body) and spatial body representations, by presenting participants with word pairs, one below the other, referring to body parts. The spatial position of the word pairs could be congruent (e.g. EYE / MOUTH) or incongruent (MOUTH / EYE) with respect to the spatial position of the words' referents. In addition, the spatial distance between the words' referents was varied, resulting in word pairs referring to body parts that are close (e.g. EYE / MOUTH) or far in space (e.g. EYE / FOOT). A spatial congruency effect was observed when subjects made an iconicity judgment (Experiments 2 and 3) but not when making a semantic relatedness judgment (Experiment 1). In addition, when making a semantic relatedness judgment (Experiment 1) reaction times increased with increased distance between the body parts but when making an iconicity judgment (Experiments 2 and 3) reaction times decreased with increased distance. These findings suggest that the processing of body-semantics results in the activation of a detailed visuo-spatial body representation that is modulated by the specific task requirements. We discuss these new data with respect to theories of embodied cognition and body semantics.
... When language comprehension influences action components, we generally speak of motor resonance, based on the observation that understanding an action-related stimulus activates the same neural substrates encoding the planning and execution of the corresponding action. Conversely, when a motor component of an action modulates lexico-semantic processing, we speak of semantic resonance (e.g., Bidet-Ildei, Beauprez, & Badets, 2020; Mollo, Pulvermüller, & Hauk, 2016; Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009). An example of such semantic resonance, that is, action-induced effects on word comprehension, can be found in the study by Lindemann et al. ...
Article
Full-text available
According to embodied theories, motor and language processing bidirectionally interact: Motor activation modulates behavior in lexico-semantic tasks (semantic resonance), whereas understanding motor-related words entails activation of the corresponding motor brain areas (motor resonance). Whereas many studies have investigated this interaction in the first language (L1), only a few have done so in a second language (L2), focusing on motor resonance. Here, we directly compared L1 and a late L2, for the first time both in terms of semantic and motor resonance and in terms of magnitude and timing, by taking advantage of single-pulse TMS. Twenty-five bilinguals judged, in each language, whether hand motor-related (“grasp”) and non-motor-related verbs (“believe”) were physical or mental. Meanwhile, we applied TMS over the hand motor cortex at 125, 275, 350, and 500 msec post verb onset, and recorded behavioral responses and TMS-induced motor evoked potentials. TMS induced faster responses for L1 versus L2 motor and nonmotor verbs at 125 msec (three-way interaction β = −0.0442, 95% CI [−0.0814, −0.0070]), showing a semantic resonance effect at an early stage of word processing in L1 but not in L2. Concerning motor resonance, TMS-induced motor evoked potentials at 275 msec revealed higher motor cortex excitability for L2 versus L1 processing (two-way interaction β = 0.095, 95% CI [0.017, 0.173]). These findings confirm action–language interaction at early stages of word recognition, provide further evidence that L1 and L2 are differently embodied, and call for an update of existing models of bilingualism and embodiment, concerning both language representations and processing.
... He argued that culture is a human cognitive process that exists both inside and outside of the mind and that it is enacted through activity. Subsequent researchers in the field of cognitive ecology have demonstrated that human cognitive processes are embodied and enacted and develop through goal-orientated action and interactions between the human organism and its environment (Rueschemeyer & Bekkering, 2009;Hutchins, 2010). It follows, they contended, that individual cognition and action happens as part of the environment, not in isolation from it. ...
Thesis
Full-text available
Organisations are continuously seeking to increase employee engagement to improve organisational performance and gain competitive advantage. Gamification — the use of game mechanics in non-game contexts — is a nascent and increasingly applied approach to improve engagement and holds promise to address current engagement gaps in workplaces. Applying gamification to the complexities and idiosyncrasies of the workplace, however, presents challenges for researchers and gamification designers. This thesis argues that Cultural Historical Activity Theory (CHAT) provides a theoretical framework for addressing these challenges in both research and practice, and it develops methods of adapting the use of CHAT to understand the unique factors of a particular workplace context. Using a qualitative design-based research method, a gamification experience was designed for staff of three workplaces using the same five design steps in all contexts and implementing a gamification experience for three months. Three organisations participated in this study: a school seeking to increase innovative teaching practices in its teachers; a restaurant wanting to improve team interaction and restaurant management; and a government department wanting to increase professional development activities. The findings from this study demonstrate the positive effects gamification can have in the workplace, including increased staff engagement and motivation, improved team interactions and communication, increased productivity and better clarity on team goals, and increased workplace satisfaction. Significantly, the gamification design process helped alleviate systemic tensions in the workplace and demonstrates that gamification can contribute to a more productive and higher performing organisation. 
This thesis makes several unique contributions, including providing additional qualitative evidence of the effectiveness of gamification and being the first study to extend Cultural Historical Activity Theory and practice to the gamification design process. Finally, this thesis provides a gamification design process and evaluation framework for designers to use when implementing gamification in the workplace.
... Fourth, overt movement or stimulation of these motor areas has a causal effect on the simultaneous processing of specific types of action words. Vice versa, action word processing may impact specific motor mechanisms, with effects visible in behaviour and in electrophysiological brain recordings (Fischer & Zwaan, 2008; Glenberg & Kaschak, 2003; Ibanez et al., 2012; Pulvermüller, Hauk, Nikulin, & Ilmoniemi, 2005; Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009; Schomers & Pulvermüller, 2016; Schomers, Kirilina, Weigand, Bajbouj, & Pulvermüller, 2015; Shebani & Pulvermüller, 2013). Fifth, and finally, movement disorders and clinical impairments to motor systems are associated with specific processing impairments or abnormalities for action-related words, which call on action knowledge in the retrieval of their meaning (Bak & Chandran, 2012; Boulenger et al., 2008; Cardona et al., 2014; Cotelli et al., 2006; García & Ibáñez, 2014; Grossman et al., 2008; Kemmerer, 2015; Neininger & Pulvermüller, 2001; Pulvermüller et al., 2010). ...
Article
Full-text available
Within the neurocognitive literature there is much debate about the role of the motor system in language, social communication and conceptual processing. We suggest, here, that autism spectrum conditions (ASC) may afford an excellent test case for investigating and evaluating contemporary neurocognitive models, most notably a neurobiological theory of action perception integration where widely-distributed cell assemblies linking neurons in action and perceptual brain regions act as the building blocks of many higher cognitive functions. We review a literature of functional motor abnormalities in ASC, following this with discussion of their neural correlates and aberrancies in language development, explaining how these might arise with reference to the typical formation of cell assemblies linking action and perceptual brain regions. This model gives rise to clear hypotheses regarding language comprehension, and we highlight a recent set of studies reporting differences in brain activation and behaviour in the processing of action-related and abstract-emotional concepts in individuals with ASC. At the neuroanatomical level, we discuss structural differences in long-distance frontotemporal and frontoparietal connections in ASC, such as would compromise information transfer between sensory and motor regions. This neurobiological model of action perception integration may shed light on the cognitive and social-interactive symptoms of ASC, building on and extending earlier proposals linking autistic symptomatology to motor disorder and dysfunction in action perception integration. Further investigating the contribution of motor dysfunction to higher cognitive and social impairment, we suggest, is timely and promising as it may advance both neurocognitive theory and the development of new clinical interventions for this population and others characterised by early and pervasive motor disruption.
... Learning environments consist of a myriad of internal and external resources and interactions that can be likened to a biological ecosystem (Hutchins, 2010). Researchers in the nascent field of cognitive ecology have demonstrated that human cognitive processing and learning are embodied and enacted, developing through goal-orientated action and interactions between the human organism and its environment (Hutchins, 2010; Rueschemeyer & Bekkering, 2009). It follows, they argue, that individual learning happens as part of the environment, not in isolation from it. ...
Book
Full-text available
The conference's general theme, Research Perspectives on Creative Intersections, captured the overall conference spirit. It also reflects the conference planning and organisational processes, which involved a community of international scholars located in different institutions, faculties, schools and departments. The interdisciplinary nature of the conference enabled active intersections of scholars from the fields of design, social sciences and business studies. The mingling of researchers from diverse disciplines reflects the need for interdisciplinary approaches to research complex issues related to innovation. The intersection between emerging and established researchers was an intended aspect of the conference, since today's PhD candidates will drive future research. The conference succeeded in attracting a significant number of PhD candidates, who represented a third of the conference delegates. This is a good indication of future growth in research related to design innovation. Altogether, 295 authors submitted 140 full papers and 31 workshop proposals. These numbers indicate that single-authored research is no longer the norm. The intersection that stems from collaboration amongst researchers to undertake and disseminate research is now becoming established practice within design innovation research. The 19 conference tracks, for which the papers were submitted, were organised within 7 overarching themes (see Table 1). The track facilitators ultimately shaped the overall conference scope and direction, and the tracks' topics acted as the focal points for the overall Call for Papers. Our thanks go to all 69 track facilitators, who collectively were responsible for the conference programme; we thank them for their valuable service on the International Scientific Programme Committee.

Table 1 Conference Tracks
Theme 1) New Models of Innovation
Track 1a. The Interplay between Science, Technology and Design
Track 1b. Interdisciplinary Perspectives and Trends in Open Innovation
Track 1c. FROM R&D TO D&R: Challenging the Design Innovation Landscape
Track 1d. Design creating value at intersections
Track 1e. Design management transforming innovation strategy
Theme 2) Product-Service Systems
Track 2a. Capturing Value and Scalability in Product-Service System Design
Track 2b. Service Design for Business Innovation for Industry 4.0
Theme 3) Policy Making
Track 3a. Creative Intersection of Policies and Design Management
Theme 4) Intersecting Perspective
Track 4a. Changing Design Practices: How We Design, What We Design, and Who Designs?
Track 4b. Challenges and Obstacles to the Enactment of an Outside-In Perspective: The Case of Design
Track 4c. At the Intersection: Social Innovation and Philosophy
Theme 5) Methods
Track 5a. Design practices of effective strategic design
Track 5b. Markets and Design: Vertical and Horizontal Product Differentiation
Track 5c. Foresight by Design: Dealing with uncertainty in Design Innovation
Track 5d. Contemporary Brand Design
Theme 6) Capabilities
Track 6a. Building New Capabilities in an Organization: A research methodology perspective
Track 6b. Exploring Design Management Learning: Innovate with 'user' oriented design and KM perspectives
Track 6c. Design teams in the pursuit of innovation
Track 6d. Designing the Designers: Future of Design Education
Theme 7) Foundations
Track 7a. Pioneering Design Thinkers

We would also like to thank the more than 150 expert reviewers who gave their valuable time to provide critical peer feedback. Their service on the International Board of Reviewers was invaluable, as good-quality peer reviews are a vital contribution to an international conference. Each reviewer scored papers on a scale of 0 to 10 and provided critical review comments.
Most papers were reviewed by two people, though some had three or even four reviewers, and in a very small number of cases only one review was submitted. Of the 140 submitted full papers, after the blind peer review process 66 (47%) were accepted, 49 (35%) were provisionally accepted pending major revisions, and 25 (19%) were rejected. In making the final decisions, the Review Committee first looked at all papers where the difference of opinion between reviewers was 4 points or greater and moderated the scores where necessary. The Review Committee then discussed all papers just under the general level of acceptance to determine outcomes, before finally looking at any exceptions. At the end of the review process, 103 (73%) submissions were accepted for presentation, of which 95 (68%) were included in the proceedings, and 38 (27%) were rejected. Seven accepted papers were presented at the conference as research in progress and were not included in the proceedings. The workshops provided another intersection between delegates and workshop facilitators. Altogether, 31 workshop proposals were submitted, and 17 (54%) were accepted by the International Workshop Organising Committee. We would like to thank the International Workshop Organising Committee members: Katinka Bergema, Nuša Fain, Oriana Haselwanter, Sylvia Xihui Liu, Ida Telalbasic and Sharon Prendeville, for providing their expertise. We would like to thank both keynote speakers, Professor Jeanne Liedtka and Mr Richard Kelly, who generously gave their time to share their insights with the conference delegates. Their generosity allowed us to offer bursaries to five emerging researchers to attend the conference. The bursary recipients were selected from close to 40 applicants.
The number of applicants indicates the need to set up funding schemes that allow emerging researchers to attend international events such as this conference. The PhD Seminar, which took place the day before the conference, was attended by over 100 delegates and chaired by Dr Sylvia Xihui Liu and Professor Jun Cai. Initially 40 submissions were received, of which 36 were presented at the event. The event culminated with a debate organised by the PhD students, who were inspired by the “Open Letter to the Design Community: Stand Up for Democracy” by Manzini and Margolin (2017). We are grateful to the debate organisers. The location of the conference in the Jockey Club Innovation Tower, designed by Zaha Hadid, at the Hong Kong Polytechnic University also presented delegates with the visible cultural intersections of a rapidly transitioning, interconnected global city moving from one political sphere of influence into another. The conference would not have happened without the solid work of the local organising team, led by Professor Cees de Bont and consisting of: Ms Rennie Kan, who took up the role of fixer; Mr Pierre Tam, who as Conference Secretary tirelessly worked to satisfy often-conflicting requirements; Ms Flora Chang, who checked and re-checked all delegates' registrations; Mr Rio Chan, the IT wizard; and Mr Jason Liu, who provided the visual direction for the conference. The Design Management Academy's international research conference was organised under the auspices of the Design Society's Design Management Special Interest Group (DeMSIG) and the Design Research Society's Design Innovation Management Special Interest Group (DIMSIG) in collaboration with: The Hong Kong Polytechnic University, Loughborough University, Tsinghua University, University of Strathclyde, Politecnico di Milano and Delft University of Technology.
The conference was the culmination of two years of planning, and planning for the 2019 conference commenced well before the 2017 conference programme schedule was finalised. It is hoped that the conference will act as a platform for building a diverse community of scholars interested in exploring and discussing design innovation practices.

Conference Proceedings of the Design Management Academy 2017 International Conference: Research Perspectives on Creative Intersections, 7–9 June 2017, Hong Kong, designmanagementacademy.com, Volume 4.
Editors: Erik Bohemia, Cees de Bont and Lisbeth Svengren Holm.
This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License. https://creativecommons.org/licenses/by-nc-sa/4.0/
Conference Proceedings of the Design Management Academy, ISSN 2514-8419 (Online), Design Management Academy.
ISBN 978-1-912294-11-4 (Volume 1); ISBN 978-1-912294-12-1 (Volume 2); ISBN 978-1-912294-13-8 (Volume 3); ISBN 978-1-912294-14-5 (Volume 4); ISBN 978-1-912294-15-2 (Volume 5).
Published as an imprint of the Design Research Society, Loughborough University, London, 3 Lesney Avenue, The Broadcast Centre, Here East, London, E15 2GZ, United Kingdom, and The Design Society, DMEM, University of Strathclyde, 75 Montrose Street, Glasgow, G1 1XJ, United Kingdom.
Design Research Society: email admin@designresearchsociety.org, website www.designresearchsociety.org. Founded in 1966, the Design Research Society (DRS) is a learned society committed to promoting and developing design research. It is the longest established, multi-disciplinary worldwide society for the design research community and aims to promote the study of and research into the process of designing in all its many fields.
Design Society: email contact@designsociety.org, website www.designsociety.org. The Design Society is an international non-governmental, non-profit-making organisation whose members share a common interest in design. It strives to contribute to a broad and established understanding of all aspects of design, and to promote the use of results and knowledge for the good of humanity. The Design Society is a charitable body, registered in Scotland, No: SC031694.
... Engeström's expansive view of learning is increasingly supported by other fields of research, such as the nascent field of cognitive ecology. Researchers in cognitive ecology are showing that human cognitive processing and learning are embodied and enacted, developing through goal-orientated action and interactions between the human organism and its environment (Hutchins, 2010; Rueschemeyer et al., 2009). It follows, they argue, that individual learning happens as part of the environment, not in isolation from it. ...
... However, the activation of a word's motor representation is also thought to be modulated by sentential context (Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009). It is a well-established fact that when reading a sentence, we integrate the meaning of all the words in the sentence to come to an interpretation, and that this happens online in an incremental manner (Altmann & Kamide, 1999; Kamide, Altmann, & Haywood, 2003). ...
... Cognitive scientists have long held an embodied view on cognition which assumes that symbols and abstract concepts become meaningful only when they refer to bodily experiences (e.g., Barsalou, 1999;Fischer & Zwaan, 2008;Glenberg, 1997;Pulvermüller, 2005;Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009). Over the last two decades, research on language comprehension has provided a large amount of evidence for the idea that conceptual knowledge is grounded through such sensory-motor referencing. ...
Article
Full-text available
Journal of Cognitive Psychology.
... Semantic theory postulates that words are represented in distributed "action-perception circuits" which link the representation of phonology and articulatory word features in core perisylvian language cortices with the representation of meaning in sensorimotor systems (Pulvermüller and Fadiga, 2010). The functional importance of these sensorimotor systems for the processing of action words is supported by a plethora of empirical work, which demonstrates a common neural substrate for movement and the understanding of action-related language (Fischer and Zwaan, 2008; Rueschemeyer et al., 2009). Among the strongest evidence is that relating deficits in action word processing to damage or disease of the motor system (Bak et al., 2001, 2006; Neininger and Pulvermüller, 2001, 2003; Bak and Hodges, 2004; Cotelli et al., 2006; Grossman et al., 2008; Bak and Chandran, 2012; Kemmerer et al., 2012). ...
Article
Full-text available
Autism spectrum conditions (ASC) are characterised by deficits in understanding and expressing emotions and are frequently accompanied by alexithymia, a difficulty in understanding and expressing emotion words. Words are differentially represented in the brain according to their semantic category and these difficulties in ASC predict reduced activation to emotion-related words in limbic structures crucial for affective processing. Semantic theories view ‘emotion actions’ as critical for learning the semantic relationship between a word and the emotion it describes, such that emotion words typically activate the cortical motor systems involved in expressing emotion actions such as facial expressions. As ASC are also characterised by motor deficits and atypical brain structure and function in these regions, motor structures would also be expected to show reduced activation during emotion-semantic processing. Here we used event-related fMRI to compare passive processing of emotion words in comparison to abstract verbs and animal names in typically-developing controls and individuals with ASC. Relatively reduced brain activation in ASC for emotion words, but not matched control words, was found in motor areas and cingulate cortex specifically. The degree of activation evoked by emotion words in the motor system was also associated with the extent of autistic traits as revealed by the Autism Spectrum Quotient. We suggest that hypoactivation of motor and limbic regions for emotion-word processing may underlie difficulties in processing emotional language in ASC. The role that sensorimotor systems and their connections might play in the affective and social-communication difficulties in ASC is discussed.
... Because knowledge about the form and meaning of a word are normally active together, such that neuronal connections between the respective neuronal circuits are strengthened, these meaning- and form-related circuits are joined together into one higher-order semantic network, to the degree that one circuit part typically does not activate without the other becoming active too. There is room for flexibility in this mechanism, especially if attentional resources are limited, overt motor action is being prepared for, or context puts a focus on grammatical processing (Angrilli, Dobel, Rockstroh, Stegagno, & Elbert, 2000; Chen, Davis, Pulvermüller, & Hauk, 2013; Hoenig, Sim, Bochev, Herrnberger, & Kiefer, 2008; Pulvermüller, Cook, & Hauk, 2012; Rueschemeyer, Lindemann, Van Elk, & Bekkering, 2009; van Elk, van Schie, Zwaan, & Bekkering, 2010). However, for typical passive tasks (reading, listening), action-related verbs activate semantic circuits involving motor and action schemas stored in motor and premotor cortex, and a wealth of neuroimaging and neuropsychological work indicates that this activation is functionally important for action word processing (D'Ausilio et al., 2009; Devlin & Watkins, 2007; Glenberg & Kaschak, 2002; Moseley et al., 2013; Pulvermüller et al., 2005; Shebani & Pulvermüller, 2011). ...
Article
Full-text available
Noun/verb dissociations in the literature defy interpretation due to the confound between lexical category and semantic meaning; nouns and verbs typically describe concrete objects and actions. Abstract words, pertaining to neither, are a critical test case: dissociations along lexical-grammatical lines would support models purporting lexical category as the principle governing brain organisation, whilst semantic models predict dissociation between concrete words but not abstract items. During fMRI scanning, participants read orthogonalised word categories of nouns and verbs, with or without concrete, sensorimotor meaning. Analysis of inferior frontal/insula, precentral and central areas revealed an interaction between lexical class and semantic factors with clear category differences between concrete nouns and verbs but not abstract ones. Though the brain stores the combinatorial and lexical-grammatical properties of words, our data show that topographical differences in brain activation, especially in the motor system and inferior frontal cortex, are driven by semantics and not by lexical class.
... Our results favour a view of embodied cognition in which semantic knowledge can be accessed by the processing of single action verbs and thereby recruits motor areas, strongly enough to be detected by MEG. Hence, this is in line with the strong claim of embodiment theories that sensorimotor activation occurs automatically, and possibly necessarily, during verb processing (Pulvermüller et al., 2005b; Boulenger et al., 2008; Rüschemeyer, Lindemann, van Elk, & Bekkering, 2009). This is also in accordance with neuropsychological findings (Bak et al., 2001; Fernandino et al., 2012; Herrera et al., 2012). ...
Article
The current study investigated sensorimotor involvement in the processing of verbs describing actions performed with the hands, feet, or no body part. Actual movements were used to identify neuromagnetic sources for hand and foot actions. These sources constrained the analysis of verb processing. While hand and foot sources picked up activation in all three verb conditions, peak amplitudes showed an interaction of source and verb condition at 200ms after word onset, thereby reflecting effector-specificity. Specifically, hand verbs elicited significantly higher peak amplitudes than foot verbs in hand sources. Our results are in line with theories of embodied cognition that assume an involvement of sensorimotor areas in early stages of lexico-semantic processing, even for single words without a semantic or motor task.
... Along this line, previous brain imaging studies showed that the motor resonance phenomenon, i.e., the tendency to tune our behavior to others' behavior as reflected by the activation of the mirror neuron system, may be influenced by ethnic and cultural in-group familiarity (e.g., [43, 44]). This evidence questions the idea that the mirror system is automatically activated in the presence of others in an imitative fashion; rather, it shows that the mirror system is modulated by the similarity between us and others, as well as by the context [45]. In our case the mirror neuron system might be activated to comprehend the other's action, but no automatic collaborative attitude was developed; rather, understanding the other's action might have helped to prepare actions aimed at delimiting his/her influence. ...
Article
Full-text available
We investigated whether and how comprehending sentences that describe a social context influences our motor behaviour. Our stimuli were sentences that referred to objects having different connotations (e.g., attractive/ugly vs. smooth/prickly) and that could be directed towards the self or towards "another person" target (e.g., "The object is ugly/smooth. Bring it to you/Give it to another person"). Participants judged whether each sentence was sensible or non-sensible by moving the mouse towards or away from their body. Mouse movements were analysed according to behavioral and kinematics parameters. In order to enhance the social meaning of the linguistic stimuli, participants performed the task either individually (Individual condition) or in a social setting, in co-presence with the experimenter. The experimenter could either act as a mere observer (Social condition) or as a confederate, interacting with participants in an off-line modality at the end of task execution (Joint condition). Results indicated that the different roles taken by the experimenter affected motor behaviour and are discussed within an embodied approach to language processing and joint actions.
... Embodied cognition will be used as the third example of biologically primary knowledge used in the service of acquiring biologically secondary knowledge. The theoretical framework of grounded or embodied cognition is based on the notion that cognitive processes develop from goal-directed interactions between organisms and their environment (Barsalou 1999, 2008; Glenberg 1997; Rueschemeyer et al. 2009). Embodied cognition assumes that cognitive processes are grounded in perception and action, rather than being reducible to the manipulation of abstract symbols (Barsalou 1999). ...
Article
Full-text available
Cognitive load theory is intended to provide instructional strategies derived from experimental, cognitive load effects. Each effect is based on our knowledge of human cognitive architecture, primarily the limited capacity and duration of human working memory. These limitations are ameliorated by changes in long-term memory associated with learning. Initially, cognitive load theory's view of human cognitive architecture was assumed to apply to all categories of information. Based on Geary's (Educational Psychologist, 43, 179-195, 2008; 2011) evolutionary account of educational psychology, this interpretation of human cognitive architecture requires amendment. Working memory limitations may be critical only when acquiring novel information based on culturally important knowledge that we have not specifically evolved to acquire. Cultural knowledge is known as biologically secondary information. Working memory limitations may have reduced significance when acquiring novel information that the human brain specifically has evolved to process, known as biologically primary information. If biologically primary information is less affected by working memory limitations than biologically secondary information, it may be advantageous to use primary information to assist in the acquisition of secondary information. In this article, we suggest that several cognitive load effects rely on biologically primary knowledge being used to facilitate the acquisition of biologically secondary knowledge. We indicate how incorporating an evolutionary view of human cognitive architecture can provide cognitive load researchers with novel perspectives of their findings and discuss some of the practical implications of this view.
... It is argued that motor activation is induced by action words, affordances of perceptual objects, and other perceptual events because motor activation is intrinsically linked to the processing of semantics and affordances (e.g., Fagioli, Ferlazzo, & Hommel, 2007; Fagioli, Hommel, & Schubotz, 2007; von Cramon 2002, 2003; Tipper et al., 2006; Tucker and Ellis 1998). In other words, motor activation is part of the meaning or concept, regardless of whether the meaning or concept is represented by a word(s), object(s), or actions of another individual (e.g., Pulvermüller 2005; Rueschemeyer, Lindemann, Van Elk, & Bekkering, 2009), which is consistent with modal theories of memory (e.g., Barsalou 2009; Barsalou et al., 2003). Our results are consistent with these findings and interpretations. ...
Article
In this article, we ask what serves as the "glue" that temporarily links information to form an event in an active observer. We examined whether forming a single action event in an active observer is contingent on the temporal presentation of the stimuli (hence, on the temporal availability of the action information associated with these stimuli), or on the learned temporal execution of the actions associated with the stimuli, or on both. A partial-repetition paradigm was used to assess the boundaries of an event for which the temporal properties of the stimuli (i.e., presented either simultaneously or temporally separate) and the intended execution of the actions associated with these stimuli (i.e., executed as one, temporally integrated, response or as two temporally separate responses) were manipulated. The results showed that the temporal features of action execution determined whether one or more events were constructed; the temporal presentation of the stimuli (and hence the availability of their associated actions) did not. This suggests that the action representation, or "task goal," served as the "glue" in forming an event in an active observer. These findings emphasize the importance of action planning in event construction in an active observer.
... On the other hand, if the body is merely a tool in this respect, manipulations with part(s) of the body may be equally effective. Another issue is the bi-directionality of these phenomena (Rueschemeyer et al., 2009; Miles et al., 2010a). Miles et al. (2010b) showed that direction of apparent motion in the form of dots appearing to move toward or away from the center of a display affected their mental time travel. ...
Article
Full-text available
Embodied cognition research has shown how actions or body positions may affect cognitive processes, such as autobiographical memory retrieval or judgments. The present study examined the role of body balance (to the left or the right) in participants on their attributions to political parties. Participants thought they stood upright on a Wii™ Balance Board, while they were actually slightly tilted to the left or the right. Participants then ascribed fairly general political statements to one of 10 political parties that are represented in the Dutch House of Representatives. Results showed a significant interaction of congruent leaning direction with left- or right-wing party attribution. When the same analyses were performed with the political parties being divided into affiliations to the right, center, and left based on participants’ personal opinions rather than a ruling classification, no effects were found. The study provides evidence that conceptual metaphors are activated by manipulating body balance implicitly. Moreover, people’s judgments may be colored by seemingly trivial circumstances such as standing slightly out of balance.
... It is now widely accepted that perception can influence action planning and execution (e.g., Blakemore and Frith, 2005; Rueschemeyer et al., 2009), in part by triggering motor-related representations of perceptual stimuli (Rizzolatti and Craighero, 2004; Rizzolatti and Luppino, 2001). Moreover, growing evidence clearly indicates that the influence of perception on action is mediated over the dorsal pathway (Goodale and Milner, 1992; Goodale and Westwood, 2004; Rizzolatti and Matelli, 2003), which includes the mirror neuron system (Rizzolatti and Craighero, 2004). ...
Article
Theories proposing a bidirectional influence between action and perception are well supported by behavioral findings. In contrast to the growing literature investigating the brain mechanisms by which perception influences action, there is a relative dearth of neural evidence documenting how action may influence perception. Here we show that action priming of apparent motion perception is associated with increased functional connectivity between dorsal cortical regions connecting vision with action. Participants manually rotated a joystick in a clockwise or counter-clockwise direction while viewing ambiguous apparent rotational motion. Actions influenced perception when the perceived direction of the ambiguous display was the same as the manual rotation. For comparison, participants also rotated the joystick while viewing non-ambiguous apparent motion and in the absence of apparent motion. In a final control condition, participants viewed ambiguous apparent motion without manual rotation. The influence of action on perception was accompanied by a significant increase in alpha- and beta-band event-related desynchronization (ERD) in contralateral primary motor cortex, superior parietal lobe, and middle occipital gyrus. Increased ERD across these areas was accompanied by an increase in gamma-band phase locking between primary motor, parietal, striate, and extrastriate regions. Similar patterns were not observed when action was compatible with perception but did not influence it. These data demonstrate that action influences perception by strengthening the interaction across a broad sensorimotor network for the putative purpose of integrating compatible action outcomes and sensory information into a single coherent percept.
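The gamma-band phase locking reported in the abstract above is typically quantified with a phase-locking value (PLV). The following sketch is purely illustrative and is not the authors' analysis pipeline; the function name and the synthetic signals are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value (PLV) between two equally sampled signals.

    PLV is the magnitude of the mean phase-difference vector:
    near 1.0 = constant phase lag across time, near 0 = random phase relation.
    """
    phase_x = np.angle(hilbert(x))  # instantaneous phase via the analytic signal
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two 40 Hz ("gamma band") oscillations with a fixed phase offset
t = np.linspace(0, 1, 1000, endpoint=False)
locked = phase_locking_value(np.sin(2 * np.pi * 40 * t),
                             np.sin(2 * np.pi * 40 * t + 0.7))

# Two independent noise series show a much weaker phase relation
rng = np.random.default_rng(0)
unlocked = phase_locking_value(rng.standard_normal(1000),
                               rng.standard_normal(1000))
```

With a constant phase offset, `locked` comes out close to 1, while `unlocked` is much smaller for independent noise; an increase in this kind of quantity between electrode or source pairs is what "increased gamma-band phase locking between regions" summarizes.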
... When one of these leads to an initial link between sensorimotor representations and conceptual representations, this link will by itself cause the production of an environment that confirms it. This can happen in the physical ecology, but also in language, which by itself goes beyond activation of amodal meaning and has an impact on modal representations (Pickering & Garrod, 2009; Rueschemeyer, Lindemann, Van Elk, & Bekkering, 2009; Zwaan, 2009). More powerful people, or people motivated to attain power, will strive for and link themselves in reality to larger stimuli. ...
Article
Adaptive action is the function of cognition. It is constrained by the properties of evolved brains and bodies. An embodied perspective on social psychology examines how biological constraints give expression to human function in socially situated contexts. Key contributions in social psychology have highlighted the interface between the body and cognition, but theoretical development in social psychology and embodiment research remain largely disconnected. The current special issue reflects on recent developments in embodiment research. Commentaries from complementary perspectives connect them to social psychological theorizing. The contributions focus on the situatedness of social cognition in concrete interactions, and the implementation of cognitive processes in modal instead of amodal representations. The proposed perspectives are highly compatible, suggesting that embodiment can serve as a unifying perspective for psychology. Copyright © 2009 John Wiley & Sons, Ltd.
... Despite the impressive amount of increasing evidence (for reviews, see Fischer & Zwaan, 2008; Barsalou, 2008), many issues are still open and will hopefully be solved in the next few years. One important issue concerns the role of the social dimension for cognition (Sebanz, Bekkering, & Knoblich, 2006; Rueschemeyer, Lindemann, van Elk, & Bekkering, 2009). Many behavioral and brain imaging studies (for a review, see Martin, 2007 ) have demonstrated that observing objects activates action potentialities. ...
Article
Full-text available
We investigated how the reach-to-grasp movement is influenced by the presence of another person (friend or non-friend), who was either invisible (behind) or located in different positions with respect to an object and to the agent, and by the perspective conveyed by linguistic pronouns ("I", "You"). The interaction between social relationship and relative position influenced the latency of both maximal fingers aperture and velocity peak, showing shorter latencies in the presence of a non-friend than in the presence of a friend. However, whereas the relative position of a non-friend did not affect the kinematics of the movement, the position of a friend mattered: latencies were significantly shorter with friends only in positions allowing them to easily reach for the object. Finally, the investigation of the overall reaching movement time showed an interaction between the speaker and the pronoun: participants reached the object more quickly when the other spoke, particularly if she used the "I" pronoun. This suggests that speaking, and particularly using the "I" pronoun, evokes a potential action. Implications of the results for embodied cognition are discussed.
... However, strong evidence points to the early emergence of these effects, as methods with high temporal resolution (electroencephalography and magnetoencephalography) have confirmed that semantic somatotopic activation occurs alongside other lexicosemantic processes within 200 ms of word presentation, thus ruling out post-comprehension interpretations (Pulvermüller and Shtyrov 2006; Hauk et al. 2008). Furthermore, the appearance of automatic interaction and interference (or "motor/semantic resonance", Rueschemeyer et al. 2009) between concurrent semantic-linguistic and motor tasks provides direct evidence that the motor and language systems of the brain exert causal effects on each other (Pulvermüller, Hauk, et al. 2005; Boulenger et al. 2006, 2008; Zwaan and Taylor 2006; Scorolli and Borghi 2007). Lesion evidence further underpins the crucial role frontocentral areas play in the processing of words with action-related meaning (Bak et al. 2001; Pulvermüller et al. 2010; Kemmerer et al. forthcoming). ...
Article
Full-text available
Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states.
Article
Full-text available
Designers are increasingly in demand in a range of contexts because of their ability to deal with complexity and develop innovative solutions. Educational practice, however, is not yet on par with the multi-disciplinary and multi-modal learning style of today’s students. Gamification offers the promise of an innovative approach to engage students and produce better learning outcomes. The challenges facing gamification designers parallel those of experience designers, chiefly in ensuring the solution is contextually and personally relevant to the user. Learning design thinking, however, is a complex social activity influenced by myriad contextual factors. Cultural historical activity theory, when coupled with design-based research, offers a theoretical foundation that allows gamification to be used for expansive education. The authors present The Four Orders of Gamification and a Gamification Design System that enables design educators to develop expansive curricula for tomorrow’s designers.
Thesis
Full-text available
Learning often feels like a long and difficult process, particularly when it brings to mind school and the difficulty everyone has experienced in staying motivated for one subject or another. Yet there are things we learn without instruction. For example, learning to speak one's mother tongue happens naturally, without conscious effort. Primary and secondary knowledge offer one way of distinguishing what is easy or difficult to learn. Primary knowledge is knowledge for which our cognitive mechanisms are thought to have evolved, allowing effortless, intuitive, and rapid acquisition, whereas secondary knowledge appeared only recently: it is knowledge for which we have not had time to evolve and whose acquisition is long and costly. Schools focus essentially on this second type of knowledge. Their challenge is to make this long and costly learning possible and, to that end, to sustain learners' motivation. One line of research builds on the fact that secondary knowledge is constructed on the basis of primary knowledge. Indeed, no one can teach a mother tongue "from scratch", whereas foreign-language learning builds on that first language. The present work explores the motivating, low-cost character of primary knowledge as a way of facilitating the learning of logic as secondary knowledge. By varying the framing of logic problems with cover stories tied to primary knowledge (e.g., food and animal characteristics) or secondary knowledge (e.g., grammar rules, mathematics), a first set of eight experiments highlighted the positive effects of primary knowledge, whether the content was familiar or not.
The results show that primary knowledge improves performance, emotional investment, and confidence in one's answers, and reduces perceived cognitive load. Secondary knowledge, by contrast, seems to undermine participants' motivation and to generate a sense of extraneous conflict. Moreover, presenting problems framed in primary knowledge first appears to reduce the deleterious effects of secondary knowledge presented afterwards and to have an overall positive impact. Three further experiments then put these results to the test in learning tasks, in order to propose an approach that fosters learners' engagement and learning. These findings suggest that research on learning would benefit from taking primary knowledge into account rather than neglecting it on the grounds that it is "already learned".
Article
A growing body of experimental work highlights the potential value of unstructured, interactive, or spontaneous motions, including gestures, dance, shifting body postures, physical object‐manipulation, drawing, etc. to favorably impact creative performance. However, despite these favorable findings, to our knowledge, no systematic review has been conducted to explore the totality of evidence for embodied activities in this arena. Thus, the objective of this paper was to systematically evaluate the potential effects of embodied experimental manipulations on traditionally assessed creativity outcomes. A systematic review was conducted utilizing PubMed, PsychInfo, Sports Discus, and Google Scholar databases. The 20 studies evaluated employed a variety of methodological approaches regarding study design, embodied manipulation, and selection of specific creativity outcomes. Despite these variations, embodied movement robustly enhanced creativity across nearly all studies (90%), with no studies showing a detrimental effect. Based on the evaluation of the studies reviewed, several common themes emerged. These included the relevance of symbolic metaphors and distributed embodied cognitions, selection of embodied modality, specific measurement considerations, as well as the importance for implementing true, inactive control conditions in embodied creativity research. This review expands on these findings and places them in the context of improving future embodied creativity research.
Article
Previous research has verified the benefits obtained when learners trace out worked examples with the index finger. Our study conducted two experiments to explore the reasons for this phenomenon and its generalizability. Experiment 1 compared the learning effects among tracing, non-tracing, and cueing methods. The cueing method was included to isolate the variable of drawing attention. Students employing the tracing method obtained higher transfer test scores compared to those employing the cueing and non-tracing methods, rating the far transfer test as easier. The tracing effect was verified to have a greater impact on learning than solely focusing learners’ attention. Experiment 2 compared the learning effects among three methods – tracing with the index finger, tracing with a computer mouse, and observing others tracing. Students who merely observed tracing obtained lower far transfer scores than those who traced either with their finger or with the mouse, rating the tests as more difficult. There was no significant difference between the other two groups. Tracing promotes learning more than merely observing tracing. The learning benefits of tracing partly stem from the fact that it is action-based and can be generalized to mouse tracing.
Conference Paper
Full-text available
Designers are increasingly in demand in a range of contexts because of their ability to deal with complexity and develop innovative solutions. Educational practice, however, is not yet on par with the multidisciplinary and multi-modal learning style of today's students. Gamification offers the promise of an innovative approach to engage students and produce better learning outcomes. The challenges facing gamification designers parallel those of experience designers, chiefly in ensuring the solution is contextually and personally relevant to the user. Learning design thinking, however, is a complex social activity influenced by myriad contextual factors. Cultural historical activity theory, when coupled with design-based research, offers a theoretical foundation that allows gamification to be used for expansive education. The authors present The Four Orders of Gamification and a Gamification Design System that enables design educators to develop expansive curricula for tomorrow's designers.
Article
Embodied theories claim that semantic representations are grounded in sensorimotor systems, but the contribution of sensorimotor brain areas to representing meaning is still controversial. One current debate is whether activity in sensorimotor areas during language comprehension is automatic. Numerous neuroimaging studies reveal activity in perception and action areas during semantic processing that is automatic and independent of context, but increasing findings show that the involvement of sensorimotor areas, and the connectivity between word-form areas and sensorimotor areas, can be modulated by contextual information. "Context Effects on Embodied Representation of Language Concepts" focuses on these findings and discusses the influences from word, phrase, and sentential contexts that emphasize either dominant or non-dominant conceptual features. It reviews the findings about contextual modulation and clarifies the invariant and flexible features of embodied lexical-semantic processing.
Article
Numerous philosophical theories of joint agency and its intentional structure have been developed in the past few decades. These theories have offered accounts of joint agency that appeal to higher-level states (such as goals, commitments, and intentions) that are “shared” in some way. These accounts have enhanced our understanding of joint agency, yet there are a number of lower-level cognitive phenomena involved in joint action that philosophers rarely acknowledge. In particular, empirical research in cognitive science has revealed that when individuals engage in a joint activity such as conversation or joint problem solving, they become aligned at multiple levels (e.g., behaviors, or cognitive states). We argue that this phenomenon of alignment is crucial to understanding joint actions and should be integrated with philosophical approaches. In this paper, we sketch a possible integration, and draw out its implications for understanding of joint agency and collective intentionality. The result is a process-based, dynamic account of joint action that integrates both low-level and high-level states, and seeks to capture the separate processes of how a joint action is initiated and sustained.
Article
Our capacity to use tools and objects is often considered one of the hallmarks of the human species. Many objects greatly extend our bodily capabilities to act in the physical world, such as when using a hammer or a saw. In addition, humans have the remarkable capability to use objects in a flexible fashion and to combine multiple objects in complex actions. We prepare coffee, cook dinner, and drive our car. In this review we propose that humans have developed declarative and procedural knowledge, i.e., action semantics, that enables us to use objects in a meaningful way. A state-of-the-art review of research on object use is provided, involving behavioral, developmental, neuropsychological, and neuroimaging studies. We show that research in each of these domains is characterized by similar discussions regarding (1) the role of object affordances, (2) the relation between goals and means in object use, and (3) the functional and neural organization of action semantics. We propose a novel conceptual framework of action semantics to address these issues and to integrate the previous findings. We argue that action semantics entails both multimodal object representations and modality-specific subsystems, involving manipulation knowledge, functional knowledge, and representations of the sensory and proprioceptive consequences of object use. Furthermore, we argue that action semantics are hierarchically organized and selectively activated and used depending on the action intention of the actor and the current task context. Our framework presents an integrative account of multiple findings and perspectives on object use that may guide future studies in this interdisciplinary domain.
Article
One of the central insights of the embodied cognition (EC) movement is that cognition is closely tied to action. In this paper, I formulate an EC-inspired hypothesis concerning social cognition. In this domain, most think that our capacity to understand and interact with one another is best explained by appeal to some form of mindreading. I argue that prominent accounts of mindreading likely contain a significant lacuna. Evidence indicates that what I call an agent’s actional processes and states—her goals, needs, intentions, desires, and so on—likely play important roles in and for mindreading processes. If so, a full understanding of mindreading processes and their role in cognition more broadly will require an understanding of how actional mental processes interact with, influence, or take part in mindreading processes.
Article
The processing of abstract units of information that are not related to, and grounded in, real-world events in straightforward and theoretically well-understood ways has led to a growing dissatisfaction with traditional cognitivistic approaches. A promising alternative is the embodied-cognition approach, which construes cognition and cognitive representations as emerging from, and as being grounded in, perceptual, affective, and action-related states and processes (see Pecher & Zwaan, 2005). Ideally, the meaning of a perceived or produced event can be reduced entirely to the sensorimotor (and affective) states and processes directly involved in its perception or production, so that cognitive representations lose their explanatory overhead and become mere summaries of, or pointers to, well-understood sensorimotor component processes—as in the Theory of Event Coding (TEC; Hommel, in press a; Hommel, Müsseler, Aschersleben, & Prinz, 2001). Rueschemeyer, Lindemann, van Elk, and Bekkering (2009; henceforth RLvEB) attempt to apply an embodied-cognition approach to the interface between language and action, and they put forward two major claims: that the new concept of "semantic resonance" is needed to understand how language and action control interact, and that a dedicated cognitive control mechanism is needed to regulate this interaction and tailor it to the situation at hand. I strongly sympathize with the general approach defended by RLvEB, because the embodied-cognition approach is healthy in forcing us (more than traditional cognitivistic approaches) to think about how mind, brain, and body interact and how our cognitions relate to our physical and social environment, and because relating nonverbal perception and action to verbal perception and action is likely to be very productive both theoretically and empirically.
At the same time, however, I have doubts whether the concrete suggestions RLvEB make really advance our understanding of embodied cognition in general and of the relationship between language and action in particular. In fact, I believe that their approach actually represents a significant setback on the way to a comprehensive theory of embodied cognition. As I will explain in the following, this is because their approach increases, rather than decreases, the gap between cognition and the sensorimotor processes that, according to the embodied-cognition perspective, should represent its basis and substrate. It thus effectively disembodies cognition and, as I will also explain, it does so without any need, that is, in the face of obvious theoretical alternatives that fit perfectly with the notion of embodied cognition. For the sake of the argument, let us take an extreme alternative and assume that the semantics of human perception and …
Article
Human memory is an imperfect process, prone to distortions and errors that range from minor disturbances to major failures with serious consequences for everyday life. In this study, we investigated false remembering of manipulatory verbs using an explicit recognition task and pupillometry. Our results replicated the "classical" pupil old/new effect, as well as findings in the false-remembering literature showing that items must be recognized as old in order for pupil size to increase (the "subjective" pupil old/new effect), even though these items do not necessarily have to be truly old. These findings support the strength-of-memory-trace account, which holds that pupil dilation is related to experience rather than to the accuracy of recognition. Moreover, behavioral results showed higher rates of true and false recognitions for manipulatory verbs and a consequently larger pupil diameter, supporting the embodied view of language.
Article
Full-text available
The commentators have raised many pertinent points that allow us to refine and clarify our view. We classify our response comments into seven sections: automaticity; developmental and educational questions; priming; multiple representations or multiple access(?); terminology; methodological advances; and simulated cognition and numerical cognition. We conclude that the default numerical representations are not abstract.
Article
This research investigated whether action semantic knowledge influences mental simulation during sentence comprehension. In Experiment 1, we confirmed that words for face-related objects include perceptual knowledge about the actions that bring the object to the face. In Experiment 2, we used an acceptability judgment task and a word-picture verification task to compare the perceptual information activated by the comprehension of sentences describing an action using face-related objects near the face (near-sentences) or far from the face (far-sentences). Results showed that participants took longer to judge the acceptability of the far-sentences than the near-sentences. Verification times were significantly faster when the actions in the pictures matched the action described in the sentences than when they mismatched. These findings suggest that action semantic knowledge influences sentence processing, and that perceptual information corresponding to the content of the sentence is activated at the end of sentence processing regardless of action semantic knowledge.
Article
Full-text available
Two experiments investigated whether the triadic interaction between objects, ourselves and other persons modulates motor system activation during language comprehension. Participants were faced with sentences formed by a descriptive part referring to a positive or negative emotively connoted object and an action part composed of an imperative verb implying a motion toward the self or toward other persons (e.g., "The object is attractive/ugly. Bring it toward you/Give it to another person/Give it to a friend"). Participants judged whether each sentence was sensible or not by moving the mouse toward or away from their body. Findings showed that the simulation of a social context influenced both (1) the motor system and (2) the coding of stimulus valence. Implications of the results for theories of embodied and social cognition are discussed.
Article
This study investigates the influence of activating specific motor codes on the comprehension of passages that describe the use of an object requiring similar motor manipulations. In three experiments, participants either imagined or pantomimed performing an action involving a common object. Participants then held the action in memory while reading a brief story, which described another object that required similar or different motor behaviors. Reading times were collected on the complementary actions. Finally, participants acted out the original action. In Experiments 1 and 2, reading slowed at the verbs. Experiment 2 revealed the slowing to be true interference, which disappeared in Experiment 3 when the action did not need to be recalled. The results suggest that readers activate motor codes when reading story actions, which supports an embodied view. The results also indicate that codes bound to a held action will, at least briefly, impair reading about a complementary action requiring the same codes, consistent with Hommel's (2009) theory of event coding.
Article
Full-text available
This study examined the developing object knowledge of infants through their visual anticipation of action targets during action observation. Infants (6, 8, 12, 14, and 16 months) and adults watched short movies of a person using 3 different everyday objects. Participants were presented with objects being brought either to a correct or to an incorrect target location (e.g., cup to mouth, phone to ear vs. cup to ear, brush to mouth). When observing the action sequences, infants as well as adults showed anticipatory fixations to the target areas of the displayed actions. For all infant age-groups, there were differences in anticipation frequency between functional and nonfunctional object-target combinations. Adults exhibited no effect of object-target combination, possibly because they quickly learned and flexibly anticipated the target area of observed actions, even when they watched objects being brought to incorrect target areas. Infants, however, had difficulties anticipating to incorrect target locations for familiar objects. Together, these findings suggest that by 6 months of age, infants have acquired solid knowledge about objects and the actions associated with them.
Article
The neural realization of number in abstract form is implausible, but from this it doesn't follow that numbers are not abstract. Clear definitions of abstraction are needed so they can be applied homogeneously to numerical and non-numerical cognition. To achieve a better understanding of the neural substrate of abstraction, productive cognition--not just comprehension and perception--must be investigated.
Article
Full-text available
Abstraction is instrumental for our understanding of how numbers are cognitively represented. We propose that the notion of abstraction becomes testable from within the framework of simulated cognition. We describe mental simulation as embodied, grounded, and situated cognition, and report evidence for number representation at each of these levels of abstraction.
Article
Full-text available
Single-neuron recordings may help resolve the issue of abstract number representation in the parietal lobes. Two manipulations in particular - reversible inactivation and adaptation of apparent numerosity - could provide important insights into the causal influence of "numeron" activity. Taken together, these tests can significantly advance our understanding of number processing in the brain.
Article
Full-text available
A dual-code model of number processing needs to take into account the difference between a number symbol and its meaning. The transition of automatic non-abstract number representations into intentional abstract representations could be conceptualized as a translation of perceptual asemantic representations of numerals into semantic representations of the associated magnitude information. The controversy about the nature of number representations should be thus related to theories on embodied grounding of symbols.
Article
We delineate a developmental model of number representations. Notably, developmental dyscalculia (DD) is rarely associated with an all-or-none deficit in numerosity processing as would be expected if assuming abstract number representations. Finally, we suggest that the "generalist genes" view might be a plausible--though thus far speculative--explanatory framework for our model of how number representations develop.
Article
Much evidence cited by Cohen Kadosh & Walsh (CK&W) in support of their notation-specific representation hypothesis is based on tasks requiring automatic number processing. Several of these findings can be alternatively explained by differential expertise in mapping numerical symbols onto semantic magnitude representations. The importance of considering symbol-referent mapping expertise in theories on numerical representations is highlighted.
Article
We contrapose computational models using representations of numbers in parietal cortical activity patterns (abstract or not) with dynamic models, whereby prefrontal cortex (PFC) orchestrates neural operators. The neural operators under PFC control are activity patterns that mobilize synaptic matrices formed by learning into textured oscillations we observe through the electroencephalogram from the scalp (EEG) and the electrocorticogram from the cortical surface (ECoG). We postulate that specialized operators produce symbolic representations existing only outside of brains.
Article
Cohen Kadosh & Walsh (CK&W) present convincing evidence indicating the existence of notation-specific numerical representations in parietal cortex. We suggest that the same conclusions can be drawn for a particular type of numerical representation: the representation of time. Notation-dependent representations need not be limited to number but may also be extended to other magnitude-related contents processed in parietal cortex (Walsh 2003).
Article
Full-text available
The dual-code proposal of number representation put forward by Cohen Kadosh & Walsh (CK&W) accounts for only a fraction of the many modes of numerical abstraction. Contrary to their proposal, robust data from human infants and nonhuman animals indicate that abstract numerical representations are psychologically primitive. Additionally, much of the behavioral and neural data cited to support CK&W's proposal is, in fact, neutral on the issue of numerical abstraction.
Article
Full-text available
In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise- or a counterclockwise direction. Action execution had to be delayed until the appearance of a visual go signal, which induced an apparent rotational motion in either a clockwise- or a counterclockwise direction. Stimulus detection was faster when the direction of the induced apparent motion was consistent with the direction of the concurrently intended manual object rotation. Responses to action-consistent motions were also faster when the participants prepared the manipulation actions but signaled their stimulus detections with another motor effector (i.e., with a foot response). Taken together, the present study demonstrates a motor-visual priming effect of prepared object manipulations on visual motion perception, indicating a bidirectional functional link between action and perception beyond object-related visuomotor associations.
Article
Observing and producing a smile activate the very same facial muscles. In Experiment 1, we predicted and found that verbal stimuli (action verbs) that refer to emotional expressions elicit the same facial muscle activity (facial electromyography) as visual stimuli do. These results are evidence that language referring to facial muscular activity is not amodal, as traditionally assumed, but is instead bodily grounded. These findings were extended in Experiment 2, in which subliminally presented verbal stimuli were shown to drive muscle activation and to shape judgments, but not when muscle activation was blocked. These experiments provide an important bridge between research on the neurobiological basis of language and related behavioral research. The implications of these findings for theories of language and other domains of cognitive psychology (e.g., priming) are discussed.
Article
Full-text available
The close integration between visual and motor processes suggests that some visuomotor transformations may proceed automatically and to an extent that permits observable effects on subsequent actions. A series of experiments investigated the effects of visual objects on motor responses during a categorisation task. In Experiment 1 participants responded according to an object's natural or manufactured category. The responses consisted in uni-manual precision or power grasps that could be compatible or incompatible with the viewed object. The data indicate that object grasp compatibility significantly affected participant response times and that this did not depend upon the object being viewed within the reaching space. The time course of this effect was investigated in Experiments 2-4b by using a go-nogo paradigm with responses cued by tones and go-nogo trials cued by object category. The compatibility effect was not present under advance response cueing and rapidly diminished following object extinction. A final experiment established that the compatibility effect did not depend on a within-hand response choice, but was at least as great with bi-manual responses where a full power grasp could be used. Distributional analyses suggest that the effect is not subject to rapid decay but increases linearly with RT whilst the object remains visible. The data are consistent with the view that components of the actions an object affords are integral to its representation.
Article
Full-text available
Two experiments were performed to explore a possible visuomotor priming effect. The participants were instructed to fixate a cross on a computer screen and to respond, when the cross changed colour ("go" signal), by grasping one of two objects with their right hand. The participants knew in advance the nature of the to-be-grasped object and the appropriate motor response. Before (100 msec), simultaneously with or after (100 msec) the "go" signal, a two-dimensional picture of an object (the prime), centred around the fixation cross, was presented. The prime was not predictive of the nature of the to-be-grasped object. There was a congruent condition, in which the prime depicted the to-be-grasped object, an incongruent condition, in which the prime depicted the other object, and a neutral condition, in which either no prime was shown or the prime depicted an object that did not belong to the set of to-be-grasped objects. It was found that, in the congruent condition, reaction time for initiating a grasping movement was reduced. These results provide evidence of visuomotor priming.
Chapter
Full-text available
The purpose of [this handbook] is to provide a comprehensive review of all the significant conceptualizations related to [social psychology] principles in a given domain, as well as a review of the research supporting (or failing to support) each conceptualization. Each chapter attempts to fulfill a "compare and contrast" function in relation to particular principles. (PsycINFO Database Record (c) 2002 APA, all rights reserved)
Article
Full-text available
Based on the conceptualization of approach as a decrease in distance and avoidance as an increase in distance, we predicted that stimuli with positive valence facilitate behavior for either approaching the stimulus (object as reference point) or for bringing the stimulus closer (self as reference point) and that stimuli with negative valence facilitate behavior for withdrawing from the stimulus or for pushing the stimulus away. In Study 1, we found that motions to and from a computer screen where positive and negative words were presented lead to compatibility effects indicative of an object-related frame of reference. In Study 2, we replicated this finding using social stimuli with different evaluative associations (young vs. old persons). Finally, we present evidence that self vs. object reference points can be induced through instruction and thus lead to opposite compatibility effects even when participants make the same objective motion (Study 3).
Article
Full-text available
Embodied theories of language processing suggest that motor simulation is an automatic and necessary component of meaning representation. If this is the case, then language and action systems should be mutually dependent (i.e., motor activity should selectively modulate processing of words with an action-semantic component). In this paper, we investigate in two experiments whether evidence for mutual dependence can be found using a motor priming paradigm. Specifically, participants performed either an intentional or a passive motor task while processing words denoting manipulable and nonmanipulable objects. The performance rates (Experiment 1) and response latencies (Experiment 2) in a lexical-decision task reveal that participants performing an intentional action were positively affected in the processing of words denoting manipulable objects as compared to nonmanipulable objects. This was not the case if participants performed a secondary passive motor action (Experiment 1) or did not perform a secondary motor task (Experiment 2). The results go beyond previous research showing that language processes involve motor systems to demonstrate that the execution of motor actions has a selective effect on the semantic processing of words. We suggest that intentional actions activate specific parts of the neural motor system, which are also engaged for lexical-semantic processing of action-related words, and discuss the beneficial versus inhibitory nature of this relationship. The results provide new insights into the embodiment of language and the bidirectionality of effects between language and action processing.
Article
Full-text available
We investigated the hypothesis that people's facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1's results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1's findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.
Article
Full-text available
Previous research has shown that trait concepts and stereotypes become active automatically in the presence of relevant behavior or stereotyped-group features. Using the same priming procedures as in previous impression-formation research, Experiment 1 showed that participants whose concept of rudeness was primed interrupted the experimenter more quickly and frequently than did participants primed with polite-related stimuli. In Experiment 2, participants for whom an elderly stereotype was primed walked more slowly down the hallway when leaving the experiment than did control participants, consistent with the content of that stereotype. In Experiment 3, participants for whom the African American stereotype was primed subliminally reacted with more hostility to a vexatious request of the experimenter. Implications of this automatic behavior priming effect for self-fulfilling prophecies are discussed, as is whether social behavior is necessarily mediated by conscious choice processes.
Article
Full-text available
This contribution is devoted to the question of whether action-control processes may be demonstrated to influence perception. This influence is predicted from a framework in which stimulus processing and action control are assumed to share common codes, thus possibly interfering with each other. In 5 experiments, a paradigm was used that required a motor action during the presentation of a stimulus. The participants were presented with masked right- or left-pointing arrows shortly before executing an already prepared left or right keypress response. We found that the identification probability of the arrow was reduced when the to-be-executed reaction was compatible with the presented arrow. For example, the perception of a right-pointing arrow was impaired when presented during the execution of a right response as compared with that of a left response. The theoretical implications of this finding as well as its relation to other, seemingly similar phenomena (repetition blindness, inhibition of return, psychological refractory period) are discussed.
Article
Full-text available
Five experiments investigated whether preparation of a grasping movement affects detection and discrimination of visual stimuli. Normal human participants were required to prepare to grasp a bar and then to grasp it as fast as possible on presentation of a visual stimulus. On the basis of the degree of sharing of their intrinsic properties with those of the to-be-grasped bar, visual stimuli were categorized as "congruent" or "incongruent." Results showed that grasping reaction times to congruent visual stimuli were faster than reaction times to incongruent ones. These data indicate that preparation to act on an object produces faster processing of stimuli congruent with that object. The same facilitation was present also when, after the preparation of hand grasping, participants were suddenly instructed to inhibit the prepared grasping movement and to respond with a different motor effector. The authors suggest that these findings could represent an extension of the premotor theory of attention, from orienting of attention to spatial locations to orienting of attention to graspable objects.
Article
Full-text available
This study tested the idea of habits as a form of goal-directed automatic behavior. Expanding on the idea that habits are mentally represented as associations between goals and actions, it was proposed that goals are capable of activating the habitual action. More specifically, when habits are established (e.g., frequent cycling to the university), the very activation of the goal to act (e.g., having to attend lectures at the university) automatically evokes the habitual response (e.g., taking the bicycle). Indeed, it was tested and confirmed that, when behavior is habitual, behavioral responses are activated automatically. In addition, the results of 3 experiments indicated that (a) the automaticity in habits is conditional on the presence of an active goal (cf. goal-dependent automaticity; J. A. Bargh, 1989), supporting the idea that habits are mentally represented as goal-action links, and (b) the formation of implementation intentions (i.e., the creation of a strong mental link between a goal and action) may simulate goal-directed automaticity in habits.
Article
Full-text available
Research has illustrated dissociations between "cognitive" and "action" systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, but a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In all, without semantic processing, the visuomotor system can direct the effective grasp of an object, but not in a manner that is appropriate for its use.
Article
Full-text available
The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed.
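The bias-signal idea summarized in this abstract can be caricatured in a few lines of code: a goal representation maintained in prefrontal cortex adds top-down input to the weaker but task-relevant pathway so that it wins the competition with a stronger, prepotent pathway. This is a minimal sketch under a Stroop-like scenario; the weights, bias value, and function names are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of the bias-signal account of cognitive control: a PFC
# "context" unit biases competition between processing pathways.
# All numbers here are illustrative assumptions, not fitted parameters.

def pathway_activation(input_strength, pathway_weight, control_bias):
    """Net input to a response unit: stimulus drive scaled by pathway
    strength, plus top-down bias from a maintained goal representation."""
    return input_strength * pathway_weight + control_bias

def stroop_response(task_bias_color):
    # Word reading is the stronger (more practiced) pathway: weight 0.8.
    word_drive = pathway_activation(1.0, 0.8, 0.0)
    # Color naming is weaker (weight 0.3) but receives the control bias.
    color_drive = pathway_activation(1.0, 0.3, task_bias_color)
    return "color" if color_drive > word_drive else "word"

# Without control, the prepotent word-reading pathway wins; with a
# sufficient top-down bias, the task-relevant color pathway wins instead.
print(stroop_response(task_bias_color=0.0))  # word
print(stroop_response(task_bias_color=0.6))  # color
```

The point of the sketch is only that control need not suppress anything directly: a sustained bias on the relevant pathway is enough to reroute the flow of activity.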
Article
Full-text available
It is proposed that goals can be activated outside of awareness and then operate nonconsciously to guide self-regulation effectively (J. A. Bargh, 1990). Five experiments are reported in which the goal either to perform well or to cooperate was activated, without the awareness of participants, through a priming manipulation. In Experiment 1 priming of the goal to perform well caused participants to perform comparatively better on an intellectual task. In Experiment 2 priming of the goal to cooperate caused participants to replenish a commonly held resource more readily. Experiment 3 used a dissociation paradigm to rule out perceptual-construal alternative explanations. Experiments 4 and 5 demonstrated that action guided by nonconsciously activated goals manifests two classic content-free features of the pursuit of consciously held goals. Nonconsciously activated goals effectively guide action, enabling adaptation to ongoing situational demands.
Article
Full-text available
Traditional approaches to human information processing tend to deal with perception and action planning in isolation, so that an adequate account of the perception-action interface is still missing. On the perceptual side, the dominant cognitive view largely underestimates, and thus fails to account for, the impact of action-related processes on both the processing of perceptual information and on perceptual learning. On the action side, most approaches conceive of action planning as a mere continuation of stimulus processing, thus failing to account for the goal-directedness of even the simplest reaction in an experimental task. We propose a new framework for a more adequate theoretical treatment of perception and action planning, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference. Perceived events (perceptions) and to-be-produced events (actions) are equally represented by integrated, task-tuned networks of feature codes--cognitive structures we call event codes. We give an overview of evidence from a wide variety of empirical domains, such as spatial stimulus-response compatibility, sensorimotor synchronization, and ideomotor action, showing that our main assumptions are well supported by the data.
Article
Full-text available
We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., "Close the drawer" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentence types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as "Liz told you the story." These data are inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.
Article
Full-text available
Behavioural, neuropsychological and functional imaging studies suggest possible interactions between number processing and finger representation. Since grasping requires the object size to be estimated in order to determine the appropriate hand shaping, coding number magnitude and grasping may share common processes. In the present study, participants performed either a grip closure or opening depending on the parity of a visually presented digit. Electromyographic recordings revealed that grip closure was initiated faster in response to small digit presentation whereas grip opening was initiated faster in response to large digits. This result was interpreted in reference to a recent theory which proposed that physical and numerical quantities are represented by a generalized magnitude system dedicated to action.
Article
Full-text available
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we investigated whether the effect is capacity demanding; therefore, we manipulated the set-size of the display. The results indicated a clear cognitive processing capacity requirement, i.e. the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. Therefore we manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features rather than to enhance the processing of the relevant feature. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.
Article
Full-text available
Observing actions made by others activates the cortical circuits responsible for the planning and execution of those same actions. This observation–execution matching system (mirror-neuron system) is thought to play an important role in the understanding of actions made by others. In an fMRI experiment, we tested whether this system also becomes active during the processing of action-related sentences. Participants listened to sentences describing actions performed with the mouth, the hand, or the leg. Abstract sentences of comparable syntactic structure were used as control stimuli. The results showed that listening to action-related sentences activates a left fronto-parieto-temporal network that includes the pars opercularis of the inferior frontal gyrus (Broca's area), those sectors of the premotor cortex where the actions described are motorically coded, as well as the inferior parietal lobule, the intraparietal sulcus, and the posterior middle temporal gyrus. These data provide the first direct evidence that listening to sentences that describe actions engages the visuomotor circuits which subserve action execution and observation.
Article
Full-text available
The lateral prefrontal cortex has been implicated in a wide variety of functions that guide our behavior, and one such candidate function is selection. Selection mechanisms have been described in several domains spanning different stages of processing, from visual attention to response execution. Here, we consider two such mechanisms: selecting relevant information from the perceptual world (e.g., visual selective attention) and selecting relevant information from conceptual representations (e.g., selecting a specific attribute about an object from long-term memory). Although the mechanisms involved in visual selective attention have been well characterized, much less is known about the latter case of selection. In this article, we review the relevant literature from the attention domain as a springboard to understanding the mechanisms involved in conceptual selection.
Article
Full-text available
The brain basis of action words may be neuron ensembles binding language- and action-related information that are dispersed over both language- and action-related cortical areas. This predicts fast spreading of neuronal activity from language areas to specific sensorimotor areas when action words semantically related to different parts of the body are being perceived. To test this, fast neurophysiological imaging was applied to reveal spatiotemporal activity patterns elicited by words with different action-related meaning. Spoken words referring to actions involving the face or leg were presented while subjects engaged in a distraction task and their brain activity was recorded using high-density magnetoencephalography. Shortly after the words could be recognized as unique lexical items, objective source localization using minimum norm current estimates revealed activation in superior temporal (130 msec) and inferior frontocentral areas (142-146 msec). Face-word stimuli activated inferior frontocentral areas more strongly than leg words, whereas the reverse was found at superior central sites (170 msec), thus reflecting the cortical somatotopy of motor actions signified by the words. Significant correlations were found between local source strengths in the frontocentral cortex calculated for all participants and their semantic ratings of the stimulus words, thus further establishing a close relationship between word meaning access and neurophysiology. These results show that meaning access in action word recognition is an early automatic process reflected by spatiotemporal signatures of word-evoked activity. Word-related distributed neuronal assemblies with specific cortical topographies can explain the observed spatiotemporal dynamics reflecting word meaning access.
Article
Full-text available
Neurophysiological observations suggest that attending to a particular perceptual dimension, such as location or shape, engages dimension-related action, such as reaching and prehension networks. Here we reversed the perspective and hypothesized that activating action systems may prime the processing of stimuli defined on perceptual dimensions related to these actions. Subjects prepared for a reaching or grasping action and, before carrying it out, were presented with location- or size-defined stimulus events. As predicted, performance on the stimulus event varied with action preparation: planning a reaching action facilitated detecting deviants in location sequences whereas planning a grasping action facilitated detecting deviants in size sequences. These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.
Article
Full-text available
Observing actions and understanding sentences about actions activates corresponding motor processes in the observer-comprehender. In 5 experiments, the authors addressed 2 novel questions regarding language-based motor resonance. The 1st question asks whether visual motion that is associated with an action produces motor resonance in sentence comprehension. The 2nd question asks whether motor resonance is modulated during sentence comprehension. The authors' experiments provide an affirmative response to both questions. A rotating visual stimulus affects both actual manual rotation and the comprehension of manual rotation sentences. Motor resonance is modulated by the linguistic input and is a rather immediate and localized phenomenon. The results are discussed in the context of theories of action observation and mental simulation.
Article
Full-text available
Some words immediately and automatically remind us of odours, smells and scents, whereas other language items do not evoke such associations. This study investigated, for the first time, the abstract linking of linguistic and odour information using modern neuroimaging techniques (functional MRI). Subjects passively read odour-related words ('garlic', 'cinnamon', 'jasmine') and neutral language items. The odour-related terms elicited activation in the primary olfactory cortex, which include the piriform cortex and the amygdala. Our results suggest the activation of widely distributed cortical cell assemblies in the processing of olfactory words. These distributed neuron populations extend into language areas but also reach some parts of the olfactory system. These distributed neural systems may be the basis of the processing of language elements, their related conceptual and semantic information and the associated sensory information.
Article
Full-text available
Four experiments investigated activation of semantic information in action preparation. Participants either prepared to grasp and use an object (e.g., to drink from a cup) or to lift a finger in association with the object's position following a go/no-go lexical-decision task. Word stimuli were consistent with the action goals of the object use (Experiment 1) or of the finger lifting (Experiment 2). Movement onset times yielded a double dissociation of consistency effects between action preparation and word processing. This effect was also present for semantic categorizations (Experiment 3), but disappeared when a letter identification task was introduced (Experiment 4). In sum, our findings indicate that action semantics are activated selectively in accordance with the specific action intention of an actor.
Article
Full-text available
When a person views an object, the action the object evokes appears to be activated independently of the person's intention to act. We demonstrate two further properties of this vision-to-action process. First, it is not completely automatic, but is determined by the stimulus properties of the object that are attended. Thus, when a person discriminates the shape of an object, action affordance effects are observed; but when a person discriminates an object's color, no affordance effects are observed. The former, shape property is associated with action, such as how an object might be grasped; the latter, color property is irrelevant to action. Second, we also show that the action state of an object influences evoked action. Thus, active objects, with which current action is implied, produce larger affordance effects than passive objects, with which no action is implied. We suggest that the active object activates action simulation processes similar to those proposed in mirror systems.
Article
Full-text available
The interaction between language and action systems has become an increasingly interesting topic of discussion in cognitive neuroscience. Several recent studies have shown that processing of action verbs elicits activation in the cerebral motor system in a somatotopic manner. The current study extends these findings to show that the brain responses for processing of verbs with specific motor meanings differ not only from that of other motor verbs, but, crucially, that the comprehension of verbs with motor meanings (i.e., greifen, to grasp) differs fundamentally from the processing of verbs with abstract meanings (i.e., denken, to think). Second, the current study investigated the neural correlates of processing morphologically complex verbs with abstract meanings built on stems with motor versus abstract meanings (i.e., begreifen, to comprehend vs. bedenken, to consider). Although residual effects of motor stem meaning might have been expected, we see no evidence for this in our data. Processing of morphologically complex verbs built on motor stems showed no differences in involvement of the motor system when compared with processing complex verbs with abstract stems. Complex verbs built on motor stems did show increased activation compared with complex verbs built on abstract stems in the right posterior temporal cortex. This result is discussed in light of the involvement of the right temporal cortex in comprehension of metaphoric or figurative language.
Article
Full-text available
Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.
Article
Full-text available
To investigate the functional connection between numerical cognition and action planning, the authors required participants to perform different grasping responses depending on the parity status of Arabic digits. The results show that precision grip actions were initiated faster in response to small numbers, whereas power grips were initiated faster in response to large numbers. Moreover, analyses of the grasping kinematics reveal an enlarged maximum grip aperture in the presence of large numbers. Reaction time effects remained present when controlling for the number of fingers used while grasping but disappeared when participants pointed to the object. The data indicate a priming of size-related motor features by numerals and support the idea that representations of numbers and actions share common cognitive codes within a generalized magnitude system.
Article
This study tested the idea of habits as a form of goal-directed automatic behavior. Expanding on the idea that habits are mentally represented as associations between goals and actions, it was proposed that goals are capable of activating the habitual action. More specifically, when habits are established (e.g., frequent cycling to the university), the very activation of the goal to act (e.g., having to attend lectures at the university) automatically evokes the habitual response (e.g., bicycle). Indeed, it was tested and confirmed that, when behavior is habitual, behavioral responses are activated automatically. In addition, the results of 3 experiments indicated that (a) the automaticity in habits is conditional on the presence of an active goal (cf. goal-dependent automaticity; J. A. Bargh, 1989), supporting the idea that habits are mentally represented as goal-action links, and (b) the formation of implementation intentions (i.e., the creation of a strong mental link between a goal and action) may simulate goal-directed automaticity in habits.
Article
The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target. Target selection processes prior to the first saccadic eye movement were modulated by the different action intentions. Specifically, fewer saccades to objects with the wrong orientation were made in the grasping condition than in the pointing condition, whereas the number of saccades to an object with the wrong color was the same in the two conditions. Saccadic latencies were similar under the different task conditions, so the results cannot be explained by a speed-accuracy trade-off. The results suggest that a specific action intention, such as grasping, can enhance visual processing of action-relevant features, such as orientation. Together, the findings support the view that visual attention can be best understood as a selection-for-action mechanism.
Article
Words denoting manipulable objects activate sensorimotor brain areas, likely reflecting action experience with the denoted objects. In particular, these sensorimotor lexical representations have been found to reflect the way in which an object is used. In the current paper we present data from two experiments (one behavioral and one neuroimaging) in which we investigate whether body schema information, putatively necessary for interacting with functional objects, is also recruited during lexical processing. To this end, we presented participants with words denoting objects that are typically brought towards or away from the body (e.g., cup or key, respectively). We hypothesized that objects typically brought to a location on the body (e.g., cup) are relatively more reliant on body schema representations, since the final goal location of the cup (i.e., the mouth) is represented primarily through posture and body co-ordinates. In contrast, objects typically brought to a location away from the body (e.g., key) are relatively more dependent on visuo-spatial representations, since the final goal location of the key (i.e., a keyhole) is perceived visually. The behavioral study showed that prior planning of a movement along an axis towards and away from the body facilitates processing of words with a congruent action semantic feature (i.e., preparation of a movement towards the body facilitates processing of cup). In an fMRI study we showed that words denoting objects brought towards the body engage the resources of brain areas involved in processing information about human bodies (i.e., the extra-striate body area, middle occipital gyrus and inferior parietal lobe) relatively more than words denoting objects typically brought away from the body. The results provide converging evidence that body schemas are implicitly activated in processing lexical information.
Article
Actions are part of the way that the mind controls the body. Two fundamental psychological questions about actions are 'Where do they come from?' and 'How does the mind produce them?' These may be called the 'internal generation problem' and the 'information expansion problem' respectively. The importance of these questions was appreciated at the birth of the British Psychological Society (BPS) a century ago, though the experimental methods to study them were lacking. This article falls into two halves. The first half discusses some of the major epochs in the psychology of action over the last 100 years; the second half outlines some currently prominent research questions, and considers their historical antecedents. Finally, I offer some speculations regarding where future contributions to the psychology of action will be most fruitful.
Article
The semantic meaning of a word label printed on an object can have significant effects on the kinematics of reaching and grasping movements directed towards that object. Here, we examined how the semantics of word labels might differentially affect the planning and control stages of grasping. Subjects were presented with objects on which were printed either the word "LARGE" or "SMALL." When the grip aperture in the two conditions was compared, an effect of the words was found early in the reach, but this effect declined continuously as the hand approached the target. This continuously decreasing effect is consistent with a planning/control model of action, in which cognitive and perceptual variables affect how actions are planned but not how they are monitored and controlled on-line. The functional and neurological bases of semantic effects on planning and control are discussed.
Article
Research into the perception of space, time and quantity has generated three separate literatures. That number can be represented spatially is, of course, well accepted and forms a basis for research into spatial aspects of numerical processing. Links between number and time or between space and time, on the other hand, are rarely discussed and the shared properties of all three systems have not been considered. I propose here that time, space and quantity are part of a generalized magnitude system. I outline A Theory Of Magnitude (ATOM) as a conceptually new framework within which to re-interpret the cortical processing of these elements of the environment.
Article
It has been suggested that the processing of action words referring to leg, arm, and face movements (e.g., to kick, to pick, to lick) leads to distinct patterns of neurophysiological activity. We addressed this issue using multi-channel EEG and beam-former estimates of distributed current sources within the head. The categories of leg-, arm-, and face-related words were carefully matched for important psycholinguistic factors, including word frequency, imageability, valence, and arousal, and evaluated in a behavioral study for their semantic associations. EEG was recorded from 64 scalp electrodes while stimuli were presented visually in a reading task. We applied a linear beam-former technique to obtain optimal estimates of the sources underlying the word-evoked potentials. These suggested differential activation in frontal areas of the cortex, including primary motor, pre-motor, and pre-frontal sites. Leg words activated dorsal fronto-parietal areas more strongly than face- or arm-related words, whereas face-words produced more activity at left inferior-frontal sites. In the right hemisphere, arm-words activated lateral-frontal areas. We interpret the findings in the framework of a neurobiological model of language and discuss the possible role of mirror neurons in the premotor cortex in language processing.
Article
Transcranial magnetic stimulation (TMS) was applied to motor areas in the left language-dominant hemisphere while right-handed human subjects made lexical decisions on words related to actions. Response times to words referring to leg actions (e.g. kick) were compared with those to words referring to movements involving the arms and hands (e.g. pick). TMS of hand and leg areas influenced the processing of arm and leg words differentially, as documented by a significant interaction of the factors Stimulation site and Word category. Arm area TMS led to faster arm than leg word responses and the reverse effect, faster lexical decisions on leg than arm words, was present when TMS was applied to leg areas. TMS-related differences between word categories were not seen in control conditions, when TMS was applied to hand and leg areas in the right hemisphere and during sham stimulation. Our results show that the left hemispheric cortical systems for language and action are linked to each other in a category-specific manner and that activation in motor and premotor areas can influence the processing of specific kinds of words semantically related to arm or leg actions. By demonstrating specific functional links between action and language systems during lexical processing, these results call into question modular theories of language and motor functions and provide evidence that the two systems interact in the processing of meaningful information about language and action.
Article
Do approach-avoidance actions create attitudes? Prior influential studies suggested that rudimentary attitudes could be established by simply pairing novel stimuli (Chinese ideographs) with arm flexion (approach) or arm extension (avoidance). In three experiments, we found that approach-avoidance actions alone were insufficient to account for such effects. Instead, we found that these affective influences resulted from the interaction of these actions with a priori differences in stimulus valence. Thus, with negative stimuli, the effect of extension on attitude was more positive than the effect of flexion. Experiment 2 demonstrated that the affect from motivationally compatible or incompatible action can also influence task evaluations. A final experiment, using Chinese ideographs from the original studies, confirmed these findings. Both approach and avoidance actions led to more positive evaluations of the ideographs when the actions were motivationally compatible with the prior valence of the ideographs. The attitudinal impact of approach-avoidance action thus reflects its situated meaning, which depends on the valence of stimuli being approached or avoided.
Article
Three studies demonstrate that stereotypic movements activate the corresponding stereotype. In Study 1, participants who were unobtrusively induced to move in the portly manner that is stereotypic of overweight people subsequently ascribed more overweight-stereotypic characteristics to an ambiguous target person than did control participants. In Study 2, participants who were unobtrusively induced to move in the slow manner that is stereotypic of elderly people subsequently ascribed more elderly-stereotypic characteristics to a target than did control participants. In Study 3, participants who were induced to move slowly were faster than control participants to respond to elderly-stereotypic words in a lexical decision task. Using three different movement inductions, two different stereotypes, and two classic measures of stereotype activation, these studies converge in demonstrating that stereotypes may be activated by stereotypic movements.
Article
There is a considerable body of neuropsychological and neuroimaging evidence supporting the distinction between the brain correlates of noun and verb processing. It is however still not clear whether the observed differences are imputable to grammatical or semantic factors. Beyond the basic difference that verbs typically refer to actions and nouns typically refer to objects, other semantic distinctions might play a role as organizing principles within and across word classes. One possible candidate is the notion of manipulation and manipulability, which may modulate the word class dissociation. We used functional magnetic resonance imaging (fMRI) to study the impact of semantic reference and word class on brain activity during a picture naming task. Participants named pictures of objects and actions that did or did not involve manipulation. We observed extensive differences in activation associated with the manipulation dimension. In the case of manipulable items, for both nouns and verbs, there were significant activations within a fronto-parietal system subserving hand action representation. However, we found no significant effect of word class when all verbs were compared to all nouns. These results highlight the impact of the biologically crucial sensorimotor dimension of manipulability on the pattern of brain activity associated with picture naming.
Article
Evidence from functional neuroimaging of the human brain indicates that information about salient properties of an object-such as what it looks like, how it moves, and how it is used-is stored in sensory and motor systems active when that information was acquired. As a result, object concepts belonging to different categories like animals and tools are represented in partially distinct, sensory- and motor property-based neural networks. This suggests that object concepts are not explicitly represented, but rather emerge from weighted activity within property-based brain regions. However, some property-based regions seem to show a categorical organization, thus providing evidence consistent with category-based, domain-specific formulations as well.
Article
A direct relationship between perception and action implies bi-directionality, and predicts not only effects of perception on action but also effects of action on perception. Modern theories of social cognition have intensively examined the relation from perception to action and propose that mirroring the observed actions of others underlies action understanding. Here, we suggest that this view is incomplete, as it neglects the perspective of the actor. We will review empirical evidence showing the effects of self-generated action on perceptual judgments. We propose that producing action might prime perception in a way that observers are selectively sensitive to related or similar actions of conspecifics. Therefore, perceptual resonance, not motor resonance, might be decisive for grounding sympathy and empathy and, thus, successful social interactions.
Article
The online influence of movement production on motion perception was investigated. Participants were asked to move one of their hands in a certain direction while monitoring an independent stimulus motion. The stimulus motion unpredictably deviated in a direction that was either compatible or incompatible with the concurrent movement. Participants' task was to make a speeded response as soon as they detected the deviation. A reversed compatibility effect was obtained: Reaction times were slower under compatible conditions - that is, when motion deviations and movements went in the same direction. This reversal of a commonly observed facilitatory effect can be attributed to the concurrent nature of the perception-action task and to the fact that what was produced was functionally unrelated to what was perceived. Moreover, by employing an online measure, it was possible to minimize the contribution of short-term