Article
Full-text available
The close integration between visual and motor processes suggests that some visuomotor transformations may proceed automatically and to an extent that permits observable effects on subsequent actions. A series of experiments investigated the effects of visual objects on motor responses during a categorisation task. In Experiment 1 participants responded according to an object's natural or manufactured category. The responses consisted of uni-manual precision or power grasps that could be compatible or incompatible with the viewed object. The data indicate that object grasp compatibility significantly affected participant response times and that this did not depend upon the object being viewed within the reaching space. The time course of this effect was investigated in Experiments 2-4b by using a go-nogo paradigm with responses cued by tones and go-nogo trials cued by object category. The compatibility effect was not present under advance response cueing and rapidly diminished following object extinction. A final experiment established that the compatibility effect did not depend on a within-hand response choice, but was at least as great with bi-manual responses where a full power grasp could be used. Distributional analyses suggest that the effect is not subject to rapid decay but increases linearly with RT whilst the object remains visible. The data are consistent with the view that components of the actions an object affords are integral to its representation.
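The distributional analysis described above is commonly implemented as a delta plot: RTs in each condition are vincentized into quantile bins and the compatibility effect is traced across bins. The following is a minimal sketch of that technique in Python; the DataFrame columns (rt, compatible) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch of a delta-plot (distributional) analysis of a
# compatibility effect; column names are hypothetical assumptions.
import numpy as np
import pandas as pd

def delta_plot(trials: pd.DataFrame, n_bins: int = 5) -> pd.DataFrame:
    """Vincentize each condition's RTs into quantile bins and return
    the compatibility effect (incompatible - compatible) per bin."""
    quantiles = np.linspace(0, 1, n_bins + 1)

    def bin_means(rts: pd.Series) -> np.ndarray:
        edges = rts.quantile(quantiles).to_numpy()
        bins = pd.cut(rts, bins=edges, include_lowest=True)
        return rts.groupby(bins, observed=True).mean().to_numpy()

    comp = bin_means(trials.loc[trials["compatible"], "rt"])
    incomp = bin_means(trials.loc[~trials["compatible"], "rt"])
    return pd.DataFrame({
        "mean_rt": (comp + incomp) / 2,   # x-axis of the delta plot
        "effect": incomp - comp,          # y-axis: effect size per bin
    })

# An effect that increases linearly with RT, as reported above, shows
# up as a positively sloped delta plot rather than one decaying to zero.
```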
Article
Full-text available
Based on the conceptualization of approach as a decrease in distance and avoidance as an increase in distance, we predicted that stimuli with positive valence facilitate behavior for either approaching the stimulus (object as reference point) or for bringing the stimulus closer (self as reference point) and that stimuli with negative valence facilitate behavior for withdrawing from the stimulus or for pushing the stimulus away. In Study 1, we found that motions to and from a computer screen where positive and negative words were presented led to compatibility effects indicative of an object-related frame of reference. In Study 2, we replicated this finding using social stimuli with different evaluative associations (young vs. old persons). Finally, we present evidence that self vs. object reference points can be induced through instruction and thus lead to opposite compatibility effects even when participants make the same objective motion (Study 3).
Article
Full-text available
Embodied theories of language processing suggest that motor simulation is an automatic and necessary component of meaning representation. If this is the case, then language and action systems should be mutually dependent (i.e., motor activity should selectively modulate processing of words with an action-semantic component). In this paper, we investigate in two experiments whether evidence for mutual dependence can be found using a motor priming paradigm. Specifically, participants performed either an intentional or a passive motor task while processing words denoting manipulable and nonmanipulable objects. The performance rates (Experiment 1) and response latencies (Experiment 2) in a lexical-decision task reveal that participants performing an intentional action were positively affected in the processing of words denoting manipulable objects as compared to nonmanipulable objects. This was not the case if participants performed a secondary passive motor action (Experiment 1) or did not perform a secondary motor task (Experiment 2). The results go beyond previous research showing that language processes involve motor systems to demonstrate that the execution of motor actions has a selective effect on the semantic processing of words. We suggest that intentional actions activate specific parts of the neural motor system, which are also engaged for lexical-semantic processing of action-related words and discuss the beneficial versus inhibitory nature of this relationship. The results provide new insights into the embodiment of language and the bidirectionality of effects between language and action processing.
Article
Full-text available
Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain.
Article
Full-text available
We investigated the hypothesis that people's facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1's results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1's findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.
Article
Full-text available
Previous research has shown that trait concepts and stereotypes become active automatically in the presence of relevant behavior or stereotyped-group features. Through the use of the same priming procedures as in previous impression formation research, Experiment 1 showed that participants whose concept of rudeness was primed interrupted the experimenter more quickly and frequently than did participants primed with polite-related stimuli. In Experiment 2, participants for whom an elderly stereotype was primed walked more slowly down the hallway when leaving the experiment than did control participants, consistent with the content of that stereotype. In Experiment 3, participants for whom the African American stereotype was primed subliminally reacted with more hostility to a vexatious request of the experimenter. Implications of this automatic behavior priming effect for self-fulfilling prophecies are discussed, as is whether social behavior is necessarily mediated by conscious choice processes.
Article
Full-text available
This contribution is devoted to the question of whether action-control processes may be demonstrated to influence perception. This influence is predicted from a framework in which stimulus processing and action control are assumed to share common codes, thus possibly interfering with each other. In 5 experiments, a paradigm was used that required a motor action during the presentation of a stimulus. The participants were presented with masked right- or left-pointing arrows shortly before executing an already prepared left or right keypress response. We found that the identification probability of the arrow was reduced when the to-be-executed reaction was compatible with the presented arrow. For example, the perception of a right-pointing arrow was impaired when presented during the execution of a right response as compared with that of a left response. The theoretical implications of this finding as well as its relation to other, seemingly similar phenomena (repetition blindness, inhibition of return, psychological refractory period) are discussed.
Article
Full-text available
Five experiments investigated whether preparation of a grasping movement affects detection and discrimination of visual stimuli. Normal human participants were required to prepare to grasp a bar and then to grasp it as fast as possible on presentation of a visual stimulus. On the basis of the degree of sharing of their intrinsic properties with those of the to-be-grasped bar, visual stimuli were categorized as "congruent" or "incongruent." Results showed that grasping reaction times to congruent visual stimuli were faster than reaction times to incongruent ones. These data indicate that preparation to act on an object produces faster processing of stimuli congruent with that object. The same facilitation was present also when, after the preparation of hand grasping, participants were suddenly instructed to inhibit the prepared grasping movement and to respond with a different motor effector. The authors suggest that these findings could represent an extension of the premotor theory of attention, from orienting of attention to spatial locations to orienting of attention to graspable objects.
Article
Full-text available
This study tested the idea of habits as a form of goal-directed automatic behavior. Expanding on the idea that habits are mentally represented as associations between goals and actions, it was proposed that goals are capable of activating the habitual action. More specifically, when habits are established (e.g., frequent cycling to the university), the very activation of the goal to act (e.g., having to attend lectures at the university) automatically evokes the habitual response (e.g., bicycle). Indeed, it was tested and confirmed that, when behavior is habitual, behavioral responses are activated automatically. In addition, the results of 3 experiments indicated that (a) the automaticity in habits is conditional on the presence of an active goal (cf. goal-dependent automaticity; J. A. Bargh, 1989), supporting the idea that habits are mentally represented as goal-action links, and (b) the formation of implementation intentions (i.e., the creation of a strong mental link between a goal and action) may simulate goal-directed automaticity in habits.
Article
Full-text available
Research has illustrated dissociations between "cognitive" and "action" systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, but a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In all, without semantic processing, the visuomotor system can direct the effective grasp of an object, but not in a manner that is appropriate for its use.
Article
Full-text available
The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed.
Article
Full-text available
It is proposed that goals can be activated outside of awareness and then operate nonconsciously to guide self-regulation effectively (J. A. Bargh, 1990). Five experiments are reported in which the goal either to perform well or to cooperate was activated, without the awareness of participants, through a priming manipulation. In Experiment 1 priming of the goal to perform well caused participants to perform comparatively better on an intellectual task. In Experiment 2 priming of the goal to cooperate caused participants to replenish a commonly held resource more readily. Experiment 3 used a dissociation paradigm to rule out perceptual-construal alternative explanations. Experiments 4 and 5 demonstrated that action guided by nonconsciously activated goals manifests two classic content-free features of the pursuit of consciously held goals. Nonconsciously activated goals effectively guide action, enabling adaptation to ongoing situational demands.
Article
Full-text available
Traditional approaches to human information processing tend to deal with perception and action planning in isolation, so that an adequate account of the perception-action interface is still missing. On the perceptual side, the dominant cognitive view largely underestimates, and thus fails to account for, the impact of action-related processes on both the processing of perceptual information and on perceptual learning. On the action side, most approaches conceive of action planning as a mere continuation of stimulus processing, thus failing to account for the goal-directedness of even the simplest reaction in an experimental task. We propose a new framework for a more adequate theoretical treatment of perception and action planning, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference. Perceived events (perceptions) and to-be-produced events (actions) are equally represented by integrated, task-tuned networks of feature codes--cognitive structures we call event codes. We give an overview of evidence from a wide variety of empirical domains, such as spatial stimulus-response compatibility, sensorimotor synchronization, and ideomotor action, showing that our main assumptions are well supported by the data.
Article
Full-text available
We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., "Close the drawer" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentence types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as "Liz told you the story." These data are inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.
Article
Full-text available
Behavioural, neuropsychological and functional imaging studies suggest possible interactions between number processing and finger representation. Since grasping requires the object size to be estimated in order to determine the appropriate hand shaping, coding number magnitude and grasping may share common processes. In the present study, participants performed either a grip closure or opening depending on the parity of a visually presented digit. Electromyographic recordings revealed that grip closure was initiated faster in response to small digits, whereas grip opening was initiated faster in response to large digits. This result was interpreted in reference to a recent theory which proposed that physical and numerical quantities are represented by a generalized magnitude system dedicated to action.
Article
Full-text available
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect demands processing capacity, and therefore manipulated the set size of the display. The results indicated a clear processing-capacity requirement: the magnitude of the effect decreased at the larger set size. In Experiment 2, we then investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, by manipulating the discriminability of the behaviorally neutral feature (color). Results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation suggests that the action effect is more likely to bias the competition between different visual features than to enhance processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.
Article
Full-text available
Observing actions made by others activates the cortical circuits responsible for the planning and execution of those same actions. This observation–execution matching system (mirror-neuron system) is thought to play an important role in the understanding of actions made by others. In an fMRI experiment, we tested whether this system also becomes active during the processing of action-related sentences. Participants listened to sentences describing actions performed with the mouth, the hand, or the leg. Abstract sentences of comparable syntactic structure were used as control stimuli. The results showed that listening to action-related sentences activates a left fronto-parieto-temporal network that includes the pars opercularis of the inferior frontal gyrus (Broca's area), those sectors of the premotor cortex where the actions described are motorically coded, as well as the inferior parietal lobule, the intraparietal sulcus, and the posterior middle temporal gyrus. These data provide the first direct evidence that listening to sentences that describe actions engages the visuomotor circuits which subserve action execution and observation.
Article
Full-text available
The lateral prefrontal cortex has been implicated in a wide variety of functions that guide our behavior, and one such candidate function is selection. Selection mechanisms have been described in several domains spanning different stages of processing, from visual attention to response execution. Here, we consider two such mechanisms: selecting relevant information from the perceptual world (e.g., visual selective attention) and selecting relevant information from conceptual representations (e.g., selecting a specific attribute about an object from long-term memory). Although the mechanisms involved in visual selective attention have been well characterized, much less is known about the latter case of selection. In this article, we review the relevant literature from the attention domain as a springboard to understanding the mechanisms involved in conceptual selection.
Article
Full-text available
The brain basis of action words may be neuron ensembles binding language- and action-related information that are dispersed over both language- and action-related cortical areas. This predicts fast spreading of neuronal activity from language areas to specific sensorimotor areas when action words semantically related to different parts of the body are being perceived. To test this, fast neurophysiological imaging was applied to reveal spatiotemporal activity patterns elicited by words with different action-related meaning. Spoken words referring to actions involving the face or leg were presented while subjects engaged in a distraction task and their brain activity was recorded using high-density magnetoencephalography. Shortly after the words could be recognized as unique lexical items, objective source localization using minimum norm current estimates revealed activation in superior temporal (130 msec) and inferior frontocentral areas (142-146 msec). Face-word stimuli activated inferior frontocentral areas more strongly than leg words, whereas the reverse was found at superior central sites (170 msec), thus reflecting the cortical somatotopy of motor actions signified by the words. Significant correlations were found between local source strengths in the frontocentral cortex calculated for all participants and their semantic ratings of the stimulus words, thus further establishing a close relationship between word meaning access and neurophysiology. These results show that meaning access in action word recognition is an early automatic process reflected by spatiotemporal signatures of word-evoked activity. Word-related distributed neuronal assemblies with specific cortical topographies can explain the observed spatiotemporal dynamics reflecting word meaning access.
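For readers who want to see what minimum norm current estimation looks like in practice, the open-source MNE-Python library implements the technique. The sketch below is generic and assumes placeholder file names; it illustrates the method, not the authors' actual pipeline.

```python
# Generic sketch of minimum-norm source estimation with MNE-Python;
# file names are placeholders, not the authors' actual data.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("word_evoked-ave.fif", condition=0)  # word-evoked field
fwd = mne.read_forward_solution("subject-fwd.fif")             # forward model
noise_cov = mne.read_cov("noise-cov.fif")                      # pre-stimulus noise

# L2 minimum-norm inverse operator yields distributed current estimates
inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")

# Source strength in a window around early word recognition (~130-170 ms)
early = stc.copy().crop(tmin=0.13, tmax=0.17)
print(early.data.max())
```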
Article
Full-text available
Neurophysiological observations suggest that attending to a particular perceptual dimension, such as location or shape, engages dimension-related action, such as reaching and prehension networks. Here we reversed the perspective and hypothesized that activating action systems may prime the processing of stimuli defined on perceptual dimensions related to these actions. Subjects prepared for a reaching or grasping action and, before carrying it out, were presented with location- or size-defined stimulus events. As predicted, performance on the stimulus event varied with action preparation: planning a reaching action facilitated detecting deviants in location sequences whereas planning a grasping action facilitated detecting deviants in size sequences. These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.
Article
Full-text available
Observing actions and understanding sentences about actions activates corresponding motor processes in the observer-comprehender. In 5 experiments, the authors addressed 2 novel questions regarding language-based motor resonance. The 1st question asks whether visual motion that is associated with an action produces motor resonance in sentence comprehension. The 2nd question asks whether motor resonance is modulated during sentence comprehension. The authors' experiments provide an affirmative response to both questions. A rotating visual stimulus affects both actual manual rotation and the comprehension of manual rotation sentences. Motor resonance is modulated by the linguistic input and is a rather immediate and localized phenomenon. The results are discussed in the context of theories of action observation and mental simulation.
Article
Full-text available
Some words immediately and automatically remind us of odours, smells and scents, whereas other language items do not evoke such associations. This study investigated, for the first time, the abstract linking of linguistic and odour information using modern neuroimaging techniques (functional MRI). Subjects passively read odour-related words ('garlic', 'cinnamon', 'jasmine') and neutral language items. The odour-related terms elicited activation in the primary olfactory cortex, which include the piriform cortex and the amygdala. Our results suggest the activation of widely distributed cortical cell assemblies in the processing of olfactory words. These distributed neuron populations extend into language areas but also reach some parts of the olfactory system. These distributed neural systems may be the basis of the processing of language elements, their related conceptual and semantic information and the associated sensory information.
Article
Full-text available
Four experiments investigated activation of semantic information in action preparation. Participants either prepared to grasp and use an object (e.g., to drink from a cup) or to lift a finger in association with the object's position following a go/no-go lexical-decision task. Word stimuli were consistent with the action goals of the object use (Experiment 1) or with the finger lifting (Experiment 2). Movement onset times yielded a double dissociation of consistency effects between action preparation and word processing. This effect was also present for semantic categorizations (Experiment 3), but disappeared when introducing a letter identification task (Experiment 4). In sum, our findings indicate that action semantics are activated selectively in accordance with the specific action intention of an actor.
Article
Full-text available
When a person views an object, the action the object evokes appears to be activated independently of the person's intention to act. We demonstrate two further properties of this vision-to-action process. First, it is not completely automatic, but is determined by the stimulus properties of the object that are attended. Thus, when a person discriminates the shape of an object, action affordance effects are observed; but when a person discriminates an object's color, no affordance effects are observed. The former property, shape, is associated with action, such as how an object might be grasped; the latter property, color, is irrelevant to action. Second, we also show that the action state of an object influences evoked action. Thus, active objects, with which current action is implied, produce larger affordance effects than passive objects, with which no action is implied. We suggest that the active object activates action simulation processes similar to those proposed in mirror systems.
Article
Full-text available
The interaction between language and action systems has become an increasingly interesting topic of discussion in cognitive neuroscience. Several recent studies have shown that processing of action verbs elicits activation in the cerebral motor system in a somatotopic manner. The current study extends these findings to show that the brain responses for processing of verbs with specific motor meanings differ not only from those of other motor verbs, but, crucially, that the comprehension of verbs with motor meanings (i.e., greifen, to grasp) differs fundamentally from the processing of verbs with abstract meanings (i.e., denken, to think). Second, the current study investigated the neural correlates of processing morphologically complex verbs with abstract meanings built on stems with motor versus abstract meanings (i.e., begreifen, to comprehend vs. bedenken, to consider). Although residual effects of motor stem meaning might have been expected, we see no evidence for this in our data. Processing of morphologically complex verbs built on motor stems showed no differences in involvement of the motor system when compared with processing complex verbs with abstract stems. Complex verbs built on motor stems did show increased activation compared with complex verbs built on abstract stems in the right posterior temporal cortex. This result is discussed in light of the involvement of the right temporal cortex in comprehension of metaphoric or figurative language.
Article
Full-text available
To investigate the functional connection between numerical cognition and action planning, the authors required participants to perform different grasping responses depending on the parity status of Arabic digits. The results show that precision grip actions were initiated faster in response to small numbers, whereas power grips were initiated faster in response to large numbers. Moreover, analyses of the grasping kinematics reveal an enlarged maximum grip aperture in the presence of large numbers. Reaction time effects remained present when controlling for the number of fingers used while grasping but disappeared when participants pointed to the object. The data indicate a priming of size-related motor features by numerals and support the idea that representations of numbers and actions share common cognitive codes within a generalized magnitude system.
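In analysis terms, the reported pattern is a grip-type by number-magnitude interaction on initiation times. A minimal sketch of how such an interaction could be tested is given below; the file and column names are hypothetical assumptions, not the study's materials.

```python
# Sketch of testing the grip-type x number-magnitude interaction on
# initiation times; file and column names are hypothetical assumptions.
import pandas as pd
from scipy import stats

trials = pd.read_csv("grasp_trials.csv")  # columns: subject, grip, magnitude, rt

# Congruent pairings per the reported pattern: precision+small, power+large
congruent = ((trials["grip"] == "precision") & (trials["magnitude"] == "small")) | \
            ((trials["grip"] == "power") & (trials["magnitude"] == "large"))
trials["congruent"] = congruent

# Per-subject RT advantage of congruent over incongruent pairings
per_subj = trials.groupby(["subject", "congruent"])["rt"].mean().unstack()
effect = per_subj[False] - per_subj[True]   # incongruent minus congruent

# One-sample t-test against zero: is the interaction reliable across subjects?
t, p = stats.ttest_1samp(effect, popmean=0.0)
print(f"congruency effect = {effect.mean():.1f} ms, t = {t:.2f}, p = {p:.3f}")
```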
Article
Full-text available
We report two experiments in which production of articulated hand gestures was used to reveal the nature of gestural knowledge evoked by sentences referring to manipulable objects. Two gesture types were examined: functional gestures (executed when using an object for its intended purpose) and volumetric gestures (used when picking up an object simply to move it). Participants read aloud a sentence that referred to an object but did not mention any form of manual interaction (e.g., Jane forgot the calculator) and were cued after a delay of 300 or 750 ms to produce the functional or volumetric gesture associated with the object, or a gesture that was unrelated to the object. At both cue delays, functional gestures were primed relative to unrelated gestures, but no significant priming was found for volumetric gestures. Our findings elucidate the types of motor representations that are directly linked to the meaning of words referring to manipulable objects in sentences.
Article
Words denoting manipulable objects activate sensorimotor brain areas, likely reflecting action experience with the denoted objects. In particular, these sensorimotor lexical representations have been found to reflect the way in which an object is used. In the current paper we present data from two experiments (one behavioral and one neuroimaging) in which we investigate whether body schema information, putatively necessary for interacting with functional objects, is also recruited during lexical processing. To this end, we presented participants with words denoting objects that are typically brought towards or away from the body (e.g., cup or key, respectively). We hypothesized that objects typically brought to a location on the body (e.g., cup) are relatively more reliant on body schema representations, since the final goal location of the cup (i.e., the mouth) is represented primarily through posture and body co-ordinates. In contrast, objects typically brought to a location away from the body (e.g., key) are relatively more dependent on visuo-spatial representations, since the final goal location of the key (i.e., a keyhole) is perceived visually. The behavioral study showed that prior planning of a movement along an axis towards and away from the body facilitates processing of words with a congruent action semantic feature (i.e., preparation of movement towards the body facilitates processing of cup). In an fMRI study we showed that words denoting objects brought towards the body engage the resources of brain areas involved in processing information about human bodies (i.e., the extra-striate body area, middle occipital gyrus and inferior parietal lobe) relatively more than words denoting objects typically brought away from the body. The results provide converging evidence that the body schema is implicitly activated in processing lexical information.
Article
The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target. Target selection processes prior to the first saccadic eye movement were modulated by the different action intentions. Specifically, fewer saccades to objects with the wrong orientation were made in the grasping condition than in the pointing condition, whereas the number of saccades to an object with the wrong color was the same in the two conditions. Saccadic latencies were similar under the different task conditions, so the results cannot be explained by a speed-accuracy trade-off. The results suggest that a specific action intention, such as grasping, can enhance visual processing of action-relevant features, such as orientation. Together the findings support the view that visual attention can be best understood as a selection-for-action mechanism.
Article
The semantic meaning of a word label printed on an object can have significant effects on the kinematics of reaching and grasping movements directed towards that object. Here, we examined how the semantics of word labels might differentially affect the planning and control stages of grasping. Subjects were presented with objects on which were printed either the word "LARGE" or "SMALL." When the grip aperture in the two conditions was compared, an effect of the words was found early in the reach, but this effect declined continuously as the hand approached the target. This continuously decreasing effect is consistent with a planning/control model of action, in which cognitive and perceptual variables affect how actions are planned but not how they are monitored and controlled on-line. The functional and neurological bases of semantic effects on planning and control are discussed.
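The declining word effect can be quantified by normalizing each reach in time and comparing grip apertures bin by bin across the movement. Below is a minimal sketch of that kind of analysis; the file and column names are hypothetical, not the authors' data.

```python
# Sketch of tracing a semantic effect across the reach: normalize each
# trial to 0-100% movement time, then compare grip apertures per bin.
# File and column names ("word", "t_norm", "aperture") are assumptions.
import numpy as np
import pandas as pd

kin = pd.read_csv("grasp_kinematics.csv")

# Bin normalized movement time into deciles of the reach
kin["phase"] = pd.cut(kin["t_norm"], bins=np.linspace(0, 1, 11), labels=False)

# Mean aperture per word label and reach phase
by_phase = kin.groupby(["phase", "word"])["aperture"].mean().unstack()
word_effect = by_phase["LARGE"] - by_phase["SMALL"]

# On a planning/control account, the effect is largest early in the
# reach and shrinks toward zero as online control takes over.
print(word_effect)
```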
Article
Research into the perception of space, time and quantity has generated three separate literatures. That number can be represented spatially is, of course, well accepted and forms a basis for research into spatial aspects of numerical processing. Links between number and time or between space and time, on the other hand, are rarely discussed and the shared properties of all three systems have not been considered. I propose here that time, space and quantity are part of a generalized magnitude system. I outline A Theory Of Magnitude (ATOM) as a conceptually new framework within which to re-interpret the cortical processing of these elements of the environment.
Article
It has been suggested that the processing of action words referring to leg, arm, and face movements (e.g., to kick, to pick, to lick) leads to distinct patterns of neurophysiological activity. We addressed this issue using multi-channel EEG and beam-former estimates of distributed current sources within the head. The categories of leg-, arm-, and face-related words were carefully matched for important psycholinguistic factors, including word frequency, imageability, valence, and arousal, and evaluated in a behavioral study for their semantic associations. EEG was recorded from 64 scalp electrodes while stimuli were presented visually in a reading task. We applied a linear beam-former technique to obtain optimal estimates of the sources underlying the word-evoked potentials. These suggested differential activation in frontal areas of the cortex, including primary motor, pre-motor, and pre-frontal sites. Leg words activated dorsal fronto-parietal areas more strongly than face- or arm-related words, whereas face-words produced more activity at left inferior-frontal sites. In the right hemisphere, arm-words activated lateral-frontal areas. We interpret the findings in the framework of a neurobiological model of language and discuss the possible role of mirror neurons in the premotor cortex in language processing.
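Beam-former source estimation of the kind described here is also available in open-source tools. The sketch below uses MNE-Python's LCMV beamformer as a generic illustration of the linear beamformer family; file names are placeholders, and this is not the study's exact implementation.

```python
# Generic LCMV beamformer sketch with MNE-Python; illustrates the family
# of linear beamformers, not the study's specific technique or data.
import mne
from mne.beamformer import make_lcmv, apply_lcmv

evoked = mne.read_evokeds("word_erp-ave.fif", condition=0)  # 64-channel ERP
fwd = mne.read_forward_solution("subject-fwd.fif")          # forward model
noise_cov = mne.read_cov("noise-cov.fif")                   # baseline noise
data_cov = mne.read_cov("data-cov.fif")                     # word-epoch covariance

# Spatial filters that pass activity from each source location while
# suppressing correlated activity from elsewhere in the head
filters = make_lcmv(evoked.info, fwd, data_cov, reg=0.05,
                    noise_cov=noise_cov, pick_ori="max-power")
stc = apply_lcmv(evoked, filters)
print(stc.data.shape)   # sources x time points
```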
Article
Transcranial magnetic stimulation (TMS) was applied to motor areas in the left language-dominant hemisphere while right-handed human subjects made lexical decisions on words related to actions. Response times to words referring to leg actions (e.g. kick) were compared with those to words referring to movements involving the arms and hands (e.g. pick). TMS of hand and leg areas influenced the processing of arm and leg words differentially, as documented by a significant interaction of the factors Stimulation site and Word category. Arm area TMS led to faster arm than leg word responses and the reverse effect, faster lexical decisions on leg than arm words, was present when TMS was applied to leg areas. TMS-related differences between word categories were not seen in control conditions, when TMS was applied to hand and leg areas in the right hemisphere and during sham stimulation. Our results show that the left hemispheric cortical systems for language and action are linked to each other in a category-specific manner and that activation in motor and premotor areas can influence the processing of specific kinds of words semantically related to arm or leg actions. By demonstrating specific functional links between action and language systems during lexical processing, these results call into question modular theories of language and motor functions and provide evidence that the two systems interact in the processing of meaningful information about language and action.
Article
Do approach-avoidance actions create attitudes? Prior influential studies suggested that rudimentary attitudes could be established by simply pairing novel stimuli (Chinese ideographs) with arm flexion (approach) or arm extension (avoidance). In three experiments, we found that approach-avoidance actions alone were insufficient to account for such effects. Instead, we found that these affective influences resulted from the interaction of these actions with a priori differences in stimulus valence. Thus, with negative stimuli, the effect of extension on attitude was more positive than the effect of flexion. Experiment 2 demonstrated that the affect from motivationally compatible or incompatible action can also influence task evaluations. A final experiment, using Chinese ideographs from the original studies, confirmed these findings. Both approach and avoidance actions led to more positive evaluations of the ideographs when the actions were motivationally compatible with the prior valence of the ideographs. The attitudinal impact of approach-avoidance action thus reflects its situated meaning, which depends on the valence of stimuli being approached or avoided.
Article
Three studies demonstrate that stereotypic movements activate the corresponding stereotype. In Study 1, participants who were unobtrusively induced to move in the portly manner that is stereotypic of overweight people subsequently ascribed more overweight-stereotypic characteristics to an ambiguous target person than did control participants. In Study 2, participants who were unobtrusively induced to move in the slow manner that is stereotypic of elderly people subsequently ascribed more elderly-stereotypic characteristics to a target than did control participants. In Study 3, participants who were induced to move slowly were faster than control participants to respond to elderly-stereotypic words in a lexical decision task. Using three different movement inductions, two different stereotypes, and two classic measures of stereotype activation, these studies converge in demonstrating that stereotypes may be activated by stereotypic movements.
Article
There is a considerable body of neuropsychological and neuroimaging evidence supporting the distinction between the brain correlates of noun and verb processing. It is however still not clear whether the observed differences are imputable to grammatical or semantic factors. Beyond the basic difference that verbs typically refer to actions and nouns typically refer to objects, other semantic distinctions might play a role as organizing principles within and across word classes. One possible candidate is the notion of manipulation and manipulability, which may modulate the word class dissociation. We used functional magnetic resonance imaging (fMRI) to study the impact of semantic reference and word class on brain activity during a picture naming task. Participants named pictures of objects and actions that did or did not involve manipulation. We observed extensive differences in activation associated with the manipulation dimension. In the case of manipulable items, for both nouns and verbs, there were significant activations within a fronto-parietal system subserving hand action representation. However, we found no significant effect of word class when all verbs were compared to all nouns. These results highlight the impact of the biologically crucial sensorimotor dimension of manipulability on the pattern of brain activity associated with picture naming.
Article
Evidence from functional neuroimaging of the human brain indicates that information about salient properties of an object, such as what it looks like, how it moves, and how it is used, is stored in sensory and motor systems active when that information was acquired. As a result, object concepts belonging to different categories like animals and tools are represented in partially distinct, sensory- and motor property-based neural networks. This suggests that object concepts are not explicitly represented, but rather emerge from weighted activity within property-based brain regions. However, some property-based regions seem to show a categorical organization, thus providing evidence consistent with category-based, domain-specific formulations as well.
Article
A direct relationship between perception and action implies bi-directionality, and predicts not only effects of perception on action but also effects of action on perception. Modern theories of social cognition have intensively examined the relation from perception to action and propose that mirroring the observed actions of others underlies action understanding. Here, we suggest that this view is incomplete, as it neglects the perspective of the actor. We will review empirical evidence showing the effects of self-generated action on perceptual judgments. We propose that producing action might prime perception in a way that observers are selectively sensitive to related or similar actions of conspecifics. Therefore, perceptual resonance, not motor resonance, might be decisive for grounding sympathy and empathy and, thus, successful social interactions.
Article
The online influence of movement production on motion perception was investigated. Participants were asked to move one of their hands in a certain direction while monitoring an independent stimulus motion. The stimulus motion unpredictably deviated in a direction that was either compatible or incompatible with the concurrent movement. Participants' task was to make a speeded response as soon as they detected the deviation. A reversed compatibility effect was obtained: Reaction times were slower under compatible conditions, that is, when motion deviations and movements went in the same direction. This reversal of a commonly observed facilitatory effect can be attributed to the concurrent nature of the perception-action task and to the fact that what was produced was functionally unrelated to what was perceived. Moreover, by employing an online measure, it was possible to minimize the contribution of short-term memory processes, which has potentially confounded the interpretation of related effects.
Article
In the present study, we recorded the kinematics of grasping movements in order to measure the possible interference caused by digits printed on the visible face of the to-be-grasped objects. The aim was to test the hypothesis that digit magnitude processing shares common mechanisms with the estimation of object size during grasping. In the first stages of reaching, grip aperture was larger following presentation of digits with a high value than with a low one. The effect of digit magnitude on grip aperture was more pronounced for large objects. As the hand approached the object, the influence of digit magnitude decreased and grip aperture progressively reflected the actual size of the object. We conclude that number magnitude may interact with grip aperture during the programming of grasping movements.
Article
Recent studies have supported close interactions between language and action-related processes, suggesting comparable neural mechanisms. However, relatively little is known about the semantics involved in action planning. The present study investigated the activation of semantic knowledge in meaningful actions by recording event-related potentials (ERPs). Subjects prepared meaningful or meaningless actions with objects and made a semantic categorization response before executing the action. Words presented could be either congruent or incongruent with respect to the goal of the action. Preparation of meaningful actions elicited a larger anterior N400 for words incongruent to the present action goal as compared to congruent words, while no N400 effect was found when subjects prepared meaningless actions. These findings indicate that the preparation of meaningful actions with objects is accompanied by the activation of semantic information representing the usual action goals associated with those objects.
Article
A growing body of research suggests that comprehending verbal descriptions of actions relies on an internal simulation of the described action. To assess this motor resonance account of language comprehension, we first review recent developments in the literature on perception and action, with a view towards language processing. We then examine studies of language processing from an action simulation perspective. We conclude by discussing several criteria that might be helpful with regard to assessing the role of motor resonance during language comprehension.