Social Neuroscience

Published by Taylor & Francis (Routledge)
Online ISSN: 1747-0927
This study used fMRI to investigate the functioning of the Theory of Mind (ToM) cortical network in autism during the viewing of animations that in some conditions entailed the attribution of a mental state to animated geometric figures. At the cortical level, mentalizing (attribution of mental states) is underpinned by the coordination and integration of the components of the ToM network, which include the medial frontal gyrus, the anterior paracingulate, and the right temporoparietal junction. The pivotal new finding was a functional underconnectivity (a lower degree of synchronization) in autism, especially in the connections between frontal and posterior areas during the attribution of mental states. In addition, the frontal ToM regions activated less in participants with autism relative to control participants. In the autism group, an independent psychometric assessment of ToM ability and the activation in the right temporoparietal junction were reliably correlated. The results together provide new evidence for the biological basis of atypical processing of ToM in autism, implicating the underconnectivity between frontal regions and more posterior areas.
b. Schematic representation of a single moral judgment trial.
Moral praise (left) and blame (right) judgments. Error bars represent standard error.
Percent signal change (PSC) from rest in the RTPJ for praise (left) and blame (right).
Percent signal change (PSC) from rest in the LTPJ for praise (left) and blame (right).
Percent signal change (PSC) from rest in the DMPFC for praise (left) and blame (right).
Moral judgment depends critically on theory of mind (ToM), reasoning about mental states such as beliefs and intentions. People assign blame for failed attempts to harm and offer forgiveness in the case of accidents. Here we use fMRI to investigate the role of ToM in moral judgment of harmful vs. helpful actions. Is ToM deployed differently for judgments of blame vs. praise? Participants evaluated agents who produced a harmful, helpful, or neutral outcome, based on a harmful, helpful, or neutral intention; participants made blame and praise judgments. In the right temporo-parietal junction (right TPJ), and, to a lesser extent, the left TPJ and medial prefrontal cortex, the neural response reflected an interaction between belief and outcome factors, for both blame and praise judgments: The response in these regions was highest when participants delivered a negative moral judgment, i.e., assigned blame or withheld praise, based solely on the agent's intent (attempted harm, accidental help). These results show enhanced attention to mental states for negative moral verdicts based exclusively on mental state information.
Correlation image of 5HT2A BPnd maps and RD scores, controlling for age as a covariate. Sagittal projection overlay of 24 controls on a T1 average template in MNI space. r, Pearson's correlation coefficient.
analysis: Partial Spearman coefficients between mean 5HT2A BPnd in the regions of interest and RD scores, controlling for age as a covariate
Social behavior and desire for social relationships have been independently linked to the serotonergic system, the prefrontal cortex, especially the orbitofrontal cortex (OFC), and the anterior cingulate cortex (ACC). The goal of this study was to explore the role of serotonin 5HT2A receptors in these brain regions in forming and maintaining close interpersonal relationships. Twenty-four healthy subjects completed the Temperament and Character Inventory (TCI) prior to undergoing [18F]setoperone brain positron emission tomography (PET) to measure serotonin 5HT2A receptor availability within the OFC (BA 11 and 47) and ACC (BA 32). We explored the relationship between desire for social relationships, as measured by the TCI reward dependence (RD) scale, and 5HT2A receptor non-displaceable binding potential (BPnd) in these regions. Scores of RD were negatively correlated with 5HT2A BPnd in the ACC (BA 32, r = -.528, p = .012) and OFC (BA 11, r = -.489, p = .021; BA 47, r = -.501, p = .017). These correlations were corroborated by a voxel-wise analysis. These results suggest that the serotonergic system may have a regulatory effect on the OFC and ACC for establishing and maintaining social relationships.
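The age-controlled correlations reported above are partial correlations. As an illustration only (not the authors' analysis code), a partial Pearson correlation between two variables, controlling for a covariate such as age, can be computed by residualizing both variables on the covariate and correlating the residuals; the toy variable names below are hypothetical:

```python
import numpy as np

def partial_corr(x, y, covar):
    """Pearson correlation between x and y after regressing out covar
    from both variables (residual method)."""
    x, y, covar = (np.asarray(a, dtype=float) for a in (x, y, covar))
    # Design matrix: intercept plus the covariate (e.g., age)
    X = np.column_stack([np.ones_like(covar), covar])
    # Least-squares residuals of x and y after removing the covariate
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical example: binding potential and RD score per subject
age = np.array([22., 25., 31., 28., 40., 35., 29., 44.])
bpnd = np.array([1.9, 1.7, 1.5, 1.6, 1.1, 1.3, 1.6, 1.0])
rd_score = np.array([12., 14., 16., 15., 20., 18., 15., 22.])
r = partial_corr(bpnd, rd_score, age)
```

The study's ROI analysis used partial Spearman coefficients, which would additionally rank-transform all three variables before applying the same residualization.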
The retention of first-order theory of mind (ToM) despite severe loss of grammar has been reported in two patients with left hemisphere brain damage (Varley & Siegal, 2000; Varley, Siegal, & Want, 2001). We report a third, and more detailed, case study. Patient PH shows significant general language impairment, and severe grammatical impairment similar to that reported in previous studies. In addition, we were able to show that PH's impairment extends to the grammatical constructions most closely related to ToM in studies of children (embedded complement clauses and relative clauses). Despite this, PH performed almost perfectly on first-order false belief tasks and on a novel nonverbal second-order false belief task. PH was also successful on a novel test of “ToM semantics” that required evaluation of the certainty implied by different mental state terms. The data strongly suggest that grammar is not a necessary source of structure for explicit ToM reasoning in adults, but do not rule out a critical role for “ToM semantics.” In turn this suggests that the relationship observed between grammar and ToM in studies of children is the result of an exclusively developmental process.
Self-reported scores on the avoidant attachment scale by participants classified by mental health status and the A118G polymorphism.  
Psychometric data (mean ± SD) for the two groups of participants
Self-reported scores on the social anhedonia scale by participants classified by mental health status and the A118G polymorphism.
A large body of evidence links altered opioid signaling with changes in social behavior in animals. However, few studies have attempted to determine whether similar links exist in humans. Here we investigate whether a common polymorphism (A118G) in the mu-opioid receptor gene (OPRM1) is associated with alterations in personality traits linked to affiliative behavior and attachment. In a mixed sample (N = 214) of adult healthy volunteers and psychiatric patients, we analyzed the association between the A118G polymorphism of the OPRM1 and two different psychological constructs reflecting individual differences in the capacity to experience social reward. Compared to individuals expressing only the major allele (A) of the A118G polymorphism, subjects expressing the minor allele (G) had an increased tendency to become engaged in affectionate relationships, as indicated by lower scores on a self-report measure of avoidant attachment, and experienced more pleasure in social situations, as indicated by lower scores on a self-report measure of social anhedonia. The OPRM1 variation accounted for about 3.5% of the variance in the two measures. The significant association between the A118G polymorphism and social hedonic capacity was independent of the participants' mental health status. The results reported here are in agreement with the brain opioid hypothesis of social attachment and the established role of opioid transmission in mediating affiliative behavior.
Patients with schizophrenia often show abnormal social interactions, which may explain their social exclusion behaviors. This study aimed to elucidate patients' brain responses to social rejection in an interactive situation. Fifteen patients with schizophrenia and 16 healthy controls participated in the functional magnetic resonance imaging experiment with the virtual handshake task, in which socially interacting contents such as acceptance and refusal of handshaking were implemented. Responses to the refusal versus acceptance conditions were evaluated and compared between the two groups. Controls revealed higher activity in the refusal condition compared to the acceptance condition in the right superior temporal sulcus, whereas patients showed higher activity in the prefrontal regions, including the frontopolar cortex. In patients, contrast activities of the right superior temporal sulcus were inversely correlated with the severity of schizophrenic symptoms, whereas contrast activities of the left frontopolar cortex were positively correlated with the current anxiety scores. The superior temporal sulcus hypoactivity and frontopolar hyperactivity of patients with schizophrenia in social rejection situations may suggest the presence of mentalizing deficits in negative social situations and inefficient processes of socially aberrant stimuli, respectively. These abnormalities may be one of the neural bases of distorted or paranoid beliefs in schizophrenia.
The two studies reported in this article are an extension of the neuroimaging study by Ganis et al. (2003), which provided evidence that different types of lies arise from different cognitive processes. We examined the initial response times (IRTs) to questions answered both deceptively and truthfully. We considered four types of deceptive responses: a coherent set of rehearsed, memorized lies about a life experience; a coherent set of lies spontaneously created about a life experience; a set of isolated lies involving self-knowledge; and a set of isolated lies involving knowledge of another person. We assessed the difference between truthful and deceptive IRTs. Scores from cognitive tasks included in the MiniCog Rapid Assessment Battery (MRAB) were significant predictors of IRT differences. Each type of lie was predicted by a distinct set of MRAB scores. These results provide further evidence that deception is a multifaceted process and that different kinds of lies arise from the operation of different cognitive processes.
To identify the brain regions involved in the interpretation of intentional movement by patients with schizophrenia, we investigated the association between cerebral gray matter (GM) volumes and performance on a theory of mind (ToM) task using voxel-based morphometry. Eighteen patients with schizophrenia and thirty healthy controls participated in the study. Participants were given a moving shapes task that employs the interpretation of intentional movement. Verbal descriptions were rated according to intentionality. ToM performance deficits in patients were found to be positively correlated with GM volume reductions in the superior temporal sulcus and medial prefrontal cortex. Our findings confirm that divergent brain regions contribute to mentalizing abilities and that GM volume reductions impact behavioral deficits in patients with schizophrenia.
Empathy is commonly defined as an individually varying but stable personality trait. This notion seems questionable, however, considering recent studies demonstrating neuronal plasticity not only in childhood and adolescence but across the whole lifespan. We propose a model in which an individual's basic empathic abilities, arising from genetic factors, brain maturation, and early attachment experiences, are continually modulated by the intensity, continuity, and frequency of interpersonal socio-emotional stimulation and challenges. We assume that neural processes and their underlying neural structures are modified by social and socio-emotional stimulation. Continuous social interactions should therefore produce noticeable effects on the empathic abilities of an individual, independent of age or brain maturation level. In particular, empathic abilities should be learnable and expandable beyond specific developmental windows. To examine this hypothesis, we surveyed empathy in students of various professions using a new instrument, the Questionnaire of Cognitive and Affective Empathy (QCAE), categorizing them into three groups according to their subsequent occupational fields: medical students, students of academic social professions, and a control group. Results indicate that continuous socio-emotional stimulation could increase empathic abilities, consistent with learning effects.
Sample of the three conditions (AI, PCCH, and PCOB) presented to the subjects. The example in the AI condition gives more details on the sequential organization of the task. The two other conditions share the same characteristics. 
AI network localizations, chronometry, and modulation by conditions and groups
Reconstruction of cortical surface and time-courses of selected ROIs in the right hemisphere. Right cortical surface with normalized magnetic activation at 440 ms poststimulus in the AI condition, in healthy subjects. Color scale: arbitrary units (z values). A, B, C: Time-courses (raw data) of the right IPL (A), right TPJ (B), and right pSTS (C) in the three conditions (AI in red, PCCH in blue, and PCOB in green). Upper graph represents patients' time-courses and comparisons of conditions; middle graph, healthy controls; lower graph, statistical comparisons between groups. Horizontal axis in seconds (−0.1 to 0.9 s); time origin corresponds to picture presentation. Vertical axis in A/m (×10⁻¹²). Vertical red dotted lines represent mean peak latencies in each group. Horizontal coloured bars represent time intervals in which comparisons between conditions or groups are significant: orange, p < .05; red, p < .001 (permutation tests). Significant group × condition interactions are indicated with black-bordered rectangles.
Reconstruction of cortical surface and time-courses of selected ROIs in the left hemisphere. Left cortical surface with normalized magnetic activation at 400 ms poststimulus in AI condition, in healthy subjects. See Figure 2 caption for legend. 
Schizophrenia is associated with abnormal cortical activation during theory of mind (ToM), as demonstrated by several fMRI or PET studies. Electrical and temporal characteristics of these abnormalities, especially in the early stages, remain unexplored. Nineteen medicated schizophrenic patients and 21 healthy controls underwent magnetoencephalography (MEG) recording to measure brain response evoked by nonverbal stimuli requiring mentalizing. Three conditions based on comic-strips were contrasted: attribution of intentions to others (AI), physical causality with human characters (PCCH), and physical causality with objects (PCOB). Minimum norm localization was performed in order to select regions of interest (ROIs) within bilateral temporal and parietal regions that showed significant ToM-related activations in the control group. Time-courses of each ROI were compared across group and condition. Reduced cortical activation within the 200 to 600 ms time-window was observed in the selected regions in patients. Significant group by condition interactions (i.e., reduced modulation in patients) were found in right posterior superior temporal sulcus, right temporoparietal junction, and right inferior parietal lobule during attribution of intentions. As in healthy controls, the presence of characters elicited activation in patients' left posterior temporal regions and temporoparietal junction. No group difference on evoked responses' latencies in AI was found. In conclusion, ToM processes in the early stages are functionally impaired in schizophrenia. MEG provides a promising means to refine our knowledge on schizophrenic social cognitive disorders.
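The MEG time-course comparisons between conditions and groups described above rely on nonparametric permutation tests. As an illustrative sketch only (not the authors' analysis pipeline), a two-sample permutation test on a per-subject summary measure, such as mean evoked amplitude in a time window, can be written as:

```python
import numpy as np

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means between two
    independent samples (e.g., per-subject mean evoked amplitudes)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        # Randomly reassign group labels and recompute the statistic
        perm = rng.permutation(pooled)
        diff = abs(perm[:a.size].mean() - perm[a.size:].mean())
        count += diff >= observed
    # Add-one correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```

In practice, MEG analyses apply this logic at every time point (or to cluster-level statistics) to control for multiple comparisons across the epoch; this sketch shows only the single-statistic case.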
Improved source reconstruction using null beamforming at the FFA for the three stimulus categories. (a) Conventional beamforming reconstructs an implausible time course, peaking at around 100 ms. (b) Null beamforming reconstructs the later face-specific peak at 170 ms in the FFA. 
Brain responses to infant, infant with cleft lip, and adult faces. (a) Left: transverse slices with group source reconstruction are shown. Right OFC activity (thresholded at z > 3.1) was present in response to infant faces but diminished for the infant faces with cleft lip or the adult faces. Middle: MEG waveforms (with SE), determined from beamforming analysis, from the OFC, averaged for the three different face categories, show a clear peak in response to typical infant faces at 140 ms. Right: the time-frequency plot shows greater alpha band activity seen in response to the typical infant faces compared with the other faces. (b) The face-selective M170 in the right FFA was similar for the adult and typical infant faces but substantially lower for the infant faces with cleft lip (left: transverse slices with group source reconstruction). Averaged group waveforms (middle) and time-frequency plots (right) illustrate the magnitude of this difference. 
Infant faces elicit early, specific activity in the orbitofrontal cortex (OFC), a key cortical region for reward and affective processing. A test of the causal relationship between infant facial configuration and OFC activity is provided by naturally occurring disruptions to the face structure. One such disruption is cleft lip, a small change to one facial feature, shown to disrupt parenting. Using magnetoencephalography, we investigated neural responses to infant faces with cleft lip compared with typical infant and adult faces. We found activity in the right OFC at 140 ms in response to typical infant faces but diminished activity to infant faces with cleft lip or adult faces. Activity in the right fusiform face area was of similar magnitude for typical adult and infant faces but was significantly lower for infant faces with cleft lip. This is the first evidence that a minor change to the infant face can disrupt neural activity potentially implicated in caregiving.
ERPs to intuitive (correct), religious and non-religious counterintuitive sentences. Top: ERP waveforms at a selection of electrodes for three types of sentence endings. Bottom: difference maps of the significant effects (non-religious minus intuitive on the right and religious minus intuitive on the left) for both N400 (left) and P600 (right) in 350–450-ms and 550–850-ms time windows, respectively. 
Difference waves of results displayed in Figure 1. ERP waveforms at a selection of electrodes for the two counterintuitive sentence endings (religious and non-religious) after subtracting the activity to intuitive (correct) sentence endings. 
Semantic features (ontological categories) of experimental materials
Religious beliefs are both catchy and durable: they exhibit a high degree of adherence to our cognitive system, given their success of transmission and spreading throughout history. A prominent explanation for religion's cultural success comes from the "MCI hypothesis," according to which religious beliefs are both easy to recall and desirable to transmit because they are minimally counterintuitive (MCI). This hypothesis has been empirically tested at concept and narrative levels by recall measures. However, the neural correlates of MCI concepts remain poorly understood. We used the N400 component of the event-related brain potential as a measure of counterintuitiveness of violations comparing religious and non-religious sentences, both counterintuitive, when presented in isolation. Around 80% in either condition were core-knowledge violations. We found smaller N400 amplitudes for religious as compared to non-religious counterintuitive ideas, suggesting that religious ideas are less semantically anomalous. Moreover, behavioral measures revealed that religious ideas are not readily detected as unacceptable. Finally, systematic analyses of our materials, according to conceptual features proposed in cognitive models of religion, did not reveal any outstanding variable significantly contributing to these differences. Refinements of cognitive models of religion should elucidate which combination of factors renders an anomaly less counterintuitive and thus more suitable for recall and transmission.
In primates the gaze conveys important information about what others attend to and about their intentions. The ability to follow the gaze direction of conspecifics has been established for several primate species. It has been proposed to be a precursor for more complex cognitive skills related to mind reading. Studies in humans and other primates have shown that this behavior develops during the period between infancy and adulthood; however, the mechanisms responsible for its emergence are still unknown. In a series of experiments we investigated such mechanisms in macaques (Macaca nemestrina). Results show that juvenile macaques improve their ability to follow the gaze of a human experimenter and that adults' ability to follow gaze is more accurate than that of juveniles. Our data also show that this behavior can emerge as the result of learning processes. The discrepancy between the relatively long period of time needed for the full establishment of the gaze-following behavior and its high sensitivity to conditioning procedures may suggest that social experience and integration of this behavior with other social-cognitive skills are required for its development.
Self-construal priming modulates human behavior and associated neural activity. However, the neural activity associated with the self-construal priming procedure itself remains unknown. It is also unclear whether and how self-construal priming affects neural activity prior to engaging in a particular task. To address this gap, we scanned Chinese adults, using functional magnetic resonance imaging, during self-construal priming and a following resting state. We found that, relative to a calculation task, both interdependent and independent self-construal priming activated the ventral medial prefrontal cortex (MPFC) and the posterior cingulate cortex (PCC). The contrast of interdependent vs. independent self-construal priming also revealed increased activity in the dorsal MPFC and left middle frontal cortex. The regional homogeneity analysis of the resting-state activity revealed increased local synchronization of spontaneous activity in the dorsal MPFC but decreased local synchronization of spontaneous activity in the PCC when contrasting interdependent vs. independent self-construal priming. The functional connectivity analysis of the resting-state activity, however, did not show significant difference in synchronization of activities in remote brain regions between different priming conditions. Our findings suggest that accessible collectivistic/individualistic mind-set induced by self-construal priming is associated with modulations of both task-related and resting-state activity in the default mode network.
Oxytocin modulates many aspects of social cognition and behaviors, including maternal nurturing, social recognition and bonding. Natural variation in oxytocin receptor (OXTR) density in the nucleus accumbens (NAcc) is associated with variation in alloparental behavior, and artificially enhancing OXTR expression in the NAcc enhances alloparental behavior and pair bonding in socially monogamous prairie voles. Furthermore, infusion of an OXTR antagonist into the NAcc inhibits alloparental behavior and partner preference formation. However, antagonists can promiscuously interact with other neuropeptide receptors. To directly examine the role of OXTR signaling in social bonding, we used RNA interference to selectively knockdown, but not eliminate, OXTR in the NAcc of female prairie voles and examined the impact on social behaviors. Using an adeno-associated viral vector expressing a short hairpin RNA (shRNA) targeting Oxtr mRNA, we reduced accumbal OXTR density in female prairie voles from juvenile age through adulthood. Females receiving the shRNA vector displayed a significant reduction in alloparental behavior and disrupted partner preference formation. These are the first direct demonstrations that OXTR plays a critical role in alloparental behavior and adult social attachment, and suggest that natural variation in OXTR expression in this region alone can create variation in social behavior.
Previous behavioral work suggests that processing information in relation to the self enhances subsequent item recognition. Neuroimaging evidence further suggests that regions along the cortical midline, particularly those of the medial prefrontal cortex (PFC), underlie this benefit. There has been little work to date, however, on the effects of self-referential encoding on source memory accuracy or whether the medial PFC might contribute to source memory for self-referenced materials. In the current study, we used fMRI to measure neural activity while participants studied and subsequently retrieved pictures of common objects superimposed on one of two background scenes (sources) under either self-reference or self-external encoding instructions. Both item recognition and source recognition were better for objects encoded self-referentially than self-externally. Neural activity predictive of source accuracy was observed in the medial PFC (Brodmann area 10) at the time of study for self-referentially but not self-externally encoded objects. The results of this experiment suggest that processing information in relation to the self leads to a mnemonic benefit for source level features, and that activity in the medial PFC contributes to this source memory benefit. This evidence expands the purported role that the medial PFC plays in self-referencing.
A recently published study by the present authors reported evidence that functional changes in the anterior cingulate cortex within a sample of 96 criminal offenders who were engaged in a Go/No-Go impulse control task significantly predicted their rearrest following release from prison. In an extended analysis, we use discrimination and calibration techniques to test the accuracy of these predictions relative to more traditional models and their ability to generalize to new observations in both full and reduced models. Modest to strong discrimination and calibration accuracy were found, providing additional support for the utility of neurobiological measures in predicting rearrest.
Observation of changes in autonomic arousal was one of the first methodologies used to detect deception. Electrodermal activity (EDA) is a peripheral measure of autonomic arousal and one of the primary channels used in polygraph exams. In an attempt to develop a more central measure to identify lies, the use of functional magnetic resonance imaging (fMRI) to detect deception is being investigated. We wondered if adding EDA to our fMRI analysis would improve our diagnostic ability. For our approach, however, adding EDA did not improve the accuracy in a laboratory-based deception task. In testing for brain regions that replicated as correlates of EDA, we did find significant associations in right orbitofrontal and bilateral anterior cingulate regions. Further work is required to test whether EDA improves accuracy in other testing formats or with higher levels of jeopardy.
The motive to achieve success (MAS) and motive to avoid failure (MAF) are two different but classical kinds of achievement motivation. Though many functional magnetic resonance imaging studies have explored functional activation in motivation-related conditions, research has been silent as to the brain structures associated with individual differences in achievement motivation, especially with respect to MAS and MAF. In this study, the voxel-based morphometry method was used to uncover focal differences in brain structures related to MAS and MAF measured by the Mehrabian Achieving Tendency Scale in 353 healthy young Chinese adults. The results showed that the brain structures associated with individual differences in MAS and MAF were distinct. MAS was negatively correlated with regional gray matter volume (rGMV) in the medial prefrontal cortex (mPFC)/orbitofrontal cortex, while MAF was negatively correlated with rGMV in the mPFC/subgenual cingulate gyrus. After controlling for mutual influences of MAS and MAF scores, MAS scores were found to be related to rGMV in the mPFC/orbitofrontal cortex and another cluster containing the parahippocampal gyrus and precuneus. These results suggest that, compared with MAF, the generation of MAS may involve a more complex and deliberative process; in real-world settings, MAS may therefore be more beneficial to personal growth and to ensuring the quality of task performance.
Recent neuroimaging studies on "theory of mind" have demonstrated that the medial prefrontal cortex (PFC) is involved when subjects are engaged in various kinds of mentalising tasks. Although a large number of neuroimaging studies have been published, relatively little neuropsychological evidence supports involvement of the medial PFC in theory of mind reasoning. We recruited two neurological cases with damage to the medial PFC and initially performed standard neuropsychological assessments of intelligence, memory, and executive functions. To examine theory of mind performance in these two cases, four kinds of standard and advanced tests for theory of mind were used, including first- and second-order false belief tests, the strange stories test, and the faux pas recognition test. Both patients were also requested to complete the autism-spectrum quotient questionnaire. Neither case showed impairment on standard theory of mind tests, and only mild impairments were seen on advanced theory of mind tests. This pattern of results is basically consistent with previous studies. The most interesting finding was that both cases showed personality changes after surgical operations, leading to autism-like characteristics, including a lack of social interaction in everyday life. We discuss herein the possible roles of the medial PFC and emphasize the importance of using multiple approaches to understand the mechanisms of theory of mind and medial prefrontal functions.
It has previously been shown that observing an action made by a human, but not by a robot, interferes with executed actions (Kilner, Paulignan, & Blakemore, 2003). Here, we investigated what aspect of human movement causes this interference effect. Subjects made arm movements while observing a video of either a human making an arm movement or a ball moving across the screen. Both human and ball videos contained either biological (minimum jerk) or non-biological (constant velocity) movements. The executed and observed arm movements were either congruent (same direction) or incongruent (tangential direction) with each other. The results showed that observed movements are processed differently according to whether they are made by a human or a ball. For the ball videos, both biological and non-biological incongruent movements interfered with executed arm movements. In contrast, for the human videos, the velocity profile of the movement was the critical factor: only incongruent, biological human movements interfered with executed arm movements. We propose that the interference effect could be due either to the information the brain has about different types of movement stimuli or to the impact of prior experience with different types of form and motion.
(a) Sensory sharing versus (b) motor sharing. 
Performing an action and observing it activate the same internal representations of action. The representations are therefore shared between self and other (shared representations of action, SRA). But what exactly is shared? At what level within the hierarchical structure of the motor system do SRA occur? Understanding the content of SRA is important in order to decide what theoretical work SRA can perform. In this paper, we provide some conceptual clarification by raising three main questions: (i) are SRA semantic or pragmatic representations of action?; (ii) are SRA sensory or motor representations?; (iii) are SRA representations of the action as a global unit or as a set of elementary motor components? After outlining a model of the motor hierarchy, we conclude that the best candidate for SRA is intentions in action, defined as the motor plans of the dynamic sequence of movements. We shed new light on SRA by highlighting the causal efficacy of intentions in action. This in turn explains phenomena such as inhibition of imitation.
Sample stimuli. In Study 1, typicality of target sex was manipulated by morphing sexually dimorphic information of the internal face. In Study 2, eye color was manipulated to be either dark or light. 
Diagram of the electrode montage used in Studies 1 and 2. 
Grand-average waveforms for typical and atypical faces of Study 1. Negative is plotted up. Top of the figure corresponds with anterior aspect of the head, as depicted in Figure 2. Note the higher negativity for atypical faces (relative to typical faces) emerging around 250 ms at anterior sites and around 350 ms at posterior sites. These denote the N300 and N400. 
Voltage maps of normalized difference waves, atypical-typical, depicting differences in neural potentials to typical vs. atypical faces. Top of each figure corresponds with anterior aspect of the head, as depicted in Figure 2. A negativity effect emerges around 250 ms at anterior sites, which then gradually moves more posteriorly into a strong centro-posterior negativity effect. This reflects the evolution of enlarged N300 and N400 effects in response to atypical faces. 
Grand-average LRPs for typical and atypical faces of Study 1. The LRP for atypical faces grows larger in size than the LRP for typical faces, indicating greater competition between the motor cortices. 
Using event-related potentials, we investigated how the brain extracts information from another's face and translates it into relevant action in real time. In Study 1, participants made between-hand sex categorizations of sex-typical and sex-atypical faces. Sex-atypical faces evoked negativity between 250 and 550 ms (N300/N400 effects), reflecting the integration of accumulating sex-category knowledge into a coherent sex-category interpretation. Additionally, the lateralized readiness potential revealed that the motor cortex began preparing for a correct hand response while social category knowledge was still gradually evolving in parallel. In Study 2, participants made between-hand eye-color categorizations as part of go/no-go trials that were contingent on a target's sex. On no-go trials, although the hand did not actually move, information about eye color partially prepared the motor cortex to move the hand before perception of sex had finalized. Together, these findings demonstrate the dynamic continuity between person perception and action, such that ongoing results from face processing are immediately and continuously cascaded into the motor system over time. The preparation of action begins based on tentative perceptions of another's face before perceivers have finished interpreting what they just saw.
Hemodynamic brain responses (oxyHb in mmol/L) measured in 4-month-old infants during action observation. Regions of interest (ROIs) used for our analysis are marked on the schematic infant head model (ANT = anterior ROI, POS = posterior ROI, INF = inferior ROI, SUP = superior ROI). This graph depicts mean oxygenated hemoglobin concentration changes (±SEM) in the anterior (premotor; a and b) and inferior (temporal; d and e) ROIs during the four experimental conditions (form: human vs. robot; motion: human vs. robot). Channels that were grouped into regions of interest and used to calculate the mean oxygenated concentration changes are marked on the head model (c) for each hemisphere.
Much research has been carried out to understand how human brains make sense of another agent in motion. Current views based on human adult and monkey studies assume a matching process in the motor system biased toward actions performed by conspecifics and present in the observer's motor repertoire. However, little is known about the neural correlates of action cognition in early ontogeny. In this study, we examined the processes involved in the observation of full body movements in 4-month-old infants using functional near-infrared spectroscopy to measure localized brain activation. In a 2 × 2 design, infants watched human or robotic figures moving in a smooth, familiar human-like manner, or in a rigid, unfamiliar robot-like manner. We found that infant premotor cortex responded more strongly when observing robot-like motion compared with human-like motion. Contrary to current views, this suggests that the infant motor system is flexibly engaged by novel movement patterns. Moreover, temporal cortex responses indicate that infants integrate information about form and motion during action observation. The response patterns obtained in premotor and temporal cortices during action observation in these young infants are very similar to those reported for adults. These findings thus suggest that the brain processes involved in the analysis of an agent in motion in adults become functionally specialized very early in human development.
Numerous cortical regions respond to aspects of the human form and its actions. What is the contribution of the extrastriate body area (EBA) to this network? In particular, is the EBA involved in constructing a dynamic representation of observed actions? We scanned 16 participants with fMRI while they viewed two kinds of stimulus sequences. In the coherent condition, static frames from a movie of a single, intransitive whole-body action were presented in the correct order. In the incoherent condition, a series of frames from multiple actions (involving one actor) were presented. ROI analyses showed that the EBA, unlike area MT + and the posterior superior temporal sulcus, responded more to the incoherent than to the coherent condition. Whole brain analyses revealed increased activation to the coherent sequences in parietal and frontal regions that have been implicated in the observation and control of movement. We suggest that the EBA response adapts when succeeding images depict relatively similar postures (coherent condition) compared to relatively different postures (incoherent condition). We propose that the EBA plays a unique role in the perception of action, by representing the static structure, rather than dynamic aspects, of the human form.
It has been proposed that common codes for vision and action emerge from associations between an individual's production and simultaneous observation of actions. This typically first-person view of one's own action subsequently transfers to the third-person view when observing another individual. We tested vision-action associations and the transfer from first-person to third-person perspective by comparing novel hand-action sequences that were learned under three conditions: first, by being performed and simultaneously viewed from a first-person perspective; second, by being performed but not seen; and third, by being seen from a first-person view without being executed. We then used functional magnetic resonance imaging (fMRI) to compare the response to these three types of learned action sequences when they were presented from a third-person perspective. Visuomotor areas responded most strongly to sequences that were learned by simultaneously producing and observing the action sequences. We also note an important asymmetry between vision and action: Action sequences learned by performance alone, in the absence of vision, facilitated the emergence of visuomotor responses, whereas action sequences learned by viewing alone had comparatively little effect. This dominance of action over vision supports the notion of forward/predictive models of visuomotor systems.
Traditionally, communication has been defined as the intentional exchange of symbolic information between individuals. In contrast, the mirror system provides a basis for nonsymbolic and nonintentional information exchange between individuals. We believe that understanding the role of the mirror system in joint action has the potential to serve as a bridge between these two domains. The present study investigates one crucial component of joint action: the ability to represent others' potential actions in the same way as one's own in the absence of perceptual evidence. In two experiments a joint spatial numerical association of response codes (SNARC) effect is demonstrated, providing further evidence that individuals form functionally equivalent representations of their own and others' potential actions. It is shown that numerical (symbolic) stimuli that are mapped onto a spatially arranged internal representation (a mental number line) can activate a co-represented action in the same way as spatial stimuli. This generalizes previous results on co-representation. We discuss the role of the mirror system in co-representation as a basis for shared intentionality and communication.
The understanding of actions of tool use depends on the motor act that is performed and on the function of the objects involved in the action. We used event-related potentials (ERPs) to investigate the processes that derive both kinds of information in a task in which inserting actions had to be judged. The actions were presented as two consecutive frames, one showing an effector/instrument and the other showing a potential target object of the action. Two mismatches were possible. An orientation mismatch occurred when the spatial object properties were not consistent with a motor act of insertion being performed (i.e., different orientations of insert and slot). A functional mismatch happened when the instrument (e.g., screwdriver) would usually not be applied to the target object (e.g., keyhole). The order in which instrument and target object were presented was also varied. The two kinds of mismatch gave rise to similar but not identical negativities in the latency range of the N400 followed by a positive modulation. The results indicate that the motor act and the function of the objects are derived by two at least partially different subprocesses and become integrated into a common representation of the observed action.
Stimuli. This figure shows example images for each of the four actor orientations and two reaching conditions. The table configuration column illustrates the spatial layout of the table (blue; viewer at the 6 o'clock position) and of the actor (green). The object was always at one of the two locations marked with an orange disk; so, if it was on the viewer's left, it would be on the actor's right or vice versa. The distance between the center of the image and the object was constant in every condition. A mirrored set of images with the actor on the right-hand side of the table was also used. 
Factorial design and contrasts. (a) This table depicts the 2 × 4 factorial design and lists the acronyms for each condition. (b) The table lists the major contrasts calculated to address each of our three research questions. Abbreviation: VPT, visual perspective taking. 
Behavioral results. (a) Mean reaction time for correct trials and (b) mean error rate for each task are illustrated. 
Effects of viewing a reaching actor. Reach > no-reach in the altercentric task (a) and reach > no-reach in the egocentric task (b) both reveal engagement of occipitotemporal regions. There was overlap between these two contrasts in occipitotemporal cortex (c). Abbreviations as in Table 1. 
Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.
Humans are frequently confronted with goal-directed tasks that cannot be accomplished alone, or that benefit from co-operation with other agents. The relatively new field of social cognitive neuroscience seeks to characterize functional neuroanatomical systems either specifically or preferentially engaged during such joint-action tasks. Based on neuroimaging experiments conducted on critical components of joint action, the current paper outlines the functional network upon which joint action is hypothesized to be dependent. This network includes brain areas likely to be involved in interpersonal co-ordination at the action, goal, and intentional levels. Experiments focusing specifically on joint-action situations similar to those encountered in real life are required to further specify this model.
Summary of observed and predicted properties of mirror neurons in premotor (F5, from Rizzolatti et al., 1996 and Figure 2) and inferior parietal cortex (7b, from Fogassi et al., 2005, Figure 3, and Figure 4). Each action sequence can be analysed and described by agents (X, Y, general form in “S”, like subject), action forms (A, B, general form in “V”, like verb), goal of action (objects, general form in “O”), and other variables (others, general form in “Ot”). These elements are expressed in symbols as shown in the “Coding” column. In this column, “*” denotes compatibility with any substitutes. Elements in parentheses indicate that these must be specific, not compatible with any substitutes. Possible “Neuro-cognitive processes” are proposed in the rightmost column. The bottom row depicts our predictions based on these results, suggesting that when mirror neurons with five different properties are integrated into a system, the “Equivalence relations” will emerge in the brain to produce a human-like flexible linguistic system.
Generalization process of action understandings. A: A representative example of inferior parietal neurons whose response patterns resemble those of classical mirror neurons in the F5 premotor cortex. Neural discharge in raster plots (top of the graph) and histograms showing numbers of spikes occurring per 100 ms bin (ordinate) are shown along the time axis (abscissa). Behavioural events were examined frame by frame (30 frames per second). This format is identical for all neural activity graphs in Figures 2–4. In this condition, the experimenter reached into the container, which sat in front of the monkey on the table, picked up the reward with his or her fingers and handed it to the monkey, which then grabbed the food, brought it to its mouth and ate it. This neuron discharged both when the monkey observed the experimenter picking up a piece of food with a precision grip (red bars under the abscissa, and red circle in the inset) and when the monkey picked up food in the same manner (blue underline and circle). B: Neural recording sites. Data from four hemispheres of three monkeys are projected onto the right parietal area (area shown by the square in the inset) of one monkey, normalized in relation to species-general configurations of intraparietal sulcus. cs, central sulcus; ips, intraparietal sulcus; ls, lateral sulcus. The red dot indicates the electrode tract from which the neurons depicted in A were recorded.
The anterior portion of the inferior parietal cortex possesses comprehensive representations of actions embedded in behavioural contexts. Mirror neurons, which respond to both self-executed and observed actions, exist in this brain region in addition to those originally found in the premotor cortex. We found that parietal mirror neurons responded differentially to identical actions embedded in different contexts. Another type of parietal mirror neuron represents an inverse and complementary property of responding equally to dissimilar actions made by itself and others for an identical purpose. Here, we propose a hypothesis that these sets of inferior parietal neurons constitute a neural basis for encoding the semantic equivalence of various actions across different agents and contexts. The neurons have mirror neuron properties, and they encode generalization of agents, differentiation of outcomes, and categorization of actions that lead to common functions. By integrating the activities of these mirror neurons with various codings, we further suggest that in the ancestral primates' brains, these various representations of meaningful action led to the gradual establishment of equivalence relations among the different types of actions, by sharing common action semantics. Such differential codings of the components of actions might represent precursors to the parts of protolanguage, such as gestural communication, which are shared among various members of a society. Finally, we suggest that the inferior parietal cortex serves as an interface between this action semantics system and other higher semantic systems, through common structures of action representation that mimic language syntax.
Schematic representation of the experimental set-up and cue stimuli. The cue stimuli were projected on top of the object that was located at the middle of the table. There were six different cue stimuli indicating the type of action to be performed: (a) No action; (b) Partner A lifts the object and then places it back; (c) Partner B lifts the object and then places it back; (d) The Confederate lifts the object and then places it back; (e) Partner A lifts the object and gives it to Partner B, who will then place the object back to its starting location; (f) Partner B lifts the object and gives it to Partner A, who will then place the object back to its starting location.  
Schematic example of trial. The trial started with the presentation of a fixation cross for 1000 ms. Then a cue stimulus appeared for 200 ms indicating the type of action to be performed (in this example, "joint action"). Following another fixation period of 800 ms, the imperative stimulus appeared for 200 ms prompting the participants to act. The period of interest was the time interval (1000 ms) between cue onset and imperative stimulus onset (foreperiod).  
Decrease in asynchrony between the onset of the receive response and the onset of the (earlier) give response on a trial-by-trial basis (F(1, 15) = 88.89, p < .001). Paired t-tests showed that there was no significant difference in action onset for acting individually compared with giving the object to the interaction partner (t(15) = 1.82, p > .05). However, participants were significantly slower to initiate the receiving action in the joint condition both compared to initiating individual action (t(15) = −9.64, p < .001) and compared to initiating the giving action in the joint condition (t(15) = −9.34, p < .001). Examining action onset times on a trial-by-trial basis (Figure 3) revealed that participants took less and less time to initiate their actions as the experiment progressed. This speed-up was relatively small in the individual condition (−0.25 ms/trial) and in the give condition (−0.23 ms/trial). The speed-up was clearly larger in the receive condition (−1.71 ms/trial). The more extensive speed-up in the receiving condition compared to the giving condition implied a continuous decrease (−1.17 ms/trial) in the asynchrony between the giver's and the receiver's action onsets in joint action trials. Thus, the efficiency of interpersonal coordination was constantly improving throughout the experiment. Compared with a complete lack of improvement in the efficiency of coordination (zero slope), this effect was statistically significant (t(31) = −5.05, p < .001).
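The per-trial speed-up values in this passage are regression slopes of action-onset time on trial number. A minimal sketch of that computation, using ordinary least squares on synthetic, noiseless onset series (the onsets and slopes below are illustrative, not the study's data):

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

trials = list(range(40))
# Synthetic onset times in ms: the giver speeds up slightly across
# trials, the receiver much more (values are illustrative only).
give_onsets = [450.0 - 0.2 * t for t in trials]
receive_onsets = [620.0 - 1.5 * t for t in trials]

# Asynchrony on each joint trial = receiver onset minus giver onset;
# a negative slope means coordination tightens over the experiment.
asynchrony = [r - g for g, r in zip(give_onsets, receive_onsets)]

give_slope = ols_slope(trials, give_onsets)        # -0.2 ms/trial
receive_slope = ols_slope(trials, receive_onsets)  # -1.5 ms/trial
asynchrony_slope = ols_slope(trials, asynchrony)   # -1.3 ms/trial
```

With noiseless series the asynchrony slope equals the difference of the two component slopes exactly; with real, noisy trial data, as in the study, the fitted values need not decompose so cleanly.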
(a) Color-coded, grand average waveforms derived from pooled electrode sites (FCz, FC1, FC2, Fz, and Cz, highlighted as gray circles) and scalp voltage distributions of the P3a component (top view) from 200 to 250 ms after cue onset. (b) Color-coded, grand average waveforms from pooled electrode sites (P4, P6, PO4, and PO8) and scalp voltage distributions of the lateral-P3b component (back view) from 260 to 330 ms after cue onset. (c) Color-coded, grand average waveforms from pooled electrode sites (Pz, POz, PO3, and PO4) and scalp voltage distributions of the medial-P3b component (back view) from 450 to 500 ms after cue onset. The gray bars indicate the latency window for amplitude analysis. The vertical dashed line at time 0 denotes cue onset.  
Color-coded, grand average motor CNV waveforms derived from pooled electrode sites (Cz, C1, FCz, and CPz, highlighted as gray circles) and scalp topographies (top view) in the last 200 ms (indicated by the gray square) before go stimulus onset. The gray bars indicate the latency window for amplitude analysis. The vertical dashed lines at times 0 and 1000 denote cue onset and go stimulus onset, respectively.
It has been postulated that when people engage in joint actions they form internal representations not only of their part of the joint task but of their co-actors' parts of the task as well. However, empirical evidence for this claim is scarce. By means of high-density electroencephalography, this study investigated whether one represents and simulates the action of an interaction partner when planning to perform a joint action. The results showed that joint action planning compared with individual action planning resulted in amplitude modulations of the frontal P3a and parietal P3b event-related potentials, which are associated with stimulus classification, updating of representations, and decision-making. Moreover, there was evidence for anticipatory motor simulation of the partner's action in the amplitude and peak latency of the late, motor part of the Contingent Negative Variation, which was correlated with joint action performance. Our results provide evidence that when people engage in joint tasks, they represent in advance each other's actions in order to facilitate coordination.
Examples of stimuli used in the primary and secondary tasks of Experiments 1 and 2. Body stimuli used in the primary memory task. Panels A and B demonstrate the changes in body posture characteristic of a ‘‘different’’ trial. Panels C and D illustrate arm and leg position stimuli for the secondary-movement tasks. 
The dual-task experimental set up used in Experiments 1 and 2. Stimuli for the primary task were presented on the computer screen on the right. Stimuli for the secondary task were presented on the computer screen on the left. In this example, participants matched their own arm positions to a series of model’s arm positions on the left screen. Participants were told to remember the body position of the model on the right screen. 
Experiment 1. With a 5 s ISI, a significant body-part-specific interaction (i.e., the Body Part Moved × Body Part Cued interaction) was found that revealed relatively better performance when the same body-part region was attended on a model's posture as was moved by the participant.
The accurate perception of other people and their postures is essential for functioning in a social world. Our own bodies organize information from others to help us respond appropriately by creating self–other mappings between bodies. In this study, we investigated mechanisms involved in the processing of self–other correspondences. Reed and Farah (1995) showed that a multimodal, articulated body representation containing the spatial relations among parts of the human body was accessed by both viewing another's body and moving one's own. Use of one part of the body representation facilitated the perception of homologous areas of other people's bodies, suggesting that inputs from both the self and other activated the shared body representation. Here we investigated whether this self–other correspondence produced rapid facilitation or required additional processing time to resolve competing inputs for a shared body representation. Using a modified Reed and Farah dual-task paradigm, we found that processing time influenced body-position memory: an interaction between body-part moved and body-part attended revealed a relative facilitation effect at the 5 s ISI, but interference at the 2 s ISI. Our results suggest that effective visual-motor integration from the self and other requires time to activate shared portions of the spatial body representation.
Stimuli examples for ID2-W8 combination. Left panel: human movement as depicted by a hand moving between two identical targets. Right panel: object movement depicted by a pen moving between two identical targets. 
Example trial of the task presented in the scanner.
Conjunction analysis for human movement: ID2 < ID3 and ID3 < ID4
Mean perceived movement time as a function of target width (W) and (A) index of difficulty (upper panel) and (B) movement amplitude (lower panel). The corresponding linear regression lines and coefficients of determination are also provided.
SPM maps for the human movement > object movement contrast displayed on a rendered brain and on a single-subject template T1, at a combined threshold of maxima (Z > 2.5) and clusters (p < .05) larger than 200 mm³. 
Previous neuroimaging studies support the assumption of a strong link between perception and action, demonstrating that the motor system is involved when others' actions are observed. One question that is still open to debate is which aspects of observed actions engage the motor system. The present study tested whether motor activation corresponds to the difficulty of the observed action, using Fitts's law. This law postulates that the difficulty of any movement (ID) is a function of the distance to the target (A) and the target width (W). In an observation task, the ID of the observed action was manipulated orthogonally to W (by using five different As). The results revealed activity in the primary motor cortex, the supplementary motor area, and the basal ganglia in response to increasing ID levels, but not in response to different levels of A or W. Thus, activation in the motor system during action observation is not driven by perceptual parameters but by the motor difficulty of the observed action.
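A minimal sketch of the ID manipulation described above, assuming the classic Fitts (1954) formulation ID = log2(2A/W); the abstract does not state which ID formula the study used, and the parameter values here are illustrative:

```python
import math

# Fitts's index of difficulty, classic formulation ID = log2(2A / W):
# A = movement amplitude (distance to the target), W = target width.
def index_of_difficulty(amplitude, width):
    return math.log2(2 * amplitude / width)

# Holding W fixed and doubling A steps ID up by exactly one bit, so
# ID varies while W stays constant: the orthogonal manipulation
# described above. Values are illustrative.
width = 1.0
amplitudes = [1.0, 2.0, 4.0, 8.0, 16.0]  # five different As
ids = [index_of_difficulty(a, width) for a in amplitudes]
# ids == [1.0, 2.0, 3.0, 4.0, 5.0]
```

Because ID depends only on the ratio A/W, any set of ID levels can also be produced at other target widths, which is what allows ID and W to be decoupled in the design.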
Lateralized magnetic fields were recorded from 12 subjects using a 151-channel magnetoencephalography (MEG) system to investigate temporal and functional properties of motor activation to the observation of goal-directed hand movements by a virtual actor. Observation of left and right hand movements generated a neuromagnetic lateralized readiness field (LRF) over contralateral motor cortex. The early onset of the LRF and the fact that the evoked component was insensitive to the correctness of the observed action suggest the operation of a fast and automatic form of motor resonance that may precede higher levels of action understanding.
Ideomotor movements may arise in observers while they watch other people's actions. Previous studies have shown that ideomotor movements are guided by both perceptual and intentional characteristics of the actions being observed (perceptual induction and intentional induction, respectively; cf. Knuf, Aschersleben, & Prinz, 2001; de Maeght & Prinz, 2004). In the present study we explore the functional basis of intentional induction. More specifically we raise the issue of whose intentions count for intentional induction: observers' own intentions or observees' (implied) intentions? We studied ideomotor movements in a cooperative and a competitive task setting. In the cooperative setting observers' and observees' intentions were identical, but in the competitive setting they were different. Results indicate that ideomotor movements are guided by the observers' own intentions, not the observees' implied intentions. Our findings suggest that, though observers understand the intentions of others, their ideomotor movements are guided by their own intentions, expressing what they themselves wish to see the other person doing.
Children with high resistance to peer influences differ from their low-resistance counterparts in the degree of functional connectivity in fronto-parietal and prefrontal cortical networks. Here we explored the possibility that the degree of morphological similarities across the same cortical regions also varies as a function of this behavioral trait. Using structural magnetic-resonance (MR) images, we measured cortical thickness in a total of 295 adolescents (12 to 18 years of age). We found that inter-regional correlations in cortical thickness increased with resistance to peer influence (RPI); this was especially the case in female adolescents, in premotor and prefrontal networks. We also observed significant differences between adolescents with high and low RPI scores in their general intelligence and their scores on positive youth development. We suggest that these morphological findings might reflect differences, between adolescents with high vs. low resistance to peer influences, in a repeated and concurrent engagement of these networks in social context.
Body postures provide clear signals about emotional expressions, but so far it is not clear what muscle patterns are associated with specific emotions. This study lays the groundwork for a Body Action Coding System by investigating what combinations of muscles are used for emotional bodily expressions and assessing whether these muscles also automatically respond to the perception of emotional behavior. Surface electromyography of muscles in the arms (biceps and triceps) and shoulders (upper trapezius and deltoids) was measured during both active expression and passive viewing of fearful and angry bodily expressions. The biceps, deltoids, and triceps are recruited strongly for the expression of anger, whereas fear expression predominantly depends on the biceps and the deltoids. During passive viewing of anger, all muscles activated automatically. During fear perception, a clear activation can be seen in the trapezius, deltoid, and triceps muscles, whereas the biceps shows inhibition. In conclusion, this study provides more insight into the perception and expression of emotions in the body.
Social (A) and nonsocial stimuli (B, C) used in the studies. (D) Exemplary depiction of the event structure for congruent responses to the face stimulus. ISI: interstimulus interval. Taken from Schilbach et al., in press.  
Neural correlates of incongruent responses to face stimulus. IFG: inferior frontal gyrus; ACC: anterior cingulate cortex; DMPFC: dorsal medial prefrontal cortex; THA: thalamus; DS: dorsal striatum. Taken from Schilbach et al., 2010.  
This paper proposes an empirical hypothesis that in some cases of social interaction we have an immediate perceptual access to others' minds in the perception of their embodied intentionality. Our point of departure is the phenomenological insight that there is an experiential difference in the perception of embodied intentionality and the perception of non-intentionality. The other's embodied intentionality is perceptually given in a way that is different from the givenness of non-intentionality. We claim that the phenomenological difference in the perception of embodied intentionality and non-intentionality translates into an account of how, in some cases of social cognition, we perceive mental properties in the perception of embodied intentionality. The hypothesis derives support from a host of recent empirical studies in social neuroscience which demonstrate the importance of embodied engagements in understanding other minds. These studies reveal that embodied intersubjective interaction often builds on our ability to understand other minds in an immediate perceptual way not adequately investigated by theory-theory (TT) and simulation theories (ST) of mind-reading. We argue that there is a genuine, nontrivial difference in the informational content of the perception of embodied intentionality and the perception of non-intentionality which leads to a further difference in the way information is processed in the case of perception of embodied intentionality as opposed to the perception of non-intentionality. The full significance of such difference is appreciated only within an account of perception which views perception and action as tightly coupled. Thus, we propose an "action-oriented account of social perception" to develop a neurophilosophical account of the perceptual knowledge of other minds.
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
A forced-choice social foraging method was used to explore how free-ranging rhesus monkeys make inferences about other individuals' goals and intentions. Subjects saw an experimenter perform an action towards one of two potential food sources, then were allowed to approach and choose one of those sources. Results showed that subjects selectively picked the food source targeted by the experimenter's action only when the action was within the monkeys' motor repertoire. Further studies explored the extent to which rhesus attend to the details of the goal as well as the means by which the goal was obtained, with results paralleling those obtained from cellular recordings of macaque mirror neurons. Monkeys' pattern of success and failure supports the hypothesis that motor areas play a functionally significant role in event parsing and action understanding.
The present fMRI study was aimed at assessing the cortical areas active when individuals observe non-object-directed actions (mimed, symbolic, and meaningless), and when they imagine performing those same actions. fMRI signal increases in common between action observation and motor imagery were found in the premotor cortex and in a large region of the inferior parietal lobule. While the premotor cortex activation overlapped that previously found during the observation and imagination of object-directed actions, in the parietal lobe the signal increase was not restricted to the intraparietal sulcus region, known to be active during the observation and imagination of object-directed actions, but extended into the supramarginal and angular gyri. When contrasting motor imagery with the observation of non-object-directed actions, signal increases were found in the mesial frontal and cingulate cortices, the supramarginal gyrus, and the inferior frontal gyrus. The opposite contrast showed activation virtually limited to visual areas. In conclusion, the present data define the common circuit for observing and imagining non-object-directed actions. In addition, they show that the representation of non-object-directed actions includes parietal regions not found to be involved in coding object-directed actions.
Example stimuli used in the experiment. (A) Snapshots taken from a stimulus used in the Ordinary × Mouth condition. (B) Snapshots taken from a stimulus used in the Extraordinary × Ear condition. 
Areas of interest for the eye movement analysis. The blue rectangle depicts the mouth area used for the analysis of the anticipatory looks; the red rectangle depicts the ear area. 
(A) Power spectra of individual infants averaged over conditions and over the central electrodes. (B) Topoplot displaying the power in the frequency band from 7.5 to 8.3 Hz averaged over all conditions. 
Topoplot displaying the difference in power between the Extraordinary and the Ordinary action conditions in the frequency band from 7.5 to 8.3 Hz. The white dots indicate the electrodes that were included in the analysis. 
Frequency of visual anticipations. (A) The percentage of anticipatory looks to the mouth in the stimuli with the target area Mouth for Ordinary (left line) and Extraordinary actions (right line). (B) The percentage of anticipatory looks to the ear in the stimuli with the target area Ear for Ordinary (left line) and Extraordinary actions (right line). 
Already during the first year of life, infants make predictions about actions they observe. To investigate the role of the motor system in predicting the end state of observed actions, 12-month-old infants were shown movies of ordinary and extraordinary object-directed actions. The stimuli displayed a female actor who picked up an everyday object (a cup or a phone) and brought it to either her mouth or her ear. In this way, a similar movement could be ordinary (e.g., cup to mouth) or extraordinary (e.g., phone to mouth) depending on the object used. Infants' EEG and eye movements were recorded. We found significantly stronger motor activation, indicated by stronger desynchronization in the mu-frequency band over fronto-central areas, during observation of extraordinary compared to ordinary actions. This is explained within the computational framework of Kilner, Friston, and Frith (2007), who suggest that the motor system is used to generate predictions about actions we observe. If the observed action deviates from the initially expected path, additional predictions have to be generated, resulting in stronger motor activation during perception of extraordinary actions. In sum, it appears that from early in life, the motor system is involved in making predictions about how an observed action will end.
Examples of the experimental stimuli. Images represented upper and lower limbs performing meaningful and meaningless actions. All left images were obtained by mirroring the corresponding right ones. 
Normalized RTs and accuracy of response means (±MSE) of EBA and vPMc stimulation conditions
Talairach coordinates of stimulated sites corresponding to left and right vPMc and EBA. For each participant, stereotaxic coordinates corresponding to the target areas were obtained by means of the SofTaxic neuronavigation system. 
Raw RTs and accuracy of response means (±MSE) of all experimental conditions
Procedure examples for possible-action, impossible-action, and biomechanical-plausibility discriminations. Repetitive transcranial magnetic stimulation (rTMS) was applied with a delay of 150 ms after sample presentation. On each trial, a 10-Hz dual-pulse train lasting 200 ms was delivered during mask presentation. 
Single-pulse transcranial magnetic stimulation (TMS) studies show that action observation facilitates the onlooker's corticospinal system, supporting the notion of motor mirroring. Repetitive transcranial magnetic stimulation (rTMS) over ventral premotor cortex (vPMc) impairs visual discrimination of body actions. Although studies suggest that the action observation-execution matching system may map only actions that belong to the observer's motor repertoire, we demonstrated comparable motor and premotor facilitation during observation of biomechanically possible as well as impossible actions. It has also been shown that seeing impossible body movements activates the extrastriate body area (EBA). Using event-related rTMS, we sought to determine whether vPMc and EBA are actively involved in the visual discrimination of actions performed through biomechanically possible or impossible kinematics and of their biomechanical plausibility. Stimulation of vPMc impaired discrimination of possible actions while leaving intact the discrimination of biomechanically impossible actions and of biomechanical plausibility. No effect of EBA rTMS on any type of action processing was found. Thus, vPMc is crucial for discriminating the goal of actions that can actually be performed, suggesting that this area is involved in the visual processing of goal-directed actions.
Previous evidence indicates that we understand others' actions not only by perceiving their visual features but also by their sound. This raises the possibility that brain regions responsible for action understanding respond to cues coming from different sensory modalities. Yet no studies, to date, have examined if this extends to olfaction. Here we addressed this issue by using functional magnetic resonance imaging. We searched for brain activity related to the observation of an action executed towards an object that was smelled rather than seen. The results show that temporal, parietal, and frontal areas were activated when individuals observed a hand grasping a smelled object. This activity differed from that evoked during the observation of a mimed grasp. Furthermore, superadditive activity was revealed when the action target-object was both seen and smelled. Together these findings indicate the influence of olfaction on action understanding and its contribution to multimodal action representations.
Trial description for sequentially presented task elements for the virtual (A) and the real (B) modality. Each trial started with the first-person character running around a corner for 4 s. The character was then confronted with one of three possible situations according to the stimulus categories: (1) a wall or place with no person visible (low-level baseline), (2) an armed friend (nonviolent), or (3) an armed enemy (violent). The scene then froze, and participants in the experiment were told to press a button with their right index finger to allow the scene to continue. Depending on the stimulus category, the scene appropriately terminated with the character (1) approaching the displayed wall or place (low-level baseline), (2) approaching the friend and resting next to him (nonviolent), or (3) shooting the enemy (violent). The total length of each trial, from the beginning of a scene to the beginning of the fixation dot, was approximately 7 s. 
Stimuli of different scenario types were presented in a pseudorandomized nonstationary probabilistic sequence. There were ten different scenarios, each presented four times over the experimental run, which comprised a total of 240 experimental stimuli (ten different scenarios by three categories and two modalities, presented four times). The stimuli were presented in two blocks, each containing stimuli of only one modality (120 realistic scenes or 120 virtual scenes). The presentation order of runs was balanced across study participants. As one of the game characters wore a face mask, the attribution of who was the "friend" or the "enemy" was counterbalanced across participants to control for face-specific effects. Trial elements are illustrated on the left side. 
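As a check on the trial counts, the factorial design described above (ten scenarios × three categories × two modalities, each combination presented four times, blocked by modality) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual stimulus code; all function and variable names are invented for the example:

```python
import itertools
import random

def build_stimulus_blocks(n_scenarios=10,
                          categories=("baseline", "nonviolent", "violent"),
                          modalities=("real", "virtual"),
                          repetitions=4,
                          seed=0):
    """Enumerate every scenario/category/modality combination, repeat each
    the stated number of times, and split the result into one shuffled
    block per modality (a simple stand-in for the pseudorandomized
    sequence described in the text)."""
    combos = list(itertools.product(range(n_scenarios), categories, modalities))
    trials = combos * repetitions
    rng = random.Random(seed)
    blocks = {}
    for modality in modalities:
        block = [t for t in trials if t[2] == modality]
        rng.shuffle(block)  # pseudorandom order within the modality block
        blocks[modality] = block
    return blocks

blocks = build_stimulus_blocks()
total = sum(len(b) for b in blocks.values())
print(total)                    # 240 trials in all
print(len(blocks["real"]))      # 120 per modality block
```

The counts confirm the arithmetic in the caption: 10 × 3 × 2 × 4 = 240 stimuli, 120 per modality block.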
Glass-brain views (left) and rendered statistics (right) for violent versus nonviolent scenario contrasts (p < .001, k ≥ 10 voxel threshold). The upper panel shows activation patterns for the virtual modality and the lower panel for the real modality. MNI-to-Talairach transformed coordinates are given in Table 1. 
Glass-brain views of single-subject fMRI analyses displaying the results of violent vs. nonviolent scenario contrasts (p < .001, k ≥ 10 voxel threshold).
Studies investigating the effects of violent computer and video game playing have resulted in heterogeneous outcomes. It has been assumed that people who play these games intensively have a decreased ability to differentiate between virtuality and reality. fMRI data of a group of young males with (gamers) and without (controls) a history of long-term violent computer game playing experience were obtained during the presentation of computer game and realistic video sequences. In gamers the processing of real violence in contrast to nonviolence produced activation clusters in right inferior frontal, left lingual, and superior temporal brain regions. Virtual violence activated a network comprising bilateral inferior frontal, occipital, postcentral, right middle temporal, and left fusiform regions. Control participants showed extended left frontal, insula, and superior frontal activations during the processing of real, and posterior activations during the processing of virtual, violent scenarios. The data suggest that the ability to differentiate automatically between real and virtual violence has not been diminished by a long-term history of violent video game play, nor have gamers' neural responses to real violence in particular been subject to desensitization processes. However, analyses of individual data indicated that group-level analyses reflect only a small part of the neural networks actually involved in individual participants, suggesting that consideration of individual learning history is necessary in this discussion.
Frames extracted from the four video clips which served as stimuli for the present experiment. Specifically, for all video clips, the onset of the reach-to-grasp movement and the final phase of the action sequence are represented.
A large body of research reports that perceiving body movements of other people activates motor representations in the observer's brain. This automatic resonance mechanism appears to be imitative in nature. However, action observation does not inevitably lead to symmetrical motor facilitation: Mirroring the observed movement might be disadvantageous for successfully performing joint actions. What remains unknown is how we are to resolve the possible conflict between the automatic tendency to "mirror" and the need to perform different context-related complementary actions. By using single-pulse transcranial magnetic stimulation, we found that observation of a double-step action characterized by an implicit complementary request engendered a shift from symmetrical simulation to reciprocity in the participants' corticospinal activity. Accordingly, differential motor facilitation was revealed for the snapshots evoking imitative and complementary gestures despite the fact that the observed type of grasp was identical. Control conditions in which participants observed the same action sequence but in a context not implying a complementary request were included as well. The results provide compelling evidence that when an observed action calls for a nonidentical complementary action, an interplay between the automatic tendency to resonate with what is observed and to implicitly prepare for the complementary action does emerge. In other words, implicit complementary requests might have the ability to draw attention to specific features of the context affording nonidentical responses.
The Diagnostic > Irrelevant contrast under spontaneous and intentional instructions. Whole-brain activation thresholded at p < .005 (uncorrected) with at least 10 voxels. Circles indicate ROIs with significant activation after FDR correction. The overlap was created using MRIcro, showing selected areas under intentional (red) or spontaneous (green) instructions at the same whole-brain threshold, and their overlap (yellow). % Signal change was based on 15-mm spherical ROIs created with MarsBaR. Significant changes (p < .05) are indicated by an asterisk above the relevant conditions; gray bar = Irrelevant, black bar = Diagnostic, S = Spontaneous, I = Intentional.  
Pearson correlations between memory (proportion correct) for trait-diagnostic words in the sentence completion task and activation (parameter estimates) in the right TPJ and PC.  
Peak voxel and number of voxels in the regions of interest from the Diagnostic > Irrelevant contrast for the first and last sentence under Intentional instructions (p < .01, uncorrected) 
This fMRI study analyzes inferences about other persons' traits, whereby half of the participants were given spontaneous ("read") instructions while the other half were given intentional ("infer the person's trait") instructions. Several sentences described the behavior of a target person from which a strong trait could be inferred (trait diagnostic) or not (trait nondiagnostic). A direct contrast between spontaneous and intentional instructions revealed no significant differences, indicating that the same social mentalizing network was recruited. There was, however, a difference in which brain areas passed the significance threshold, suggesting that this common network was recruited to a different degree. Specifically, spontaneous inferences significantly recruited only core mentalizing areas, including the temporo-parietal junction and medial prefrontal cortex, whereas intentional inferences additionally recruited other brain areas, including the (pre)cuneus, superior temporal sulcus, temporal poles, and parts of the premotor and parietal cortex. These results suggest that intentional instructions invite observers to think more about the material they read, and to consider it in many ways besides its social impact. Future research on the neurological underpinnings of trait inference might profit from the use of spontaneous instructions to get purer results that involve only the core brain areas in social judgment.
Playing alone (conditions C and D) 
Playing in co-operation (conditions A and B)
Functional imaging studies have identified a network of brain regions associated with theory of mind (ToM): the attribution of mental states to other people. Similar regions have also been observed in studies where people play games that involve either competing or co-operating with another person. Such games are thought to place implicit demands on ToM processes. Co-operation with others has also been shown to elicit brain responses in areas associated with the processing of reward, suggesting that co-operation is an intrinsically rewarding process. In this study, we used a factorial design to assess the interaction between co-operation and the availability of financial rewards in a guessing game. Twelve subjects were scanned with functional magnetic resonance imaging (fMRI) while they performed a guessing game with and without co-operation, and under both these conditions with and without financial reward. The main effect of co-operation was associated with neural responses in theory of mind regions, while the main effect of financial reward was associated with neural responses in reward regions. Critically, the response to reward in the medial orbitofrontal cortex was significantly enhanced when subjects were co-operating. This suggests that rewards achieved through co-operation are more valuable than rewards achieved alone.
The STEP1 + STEP2 > REST contrast on the left shows BOLD activations in the auditory cortices, the posterior thalamus, and the bilateral STSp regions (p uncorrected < .001). On the right, the contrast NOISE1 + NOISE2 > REST shows activations in the auditory cortices, the left inferior frontal lobe, the posterior thalamus, and the left posterior insula. All statistical maps are overlaid on the group-average anatomical image; T values are presented in the color bar. 
The BOLD activations are shown on the left, superimposed on the group-average anatomical image, and the respective percentage-of-signal-change histograms are shown on the right. The contrast in which each brain region was obtained is marked with an asterisk in its histogram (p uncorrected < .005). Blue = STEP1, light blue = NOISE1, red = STEP2, pink = NOISE2. The contrast STEP1 > NOISE1 revealed activations in the left STSp and left amygdala, and the contrast STEP2 > NOISE2 in the subcallosal gyrus, right temporal pole, and right amygdala. The color bar presents the T values. 
Human footsteps carry a vast amount of social information, which is often unconsciously noted. Using functional magnetic resonance imaging, we analyzed brain networks activated by footstep sounds of one or two persons walking. Listening to two persons walking together activated brain areas previously associated with affective states and social interaction, such as the subcallosal gyrus bilaterally, the right temporal pole, and the right amygdala. These areas seem to be involved in the analysis of persons' identity and complex social stimuli on the basis of auditory cues. Single footsteps activated only the biological motion area in the posterior STS region. Thus, hearing two persons walking together involved a more widespread brain network than did hearing footsteps from a single person.