
Quality of Professional Players' Poker Hands Is Perceived Accurately From Arm Motions

Psychological Science
24(11), 2335–2338
© The Author(s) 2013
Reprints and permissions:
DOI: 10.1177/0956797613487384
Short Report
In the card game of poker, players attempt to disguise
cues to the quality of their hand, either by concealment
(e.g., adopting the well-known, expressionless “poker
face”) or by deception. Recent work, however, demon-
strates that motor actions can sometimes betray inten-
tions. The same action can have different movement
dynamics depending on the underlying intention
(Becchio, Sartori, & Castiello, 2010), and these subtle dif-
ferences can be decoded by observers (Becchio, Manera,
Sartori, Cavallo, & Castiello, 2012; Sartori, Becchio, &
Castiello, 2011). Thus, professional poker players’ inten-
tions may be visible from their actions while moving
poker chips to place bets. Even though professional play-
ers may be able to regulate their facial expressions, their
motor actions could betray the quality of their poker
hand. In three studies, we tested this hypothesis by exam-
ining observers’ perceptions of poker-hand quality. We
also examined individual differences in sensitivity to non-
verbal behavior and potential diagnostic motor behaviors
as cues to hand quality.
Study 1
Twenty brief silent video clips (mean duration = 1.60 s,
SD = 0.68 s) of professional poker players placing a bet
were extracted from randomly sampled videos of the
2009 World Series of Poker (WSOP) tournament. Three
versions of each clip were produced: Unaltered clips
showed players’ bodies from the table up, face-only
clips showed players from the chest up, and arms-only
clips showed only players’ arms pushing chips into the
table. Each player’s objective likelihood of winning dur-
ing the bet was known (WSOP displays these statistics
on-screen; however, we kept this information from par-
ticipants by obscuring part of the screen). The number of
chips wagered was not confounded with the likelihood
of winning (i.e., chip values varied markedly; no participants
were poker experts or knew chip values; see the
Supplemental Material available online for information
about the game of poker, WSOP, and further method-
ological details).
Seventy-eight undergraduates were divided into three
groups based on the type of clip they were shown. Each
group viewed the 20 clips in a random order and judged
the quality of each poker hand (1 = very bad, 7 = very
good). Next, participants rated their overall confidence in
their judgments (1 = not at all confident, 7 = very confi-
dent) and their experience with poker (1 = none, 7 = a
lot). Finally, they completed a measure of nonverbal sen-
sitivity (Bänziger, Scherer, Hall, & Rosenthal, 2011).
Data were analyzed using multilevel linear models
with quality ratings of the hand depicted in each clip,
nested within participants, predicting objective likeli-
hoods of winning. Specifically, the model included par-
ticipants’ quality ratings at Level 1, a set of dummy codes
representing condition at Level 2 (the face-only condition
was the reference group because our primary hypothesis
concerned a comparison between judgments based on
facial expressions vs. arm movements or vs. upper-body
movements), and all interactions predicting objective
likelihoods of winning. This analysis revealed the pre-
dicted interaction between the arms-only (vs. face-only)
condition and quality ratings, b = 1.68, t(1554) = 2.88, p =
.004, such that the arms-only group’s ratings significantly
predicted likelihoods of winning, b = 0.94, t(1554) = 2.26,
p = .02, whereas the face-only group’s ratings marginally
inversely predicted likelihoods of winning, b = −0.74,
t(1554) = −1.81, p = .07. The interaction between the
Corresponding Authors:
Michael L. Slepian, Tufts University, Department of Psychology, 490 Boston Ave., Medford, MA 02155
Nalini Ambady, Stanford University, Department of Psychology, 450 Serra Mall, Stanford, CA 94305

Michael L. Slepian (Tufts University), Steven G. Young (Fairleigh Dickinson University), Abraham M. Rutchick (California State University, Northridge), and Nalini Ambady (Stanford University)

Received 11/1/12; Revision accepted 4/2/13

Downloaded from pss.sagepub.com at TUFTS UNIV on November 9, 2013
upper-body (vs. face-only) condition and quality ratings
was not significant, b = 0.95, t(1554) = 1.65, p = .10.
Reconducting these analyses with the individual-difference
measures entered as predictors revealed no two- or
three-way interactions, ps > .07.¹
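The multilevel analysis described above can be sketched in Python with statsmodels' MixedLM. This is an illustrative reconstruction on simulated stand-in data, not the authors' code: the variable names, the simulation, and the effect sizes are all assumptions; only the model structure (quality ratings at Level 1, condition dummies with face-only as the reference group at Level 2, their interactions predicting objective likelihood of winning, with ratings nested within participants) follows the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: 26 participants per condition x 20 clips.
rows = []
for cond in ["face", "arms", "upper"]:
    for p in range(26):
        pid = f"{cond}_{p}"
        for clip in range(20):
            win = rng.uniform()  # objective likelihood of winning
            # In this toy simulation, only arms-only ratings track hand quality.
            signal = (win - 0.5) if cond == "arms" else 0.0
            rating = float(np.clip(4 + 3 * signal + rng.normal(0, 1), 1, 7))
            rows.append((pid, cond, rating, win))
df = pd.DataFrame(rows, columns=["pid", "cond", "rating", "win"])

# Quality ratings (Level 1) and condition dummies (Level 2, face-only as the
# reference group), plus their interactions, predict the objective likelihood
# of winning; random intercepts model the nesting of clips within participants.
model = smf.mixedlm("win ~ rating * C(cond, Treatment('face'))",
                    data=df, groups=df["pid"])
fit = model.fit()
print(fit.summary())
```

A positive Rating × Arms-Only interaction coefficient in this setup mirrors the reported result: arms-only ratings predict winning likelihood, whereas face-only ratings do not.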
We also examined participants’ accuracy scores, which
were computed by correlating participants’ poker-hand
ratings with players’ objective likelihoods of winning. If
these scores were significantly different from zero, per-
formance was different from chance (Table 1). Correlations
between these accuracy scores and participants’ nonver-
bal sensitivity, poker experience, and overall confidence
in their judgments were separately explored (Table 1).
These analyses also showed that judgments in the face-
only group were marginally worse than chance, which
suggests that players exhibited deceptive facial cues.
When isolating arm movements, however, analyses
showed that untrained participants judged the quality of
poker hands better than chance, which suggests that per-
ceptions of arm movements exert an independent influ-
ence on judgments of poker-hand quality. Judgments
made when viewing the players’ upper body (arm
motions plus the face) were at chance. Additionally,
when watching arm motions only, participants’ nonver-
bal sensitivity and poker experience were positively cor-
related with their accuracy.
Study 2
In Study 2, we replicated the arms-only accuracy finding
from Study 1 with a new set of silent video clips to ensure
the generalizability of the effect. Twenty-two new, randomly
sampled, chest-down close-ups of players placing
bets during the 2009 WSOP were extracted from video
clips as in Study 1 (mean duration = 1.54 s, SD = 0.74 s).
Again, the number of chips wagered was not confounded
with the likelihood of winning (see the Supplemental
Material). Thirty undergraduates judged poker-hand
quality from these new clips. As in the previous study,
data were analyzed with a multilevel model. Results rep-
licated those of Study 1. When participants viewed arm
motions, their judgments again predicted the objective
quality of professional poker players’ hands, b = 1.46,
t(558) = 2.70, p = .004. Participants’ performance was
greater than chance when they judged poker-hand qual-
ity from viewing players’ arm motions (Table 1).
Study 3
Players who have strong poker hands should be more
confident than players who have weak hands, and per-
haps this confidence is expressed in motor actions. To
the extent that participants’ poker-hand quality ratings
were influenced by player confidence, having partici-
pants judge player confidence could yield similar results.
Previous work demonstrates that anxiety disrupts smooth-
ness of body movement (Beuter & Duda, 1985), which
suggests that confidence (i.e., lack of anxiety) might be
revealed via smoother actions. Therefore, in Study 3, we
had participants in one condition judge player confi-
dence, and in a second condition, they judged how
smoothly the chips were pushed into the center of the
table. If greater confidence in players relates to smoother
motor action, smoothness judgments might also predict
likelihoods of winning.
Forty undergraduates viewed the same randomly
ordered videos from Study 2, judging player confidence
(“How confident does this person seem?”) or action
smoothness (“How smooth is this person’s movement?”;
1 = not at all, 7 = very). They subsequently completed
the measure of nonverbal sensitivity used in Study 1. We
ran a multilevel model, including participants’ quality
Table 1. Mean Accuracy in All Conditions and Correlations Between Accuracy and Individual-Difference Measures

Study and condition         Mean accuracy      Nonverbal sensitivity   Poker experience   Confidence in judgments
Study 1
  Upper body                .02 [–.06, .09]    .14                     .14                .19
  Face only                 –.07 [–.15, .01]   .17                     –.32               –.26
  Arms only                 .07 [.01, .14]     .40*                    .39*               .26
Study 2                     .15 [.11, .19]
Study 3
  Player confidence         .15 [.07, .24]     .46*
  Smoothness of movement    .29 [.22, .36]     .14

Note: Accuracy scores are the correlation of participants’ ratings of the quality of poker hands with players’ objective likelihoods of winning. Values in brackets are 95% confidence intervals (created using Fisher’s transformed zs and then converted back to r values). If the 95% confidence interval includes zero, accuracy is at chance.
*p < .05.
ratings at Level 1, a dummy code representing judgment
condition (with the player-confidence condition as the
reference group) at Level 2, and the interaction predict-
ing objective likelihoods of winning. Analyses revealed
a main effect of participants’ quality ratings, b = 3.33,
t(855) = 4.17, p < .001, but no significant interaction of
ratings with judgment condition, b = 0.54, t(855) = 0.58,
p = .56. Reconducting this analysis with the addition of
participants’ nonverbal-sensitivity scores and all interac-
tions did not reveal any significant main effects of non-
verbal sensitivity or interactions with nonverbal sensitivity
and other variables, ps > .64. Thus, both player confidence
and smoothness judgments significantly predicted
likelihoods of winning, which suggests that movement
smoothness might be a valid cue for assessing poker-
hand quality. It is unknown, however, how participants
interpreted “smoothness” or whether the players’ move-
ments that participants rated as smooth were truly
smoother than other players’ movements. Other physical
factors, such as speed, likely played a role (see Patel,
Fleming, & Kilner, 2012).
As in Study 1, we also explored correlations between
participants’ nonverbal sensitivity and accuracy scores.
Participants’ nonverbal sensitivity significantly correlated
with their accuracy as indexed by ratings of players’ con-
fidence, but not with their accuracy as indexed by ratings
of players’ smoothness of movement (Table 1), which
suggests the possibility that individual differences in non-
verbal sensitivity can be overcome when participants are
explicitly directed to attend to potentially diagnostic
motor cues.²
In three studies with two unique video sets, observers
naive to the quality of professional players’ poker hands
could judge, better than chance, poker-hand quality from
merely observing players’ arm actions while placing bets.
The accuracy of participants’ judgments when viewing
players’ upper bodies was no different from chance, and
when observing players’ faces, participants’ accuracy was
marginally worse than chance, which suggests that players’
facial cues were deceptive. Arm motions might provide a
more diagnostic cue to poker-hand quality than other
nonverbal behaviors. Additionally, correlations between
nonverbal sensitivity and accuracy from viewing arm
motions suggest a positive relationship between the two
(see Table 1), and movement smoothness might be a
valid cue for assessing poker-hand quality, although
more research is needed to document the moderators of
the present effects.
These findings are notable because the players in the
stimulus clips were highly expert professionals compet-
ing in the high-stakes WSOP tournament. Additionally,
judges were untrained observers (cf. Ekman & O’Sullivan,
1991) watching clips on average less than 2 s long (see
Ambady & Rosenthal, 1992). Nevertheless, professional
poker players’ motor actions were revealing, enabling
perceivers to decode poker-hand quality from minimal
visual information. Even in very restrictive settings, motor
actions can yield important diagnostic information.
Author Contributions
M. L. Slepian, S. G. Young, A. M. Rutchick, and N. Ambady
conceived and designed the studies. M. L. Slepian, S. G. Young,
and A. M. Rutchick conducted the studies and analyzed the
data. All authors wrote the manuscript.
Acknowledgments
We thank Saheela Mehrotra for assistance in conducting the experiments and Aneeta Rattan, Takuya Sawaoka, and Jessica Salerno for helpful comments on an earlier version of this manuscript.
Declaration of Conflicting Interests
The authors declared that they had no conflicts of interest with
respect to their authorship or the publication of this article.
Funding
This work was supported in part by National Science Foundation
Grant BCS-0435547 to N. Ambady and by a National Science
Foundation Graduate Research Fellowship to M. L. Slepian.
Supplemental Material
Additional supporting information may be found at http://pss
Notes
1. The Quality Rating × Upper-Body Condition (vs. Face-Only
Condition) × Participant Confidence interaction was significant,
b = 0.71, t(1476) = 2.19, p = .03, but subsequent two-way
interactions were nonsignificant, ps > .07, which makes it dif-
ficult to interpret the three-way interaction.
2. Additionally, smoothness judgments yielded greater accuracy
than confidence judgments. This is an example of when
judgments in a “micro” domain (physical properties of action)
may be a more diagnostic cue than judgments in a “molar”
domain (the meaning behind an action), whereas the reverse
is typically the case (see Weisbuch, Slepian, Clarke, Ambady,
& Veenstra-Van der Weele, 2010). Such conclusions about
greater accuracy, or higher correlations, in one condition than
in the other must be made with caution, however, because
neither nonverbal sensitivity nor judgment condition significantly
interacted with quality ratings in predicting objective
likelihoods of winning.
References
Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive
behavior as predictors of interpersonal consequences: A
meta-analysis. Psychological Bulletin, 111, 256–274.
Bänziger, T., Scherer, K. R., Hall, J. A., & Rosenthal, R. (2011).
Introducing the MiniPONS: A short multichannel version
of the Profile of Nonverbal Sensitivity (PONS). Journal of
Nonverbal Behavior, 35, 189–204.
Becchio, C., Manera, V., Sartori, L., Cavallo, A., & Castiello,
U. (2012). Grasping intentions: From thought experiments
to empirical evidence. Frontiers in Human Neuroscience, 6, 117.
Becchio, C., Sartori, L., & Castiello, U. (2010). Toward you: The
social side of actions. Current Directions in Psychological
Science, 19, 183–188.
Beuter, A., & Duda, J. L. (1985). Sport psychology analysis of
the arousal/motor performance relationship in children
using movement kinematics. Journal of Sport & Exercise
Psychology, 7, 229–243.
Ekman, P., & O’Sullivan, M. (1991). Who can catch a liar?
American Psychologist, 46, 913–920.
Patel, D., Fleming, S. M., & Kilner, J. M. (2012). Inferring subjec-
tive states through the observation of actions. Proceedings
of the Royal Society B: Biological Sciences, 279, 4853–4860.
Sartori, L., Becchio, C., & Castiello, U. (2011). Cues to inten-
tion: The role of movement information. Cognition, 119,
Weisbuch, M., Slepian, M. L., Clarke, A., Ambady, N., &
Veenstra-Van der Weele, J. (2010). Behavioral stability
across time and situations: Nonverbal versus verbal consis-
tency. Journal of Nonverbal Behavior, 34, 43–56.
... When card games are concerned, digital or tangible, players often seek to extract hidden information from their opponents by analysing social signals such as speech, body motion and facial expressions [17]. Slepian et al. [26] discuss that poker players' motor actions oftentimes betray their intentions, possibly emitting unintended signals to opponent players about their hand quality. The same principle can apply to a digital card game like Heartstone; offline Hearthstone competitions often allow eye contact between players, despite it being a digital game (see Figure 1). ...
... OpenFace provides per-frame estimations of presence and intensity for several facial Action Units (AU), as described in the Facial Action Coding System (FACS) [13]. In particular, OpenFace can detect AUs 1,2,4,5,6,7,9,10,12,14,15,17,20,23,25,26,28,and 45. To maximise the robustness of our collected dataset, we discarded all video frames where OpenFace's confidence of AU estimation was below 98%. ...
... For example, Phil Hellmuth has a record 16 World Series of Poker in-person tournament wins, and might succeed at this format for example due to his skill at reading other players' body language (Slepian et al. 2013), a skill which poker theorists emphasize for in-person environments (Caro 2003). Hellmuth has been quoted as summarizing this skill as follows, 'I seem to look right into people's souls sometimes. ...
Full-text available
Most gamblers lose money, and this means that a behavioral dependence to gambling can cause harm. However, some professional gamblers win consistently, and there is little academic literature on their psychology and how they differ from disordered gamblers. To contribute to this understudied area, we qualitatively analyzed interviews with 19 elite online professional poker players, by examining factors from the disordered gambling and decision-making literatures. Like disordered gamblers, participants displayed aspects of a behavioral dependence to gambling, but contrastingly did not generally experience harm. Other contrasts included their rational approach to statistical thinking, a general self-reported tendency to not be impulsive, and their social connections with other experts. One factor that did not yield clear contrasting results was whether or not they experienced early big wins. Parallels with the decision-making literature included their assessment of decision quality based on expected value rather than realized outcomes, their reluctance to take risks outside of their 'circle of competence,' and their 'active open-minded' thinking style. This study contributes to gambling psychology via an in-depth exploration of an understudied group. ARTICLE HISTORY
... In support of this notion, human perceivers can use subtle differences in movement kinematics to predict intention, 9,10 discern deception, 33 and even infer the value of a poker hand from subtle variations in movement kinematics. 34 This suggests that human perceivers are sensitive to information encoded in movement kinematics. 6 However, further work is needed to explore the intriguing possibility of whether this sensitivity could extend to understanding motivational strategy and choice information from movement kinematics. ...
Full-text available
Decisions, including social decisions, are ultimately expressed through actions. However, very little is known about the kinematics of social decisions, and whether movements might reveal important aspects of social decision-making. We addressed this question by developing a motor version of a widely used behavioral economic game - the Ultimatum Game - and using a multivariate kinematic decoding approach to map parameters of social decisions to the single-trial kinematics of individual responders. Using this approach, we demonstrated that movement contains predictive information about both the fairness of a proposed offer and the choice to either accept or reject that offer. This information is expressed in personalized kinematic patterns that are consistent within a given responder, but that vary from one responder to another. These results provide insights on the relationship between decision-making and sensorimotor control, as they suggest that hand kinematics can reveal hidden parameters of complex, social interactive, choice.
... This suggests that people who hold growth mindsets are often motivated to engage with someone else's bias, or issues of bias more generally. Considering these findings alongside a separate body of social cognition research showing that people are surprisingly good at spontaneously reading others' intentions from their actions (Becchio et al., 2012;Slepian et al., 2013;Uleman et al., 1996; also see Woods & Ruscher, 2021), we hypothesized that confrontation may indicate to perceivers that a person who speaks up holds a growth, rather than fixed, mindset about prejudice and bias. In other words, when someone speaks up to confront a biased statement, observers imbue their action with an assumed intention of changing the biased actor's beliefs and/or their behavior. ...
Full-text available
We report the first investigation of whether observers draw information about mindsets from behavior, specifically prejudice confrontation. We tested two questions across 10 studies (N = 3,168). First, would people who observe someone confront a biased comment (vs. remain silent) see them as endorsing more growth (vs. fixed) mindsets about prejudice and bias? If so, would the growth mindset perceptions that arise from confrontation (vs. remaining silent) attenuate the backlash that observers exhibit against confronters? We investigated these questions using scenarios (Studies 1, 2a-b, 4, 5a-d), naturalistic confrontations of national, race, and gender stereotypes reported retrospectively (Study 3), and an in-person laboratory experiment of actual confrontations of racial bias (Study 6). Correlational and experimental methods yielded support for our core hypotheses: People spontaneously imbue someone who confronts a biased comment with more growth mindset beliefs about prejudice and bias (Studies 1, 2a-b, 4, 6), regardless of whether participants observe the confrontation (Studies 1, 2a-b, 5a-d) or are being confronted themselves (Studies 2a-4, 6). The growth mindset perceptions arising from these confrontations suppress backlash, assessed by classic interpersonal perceptions (Studies 4-5) and judgments of interpersonal warmth and willingness to interact again in the future (Study 6), both when the confronter was a target of the biased behavior (Studies 1-5), and when they were an ally (Study 6), in both correlational studies (Study 3-4) and when growth mindset (about personality, Study 5; about prejudice, Study 6) was manipulated, confirming causality. We discuss implications for the study of mindsets, confrontation, and intergroup relations.
... This suggests that visual processing develops to optimize the information provided by both hands and faces (Fausey et al., 2016). Finally, and intriguingly, Slepian, Young, Rutchick, and Ambady (2013) showed that observers are able to gauge the quality of a professional poker player's poker-hand from their hand and arm movements, while facial cues are deceptive. ...
Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically, however, emotion recognition may depend on different mechanisms; certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. Statement of relevance Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. 
Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
... Moreover, by decomposing the component process of intention reading, our approach could be useful for identifying targets for intervention. There is evidence that TD observers can be explicitly guided to attend to potentially diagnostic features in visual kinematics (23). Based on the findings of the current study, a promising direction will be to investigate whether tutoring (either explicit or implicit) can promote alignment in observers with autism. ...
Full-text available
Significance A major challenge in studying intention reading is high motor variability. Analyses conducted across trials provide insights into what happens on average; however, they may obscure how individual observers read intention information in individual movements. We combined motion tracking, psychophysics, and computational analyses to examine intention reading in autism spectrum disorders (ASDs) with single-trial resolution. Results revealed that a sizeable fraction of ASD observers can identify intention-informative variations in ASD (but not in typically developing) movement kinematics, but they are nonetheless unable to extract the encoded intention information. This approach not only enhances our basic understanding of mind reading in ASD but also provides potential avenues for the rational design of training procedures to improve the reading of others’ actions.
... Broadcasting metacognitive representations to other agents for suprapersonal control involves verbal and nonverbal communication. For nonverbal communication, people automatically produce signals such as postures, action kinematics, gestures, facial expressions, and vocal qualities that convey confidence [70,71]. For example, in many English-speaking subcultures, upright posture, a serious facial expression, and vocal depth communicate assurance. ...
Full-text available
Metacognition – the ability to represent, monitor and control ongoing cognitive processes – helps us perform many tasks, both when acting alone and when working with others. While metacognition is adaptive, and found in other animals, we should not assume that all human forms of metacognition are gene-based adaptations. Instead, some forms may have a social origin, including the discrimination, interpretation, and broadcasting of metacognitive representations. There is evidence that each of these abilities depends on cultural learning and therefore that cultural selection might shape human metacognition. The cultural origins hypothesis is a plausible and testable alternative that directs us towards a substantial new programme of research.
In order to inform the debate whether cortical areas related to action observation provide a pragmatic or a semantic representation of goal-directed actions, we performed 2 functional magnetic resonance imaging (fMRI) experiments in humans. The first experiment, involving observation of aimless arm movements, resulted in activation of most of the components known to support action execution and action observation. Given the absence of a target/goal in this experiment and the activation of parieto-premotor cortical areas, which were associated in the past with direction, amplitude, and velocity of movement of biological effectors, our findings suggest that during action observation we could be monitoring movement kinematics. With the second, double dissociation fMRI experiment, we revealed the components of the observation-related cortical network affected by 1) actions that have the same target/goal but different reaching and grasping kinematics and 2) actions that have very similar kinematics but different targets/goals. We found that certain areas related to action observation, including the mirror neuron ones, are informed about movement kinematics and/or target identity, hence providing a pragmatic rather than a semantic representation of goal-directed actions. Overall, our findings support a process-driven simulation-like mechanism of action understanding, in agreement with the theory of motor cognition, and question motor theories of action concept processing.
People automatically generate first impressions from others' faces, even with limited time and information. Most research on social face evaluation focuses on static morphological features that are embedded "in the face" (e.g., overall average of facial features, masculinity/femininity, cues related to positivity/negativity, etc.). Here, we offer the first investigation of how variability in facial emotion affects social evaluations. Participants evaluated targets that, over time, displayed either high-variability or low-variability distributions of positive (happy) and/or negative (angry/fearful/sad) facial expressions, despite the overall averages of those facial features always being the same across conditions. We found that high-variability led to consistently positive perceptions of authenticity, and thereby, judgments of perceived happiness, trustworthiness, leadership, and team-member desirability. We found these effects were based specifically in variability in emotional displays (not intensity of emotion), and specifically increased the positivity of social judgments (not their extremity). Overall, people do not merely average or summarize over facial expressions to arrive at a judgment, but instead also draw inferences from the variability of those expressions.
Full-text available
Estimating another person's subjective confidence is crucial for social interaction, but how this inference is achieved is unknown. Previous research has demonstrated that the speed at which people make decisions is correlated with their confidence in their decision. Here, we show that (i) subjects are able to infer the subjective confidence of another person simply through the observation of their actions and (ii) this inference is dependent upon the performance of each subject when executing the action. Crucially, the latter result supports a model in which motor simulation of an observed action mediates the successful understanding of other minds. We conclude that kinematic understanding allows access to the higher-order cognitive processes of others, and that this access plays a central role in social interactions.
Full-text available
Skepticism has been expressed concerning the possibility to understand others' intentions by simply observing their movements: since a number of different intentions may have produced a particular action, motor information-it has been argued-might be sufficient to understand what an agent is doing, but not her remote goal in performing that action. Here we challenge this conclusion by showing that in the absence of contextual information, intentions can be inferred from body movement. Based on recent empirical findings, we shall contend that: (1) intentions translate into differential kinematic patterns; (2) observers are especially attuned to kinematic information and can use early differences in visual kinematics to anticipate the intention of an agent in performing a given action; (3) during interacting activities, predictions about the future course of others' actions tune online action planning; (4) motor activation during action observation subtends a complementary understanding of what the other is doing. These findings demonstrate that intention understanding is deeply rooted in social interaction: by simply observing others' movements, we might know what they have in mind to do and how we should act in response.
Humans spend most of their time interacting with other people. It is the motor organization subtending these social interactions that forms the main theme of this article. We review recent experimental studies testing whether it is possible to differentiate the kinematics of an action performed by an agent acting in isolation from the kinematics of the very same action performed within a social context. The results indicate that social context shapes action planning and that in the context of a social interaction, flexible online adjustments take place between partners. These observations provide novel insights on the social dimension of motor planning and control.
Body movement provides a rich source of cues about other people's goals and intentions. In the present research, we investigate how well people can distinguish between different social intentions on the basis of movement information. Participants observed a model reaching toward and grasping a wooden block with the intent to cooperate with a partner, compete against an opponent, or perform an individual action. In Experiment 1, a temporal occlusion procedure was used to determine whether advance information gained during the viewing of the initial phase of an action allowed observers to discriminate among movements performed with different intentions. In Experiment 2, we examined what kind of cues observers relied upon to discriminate intentions by masking selected spatial areas of the model (i.e., the arm or the face) while maintaining the same temporal occlusion as in Experiment 1. Results revealed that observers could readily judge whether the object was grasped with the intent to cooperate, compete, or perform an individual action. Seeing the arm was better than seeing the face for discriminating individual movements performed at different speeds (natural-speed vs. fast-speed individual movements). By contrast, seeing the face was better than seeing the arm for discriminating social from individual movements performed at a comparable speed (cooperative vs. natural-speed individual movements, competitive vs. fast-speed individual movements). These results demonstrate that observers are attuned to advance movement information from different cues and that they can use this kind of information to anticipate the future course of an action.
Behavioral consistency has been at the center of debates regarding the stability of personality. We argue that people are consistent but that such consistency is best observed in nonverbal behavior. In Study 1, participants' verbal and nonverbal behaviors were observed in a mock interview and then in an informal interaction. In Study 2, medical students' verbal and nonverbal behaviors were observed during first- and third-year clinical skills evaluation. Nonverbal behavior exhibited consistency across context and time (a duration of 2 years) whereas verbal behavior did not. Discussion focuses on implications for theories of personality and nonverbal behavior.
The purpose of this interdisciplinary study was to assess the impact of arousal on motor performance by examining the kinematic characteristics of a stepping motion under high- and low-arousal conditions in 9 subjects. Raw data were recorded from a rotary-shutter video camera and digitized automatically by interfacing the video motion analyzer with the digitizing board of a microcomputer. Three-dimensional orbital plots of the hip, knee, and ankle angle covariations revealed that the subjects used two different strategies to perform the skill. Phase-plane analyses revealed a tight coupling between joint position and velocity in both conditions for the hip and the knee. Differences in movement kinematics between the low- and high-arousal conditions were most visible in the ankle joint, whose phase planes displayed an increased number of self-crossings (loops) in the high-arousal condition. It was suggested that under high arousal, what was once automatic and smooth in terms of the ankle joint comes under more volitional control, which is less smooth and efficient. Practical implications of the present study are suggested.
A meta-analysis was conducted on the accuracy of predictions of various objective outcomes in the areas of clinical and social psychology from short observations of expressive behavior (under 5 min). The overall effect size for the accuracy of predictions across 38 different results was .39. Studies using longer periods of behavioral observation did not yield greater predictive accuracy; predictions based on observations under 0.5 min in length did not differ significantly from predictions based on 4- and 5-min observations. The type of behavioral channel (such as the face, speech, the body, or tone of voice) on which the ratings were based was not related to the accuracy of predictions. Accuracy did not vary significantly between behaviors manipulated in a laboratory and more naturally occurring behavior. Last, effect sizes did not differ significantly for predictions in the areas of clinical psychology, social psychology, and the accuracy of detecting deception.
Despite extensive research activity on the recognition of emotional expression, there are only a few validated tests of individual differences in this competence (generally considered part of nonverbal sensitivity and emotional intelligence). This paper reports the development of a short, multichannel version (MiniPONS) of the established Profile of Nonverbal Sensitivity (PONS) test. The full test has been extensively validated in many different cultures, showing substantial correlations with a large range of outcome variables. The short multichannel version (64 items) described here correlates very highly with the full version and shows reasonable construct validity through significant correlations with other tests of emotion-recognition ability. Based on these results, the role of nonverbal sensitivity as part of a latent trait of emotional competence is discussed, and the MiniPONS is suggested as a convenient method for rapid screening of this central socioemotional competence. Keywords: nonverbal sensitivity, emotional competence, emotional intelligence, assessment
The ability to detect lying was evaluated in 509 people including law-enforcement personnel, such as members of the U.S. Secret Service, Central Intelligence Agency, Federal Bureau of Investigation, National Security Agency, Drug Enforcement Agency, California police and judges, as well as psychiatrists, college students, and working adults. A videotape showed 10 people who were either lying or telling the truth in describing their feelings. Only the Secret Service performed better than chance, and they were significantly more accurate than all of the other groups. When occupational group was disregarded, it was found that those who were accurate apparently used different behavioral clues and had different skills than those who were inaccurate.
Becchio, C., Manera, V., Sartori, L., Cavallo, A., & Castiello, U. (2012). Grasping intentions: From thought experiments to empirical evidence. Frontiers in Human Neuroscience, 6, 117. doi:10.3389/fnhum.2012.00117
Becchio, C., Sartori, L., & Castiello, U. (2010). Toward you: The social side of actions. Current Directions in Psychological Science, 19, 183-188.