Gesture use in social interaction:
how speakers’ gestures can reflect listeners’ thinking
Holler, Judith & Beattie, Geoffrey
University of Manchester
judith.holler@manchester.ac.uk
geoff.beattie@manchester.ac.uk
Abstract
The question of why we move our hands and arms while we speak has long intrigued
researchers, and it continues to do so. However, there has been much debate concerning the
cause and function of these spontaneous movements, which often represent meaningful
information. Some argue that imagistic gestures benefit mainly the speaker, while others
argue that they predominantly serve to assist the communication of information to an
interlocutor. Two experimental studies are presented in this paper, which examine the
influence of social-interactional processes on iconic gestures. The first focuses on the use of
gesture in association with speakers’ clarification of verbal (lexical) ambiguity. The second
study investigates the influence of common ground on gesture use. The findings obtained
from these studies support the notion that social context does influence gesture and that
speakers use iconic gestures for their interlocutors, i.e. because they intend to communicate.
Key-words: Iconic gestures, gesture production, social interaction, ambiguity, common
ground
1. Introduction
People usually move their hands and arms while they speak. Many of these gestures are
imagistic. McNeill (1985) was amongst the first to point out that these
images spontaneously created by our hands reveal important insights into speakers’ thoughts.
This is because, he argues, gesture and speech are tightly connected; they share an early
computational stage in the process of utterance formation and the two sides remain in
constant dialogue throughout this process. Imagery and linguistic content unfold together in
what McNeill (e.g. 1992, 2005) refers to as a dialectic process. The end product is an
utterance that comprises a linguistic side expressed in speech, as well as an imagistic side
expressed in gesture. Therefore, the verbal components of the utterances speakers produce
contain only part of the message a speaker is trying to convey, and the imagistic hand
gestures accompanying these verbal components can add considerable amounts of semantic
information to the speech (e.g. McNeill 1992; for experimental evidence for the
communicative effects of gestures in addition to speech see Beattie, 2003 and Beattie &
Shovelton 1999a, 1999b, 2001, 2002).
Although research has shown that imagistic hand gestures can communicate, why speakers
make these gestures is still a much debated issue. Some researchers argue that communicative
effects of gestures are merely accidental and not intended (e.g. Butterworth & Hadar, 1989;
Krauss, Chen, & Gottesman, 2000; Krauss, Morrel-Samuels, & Colasante, 1991) and that,
instead, the main function of these gestures is to facilitate lexical retrieval and thus to benefit
the speaker rather than the listener. Other researchers have argued against this, opining that
speech-accompanying hand gestures are communicatively intended and strongly influenced
by conversational context (e.g. Bavelas & Chovil, 2000; Kendon, 1983, 1985, 2004).
Several investigations have provided experimental evidence that suggests that speakers do
indeed produce gestures for their addressees. For example, Beattie & Aboudan (1994) found
that speakers produce more imagistic gestures in dialogic interaction than when they talk in
monologue. Bavelas, Kenwood, Johnson & Phillips (2002) showed that speakers produce
more imagistic hand gestures when they were told that a video made of them while
describing certain stimuli would be shown to other people than when they were told that an
audio recording would be played to the other participants. Furthermore, the gestures they
produced in the latter condition were more redundant with the speech (i.e. they added less
information) than those produced in the former. Furuyama (2000) examined hand gestures
made by teachers and learners in an origami task and found, amongst other things, that
speakers specifically oriented certain gestures that they used in this context towards their
addressee. Further evidence comes from an investigation by Özyürek (2000, 2002); in these
studies, she analysed speakers’ use of shared gesture space when talking to one or two
addressees, and when talking to addressees that were either located opposite or towards the
side of the speaker. The analysis showed that speakers alter the way they represent certain
motion events in gesture space by taking into account how their own and their interlocutors’
gesture space, constituting part of the social-interactional context, intersect.
These studies provide important first insights into the effects social-interactional processes
have on gesture use and to what extent speakers produce gestures for their addressees. What
the research does show is that both the presence of an addressee and dialogue between
interactants affect the frequency of gestures, and the former also affects the way in which
gesture and speech interact in the representation of semantic information (in terms of the
degree of redundancy or complementarity). It also shows that it affects how speakers
represent information in terms of the form of gestures, their orientation and movement in
gesture space. However, apart from physical co-presence (or visibility), spatial arrangements
and the extent of verbal interactivity, an important question is to what extent speakers take
into consideration their addressees’ thinking when gesturing.
The two studies described in this paper examine this question. Study 1 uses lexical ambiguity
as a test case to investigate whether speakers anticipate addressees’ understanding problems
and use gesture to provide semantic information to prevent these problems from occurring.
Study 2 has a wider focus as it investigates the effect of ‘common ground’ (the knowledge
that interactants in conversation share, e.g. Clark, 1996) on gesture use, i.e. the speaker’s
more general anticipation of the addressee’s knowledge and thinking.
2. Study 1
In this experiment, 10 speakers were asked to reproduce sentences which contained
homonyms and were globally ambiguous (e.g., ‘The old man’s glasses were filthy’
[homonym: glasses; alternative interpretations: drinking glasses and spectacles]). They were
then asked what the ambiguous sentence could mean in one sense and what in the other,
which was intended to simulate a request for clarification often posed by addressees in
everyday talk (such as, ‘what do you mean?’, or, ‘do you mean x or y?’).
Fig. 1: Participant using disambiguating gestures referring to the concept of ‘drinking
glasses’ (left) and ‘spectacles’ (right).
The analysis focused on how speakers dealt with the ambiguities and how they drew upon
the two modalities, gesture and speech, in order to resolve them. The results showed that in
140 instances speakers recognised and attempted to resolve the ambiguity (using only
gesture, only speech, or both). In 65 of these 140 cases (46%), speakers used gesture to
disambiguate what they were saying (in addition to, or in the absence of, speech). In seven
of these 65 cases (11%, or 5% of all disambiguation attempts), gesture was the only source
of disambiguating information, i.e. the speech remained ambiguous, as in 'it could mean
glasses or it could mean glasses', accompanied by two gestures, one with each mention of
the word 'glasses', representing the concepts 'drinking glasses' and 'spectacles'. (In many
of these cases only one of the two meanings was disambiguated gesturally rather than
verbally; two different meanings and their disambiguation were always counted as separate
instances.) In the remaining 133 cases (95%), speech was used in a disambiguating manner,
and 58 of these 133 cases were accompanied by disambiguating hand gestures (44%). Thus,
speech was used to disambiguate in the large majority of cases, but gesture was used to
disambiguate in addition to speech almost half of the time, and in some cases was indeed
the only source of disambiguating information. [1]
However, we know from past research that the very nature of dialogue can increase the
frequency with which speakers gesture (Beattie & Aboudan, 1994). It could therefore be
that the requests for clarification themselves encouraged the frequent gesture use. To test
this, some of the homonyms were inserted into four different picture stories (constructed so
that both alternative meanings of a homonym occurred in the story context in close
proximity), along with non-ambiguous control words. When speakers narrated the picture
stories to interlocutors who did not know their content, the ambiguous words were
accompanied by a proportionally larger number of gestures than the control words, and this
difference was statistically significant (T=5, N=10, p<.02). [1]
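The Wilcoxon statistic reported here (T=5, N=10) compares, per speaker, the gesture rate for ambiguous versus control words. The following is a minimal sketch of how T is computed; the function name `wilcoxon_T` and the per-speaker rates are ours, for illustration only (the study's raw data are not reproduced here):

```python
def wilcoxon_T(pairs):
    """Wilcoxon signed-rank statistic T for paired observations.

    T is the smaller of the two rank sums (positive vs. negative
    differences); a small T indicates a consistent difference.
    Assumes no tied absolute differences, for simplicity.
    """
    diffs = [a - b for a, b in pairs if a != b]  # drop zero differences
    # rank absolute differences from smallest (rank 1) upwards
    order = sorted(diffs, key=abs)
    pos = sum(r for r, d in enumerate(order, start=1) if d > 0)
    neg = sum(r for r, d in enumerate(order, start=1) if d < 0)
    return min(pos, neg)

# Hypothetical gesture rates (ambiguous vs. control words) for 10 speakers.
rates = [(0.12, 0.05), (0.10, 0.08), (0.09, 0.04), (0.15, 0.06),
         (0.08, 0.09), (0.11, 0.05), (0.07, 0.03), (0.14, 0.06),
         (0.06, 0.09), (0.14, 0.04)]
print(wilcoxon_T(rates))  # → 4
```

With N=10 pairs, standard tables put the two-tailed critical value at T ≤ 5 for p < .02, which is the comparison underlying the result above.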
This is quite clear evidence that speakers do anticipate their addressees' thought processes,
at least when it comes to individual words that might cause confusion. But does this mean
that speakers take into account the wider conversational context when anticipating
addressees' thinking? Two individual examples we have come across suggest that they do.
The first stems from the study just described, specifically from speakers' explanations of
the alternative meanings of the word 'pot'. Whereas three of the participants
who gestured while explaining this particular ambiguity used their hands to represent the
round, bowl-like shape of a cooking pot when contrasting it to the concept of marijuana,
participant 7 (see Table 1) did something else. Instead of representing the pot as a container
of some sort with a round element to it, this speaker mimed gripping an oblong handle with
one hand. Such a handle is quite typical of English cooking pots (more so than one on either
side of the pot). This variation may of course simply illustrate the idiosyncrasy that
characterises imagistic gestural representations. However, another possibility is that the
reason lies in the comparisons the speakers were making. Speakers 4, 8 and 10 compare the
concept of a pot that represents a container (such as a pan/cooking pot, jug or plant pot)
which is typically round and bowl shaped to a concept that shares neither of these qualities
(i.e. the drug). In these cases, the gestures showing the round, bowl-like shape of the
container are clearly disambiguating. However, speaker 7 considers three alternative
interpretations, rather than just two. First, she refers to a flower pot, without an
accompanying gesture. The concept of a flower pot usually is round and bowl-shaped in
some sense. Then she refers to the concept of a cooking pot, or pan, and this reference is
accompanied by a gesture. However, in order for this gesture to be disambiguating, it must
represent something other than the round bowl-shape of the pan, since these features are
shared with the concept of a flower pot. At this point the speaker uses a gesture which does
exactly this – it represents the handle of a saucepan, a feature that is clearly not associated
with either a flower pot or marijuana and thus is disambiguating.
If this variation in gesture is indeed a consequence of the speaker being aware of which
semantic aspects of the individual concepts would be most effective for disambiguation,
this would suggest that, when 'designing' their gestures, speakers take into account their
addressees' understanding and potential understanding problems. In this case, the speaker
had to bear in mind that the addressee would have been thinking of a flower pot, and
consider which gestural representation would most effectively differentiate this kind of pot
from the concept of a cooking pot.
Table 1: Participants' verbal and gestural responses when explaining the alternative
interpretations of the homonym 'pot' (in the order in which they were uttered).

Participant | Gesture | Speech
1  | - | -
2  | - | a. drugs; b. plant pot
3  | - | -
4  | a. hands create a round space in between them; b. right index and middle finger form a v-shape, moving back and forth in front of the mouth | a. pottery pot; b. cannabis
5  | - | -
6  | - | a. pan; b. cannabis
7  | a. -; b. right hand mimes gripping a handle; c. right index and middle finger form a v-shape in front of the face | a. flower pot; b. pan; c. smoking pot, marijuana
8  | a. hands represent a round bowl-shape; b. - | a. physical object; b. something that you smoke
9  | - | a. plant pot; b. drugs
10 | a. a round space is created between the hands; b. - | a. jug thing; b. cannabis
A second example stems from a different investigation, for which we made some pilot
observations. Participants were again required to use homonyms, this time to describe
individual pictures. For example, one picture showed a desk with a computer, a keyboard and a mouse,
some other utensils and a cage with a mouse inside it, playing with its toys. Participants had
to refer to both the computer mouse and the animal mouse, and the focus was on how and
when they would use speech and gesture. Here, speakers would refer to the computer mouse
by holding the right hand in front of the body with the back of the hand pointing upwards, the
fingers held together and bent so that they formed a small sphere inside the hand, imitating
the shape a hand adopts when moving a computer mouse. However, this gesture could
equally well be used to refer to the animal mouse, showing its shape and size. Interestingly,
in this case, speakers tended to distinguish the animal from the PC mouse by referring to
things with which the animal was associated in the picture – namely a wheel in which the
mouse was running. The accompanying gesture used in this context was that of an extended
index finger moving round in quick circles, referring to the wheel’s motion.
Although this example, too, rests on only a few individual instances of gestural behaviour,
it provides important hints as to what might be happening here. In this last example, the
speakers seem to have been aware of the context that was, in this case, visually shared
between them and their interlocutors. Thus, they were able to draw on the content of
the picture as common ground and assume the connection between the wheel and the animal
mouse as shared knowledge. Referring to the animal mouse by representing the wheel in
which it plays instead of the mouse itself was therefore the most effective way of gesturally
disambiguating the two concepts in this particular context.
To sum up, these data on how speakers deal with lexical ambiguity show quite clearly that
they use both communicative channels (speech and gesture) to resolve ambiguity.
Moreover, in instances where requests for clarification are not explicitly posed but potential
understanding problems have to be anticipated, speakers also draw on the gestural channel
to prevent these problems from occurring. Furthermore, some individual examples suggest that
speakers do not just produce gestures of a ‘standardised form’ in terms of what they think
best represents ‘a drinking glass’ or a ‘cooking pot’, irrespective of the context in which a
concept is referred to. Rather, it seems that speakers consider what type of information is
most disambiguating in the current conversational context, bearing in mind visually shared
context as well as the semantic information with which they have provided their addressee in
the immediately preceding talk.
However, the above-mentioned examples are only first indicators that gestures may be
influenced by speakers taking into account what their addressees know and think. The
question remains as to whether this is limited to ambiguous speech and to problems in
communication, or whether speakers take their addressees’ thinking into account on a more
general basis. As referred to in the Introduction, people in talk usually share knowledge about
the topic of conversation, or they build up shared knowledge over the course of a
conversation. This shared knowledge is considered common ground. In talk, speakers do take
into account this type of common ground when designing their utterances – at least with
regard to the verbal side of utterances; for example, it has been shown that referential
descriptions tend to become shorter, generally less complex and reduced to the information
required by the addressee to understand the reference (e.g. Clark & Wilkes-Gibbs, 1986). A
major question is whether this also affects the gestural side of utterances. If both speech and
gesture are part of language, then we should expect that it does. Experimental studies
investigating the influence of common ground on speech and gesture use are currently in
progress; a first analysis of some of these data is presented below.
3. Study 2
This study experimentally manipulated common ground by using two conditions, one in
which pairs of interactants were given the chance to jointly familiarise themselves with the
content of a range of stimulus pictures (common ground, or CG-condition), and another in
which participants were not given this opportunity (no CG-condition). Eight pairs per
condition contributed data to the analysis. The actual experimental task was the same in
both conditions. One participant from each pair
was asked to describe the position of a certain entity in each of the picture stimuli. The
pictures showed busy scenes of various kinds of objects, such as buildings, as well as cartoon
characters carrying out different kinds of actions; the speakers referred to various entities in
order to guide their respective addressee, who was not able to see the picture, to the
appropriate point in the picture where the target entity was positioned. Based on the speaker’s
description, the addressee had to mark this position on a copy of the stimulus pictures which
were handed to them after each description (but which did not show the target entity).
One aim of this analysis was to find out whether common ground has an effect on how
speakers use gesture, or more precisely, whether speakers draw on the gestural channel less
often when common ground exists. To test this, the number of words used by speakers in the
two conditions was counted as well as the number of iconic gestures. Then the proportional
use of gestures was calculated (i.e. number of gestures made, divided by the number of words
used) to account for the different lengths of the picture descriptions and thus to arrive at a
standardised measure.
The total number of gestures produced in the CG-condition was 130, compared to 318 in the
no CG-condition (or an average number of 16.25 compared to 39.75 gestures per speaker).
The overall number of words produced in the CG-condition was 2689, compared to 4211
words in the no CG-condition, or an average of 336.13 words per speaker compared to an
average of 526.38 words. Considering these totals, the proportion of gestures per hundred
words was 5% (130/2689) in the CG-condition and 8% (318/4211) in the no CG-condition.
When calculating the average proportion per speaker, it was 5% in the CG-condition and
6% in the no CG-condition, i.e. speakers in the CG-condition accompanied a mere one per
cent fewer words with gesture. This difference was not statistically significant (U=21.5,
n1=8, n2=8, n.s.).
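The standardised measure can be checked directly from the totals reported above; a minimal sketch (the helper name `gesture_rate` is ours, for illustration):

```python
def gesture_rate(gestures, words):
    """Percentage of words accompanied by an iconic gesture."""
    return 100 * gestures / words

# Totals reported for Study 2, all picture descriptions.
cg = gesture_rate(130, 2689)     # common-ground condition
no_cg = gesture_rate(318, 4211)  # no common-ground condition
print(f"CG: {cg:.1f}%, no CG: {no_cg:.1f}%")  # CG: 4.8%, no CG: 7.6%
```

Rounded to whole percentages, these are the 5% and 8% quoted above.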
Figs. 2 and 3: Total number of words and gestures produced in the two experimental
conditions, as well as the percentage of words accompanied by gesture.
A possible reason for this lack of difference could be the rather complex stimulus material:
the time participants had to familiarise themselves with the pictures in the CG-condition
may not have been sufficient for them to take in all of their content, so that not all of it
could be assumed as known and thus considered common ground. For this reason, the same
analysis was carried out considering references to selected entities only (a
house, a bridge, a knot in a pipe), which speakers in both conditions referred to frequently as
they were fairly close to the position of the target entity and rather big in the context of the
picture, making them very suitable landmarks.
Speakers in the CG-condition used a total of 17 gestures to refer to these entities, or an
average of 2.1 gestures per speaker; speakers in the no CG-condition used a total of 41
gestures when referring to the respective entities, or 5.1 gestures per speaker, on average. The
total number of words used to refer to the selected entities in the CG-condition was 205, and
the average per speaker was 25.6 words. In the no CG-condition, the total number of words
was 261, and the average per speaker was 32.6 words. Considering these totals, the
proportion of gestures per hundred words was 8% (17/205) in the CG-condition and 16%
(41/261) in the no CG-condition (i.e. speakers in the no CG-condition used twice as many
gestures). When calculating the average proportion per speaker, it was 8% in the
CG-condition and 13% in the no CG-condition; however, this difference was not
statistically significant (U=22.5, n1=8, n2=8, n.s.).
Figs. 4 and 5: Number of words and gestures produced in the two experimental conditions to
refer to the selected entities, as well as the percentage of words accompanied by gesture.
The question is whether this lack of significant difference in terms of the proportional use of
gestures means that common ground has no effect at all on gesture use. In order to answer
this question we have to take a more detailed look at the individual gestural representations.
One rather striking difference concerned the degree of elaborateness of the gestures (by this
we mean the degree of definition visible in the gestures), with those from the CG-condition
appearing considerably less elaborate. To analyse whether this was a
reliable difference, the gestures used to refer to the selected entities were examined more
closely. Two independent judges (both blind to the experimental conditions) scored the
elaborateness of the 58 individual gestures on a 7-point Likert scale, ranging from ‘very
elaborate’ to ‘not very elaborate’. Their scores showed a strong correlation (r
s
(58)=.721,
p<.0001). The two scores from the judges were averaged for each gesture to achieve a more
objective measure. Based on these scores, an average elaborateness score was determined for
each speaker (based on all the gestures a speaker produced with the respective referential
descriptions) so that the two experimental groups could be compared statistically. This
comparison yielded a significant result (U=4.5, n1=5, n2=7, p<.03), with the elaborateness of
the gestures in the CG-condition being lower than that of the gestures produced in the no CG
-condition.
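The inter-rater procedure (two judges' ratings, rank correlation, then averaging per gesture) can be sketched as follows. The rank-correlation routine is a minimal pure-Python illustration of Spearman's rho, and the ratings shown are hypothetical, not the study's data:

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical 7-point elaborateness ratings from two judges for 8 gestures.
judge1 = [7, 6, 5, 2, 3, 6, 1, 4]
judge2 = [6, 7, 4, 3, 2, 5, 2, 4]
rho = spearman(judge1, judge2)
# Average the two judges' ratings per gesture, as in the study.
averaged = [(x + y) / 2 for x, y in zip(judge1, judge2)]
```

The per-speaker means of these averaged scores are then compared between conditions with a Mann-Whitney U test, as reported above.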
The fact that the proportional number of gestures used by speakers in the two experimental
conditions did not differ significantly suggests that the gestures still fulfil an important
communicative function even when common ground exists, at least in the context
of the experimental task carried out by participants in the study described here. However, the
question is what type of function, and whether some of these functions are specific to talk in
which common ground exists.
The finding that gestures produced in the common ground condition were significantly
less elaborate than those made in the no common ground condition accords with very similar
evidence from a study by Gerwing & Bavelas (2004), who found that gestures become more
'sloppy' when common ground exists (which seems to capture something very similar to the
'elaborateness' that we measured). They also found that gestures become significantly less
informative when speaker and recipient share common ground. This is a very interesting
finding indeed and future research will need to investigate whether the decrease in
elaborateness, or precision, affects the representation of semantic information. Further, we
need to examine in more detail how gestures become less informative, focusing in particular
on how this process influences the semantic interaction of the two modalities, gesture and
speech.
4. Conclusion
The findings reported in this paper corroborate previous findings which have shown that
social processes in interaction do affect gesture use. Moreover, the findings demonstrate that
speakers do anticipate their addressees’ thinking when gesturing. This goes against the notion
that gestures are not communicatively intended. Further, it shows that gesture production
theories need to explicitly incorporate the influence of social processes that are inherent to
face-to-face communication. Theories that focus too narrowly on either the speaker or the
recipient in order to explain the occurrence and use of gestures, or their effect on
comprehension, may not always be looking at the full picture. This argument
parallels Clark’s (1996) criticism of traditional psycholinguistic theories which focus on
either the speaker or the recipient, rather than viewing language use as a collaborative activity
between two or more individuals.
References
Bavelas, J. B., & Chovil, N. (2000). Visible acts of meaning: An integrated message model
of language in face-to-face dialogue. Journal of Language & Social Psychology, 19, 163-
194.
Bavelas, J. B., Kenwood, C., Johnson, T., & Phillips, B. (2002). An experimental study of
when and how speakers use gestures to communicate. Gesture, 2, 1–17.
Beattie, G. (2003). Visible Thought: The New Psychology of Body Language. London:
Routledge.
Beattie, G., & Aboudan, R. (1994). Gestures, pauses and speech: An experimental
investigation of the effects of changing social context on their precise temporal
relationships. Semiotica, 99, 239-272.
Beattie, G., & Shovelton, H. (1999a). Do iconic hand gestures really contribute anything to
the semantic information conveyed by speech? An experimental investigation. Semiotica,
123, 1-30.
Beattie, G., & Shovelton, H. (1999b). Mapping the range of information contained in the
iconic hand gestures that accompany spontaneous speech. Journal of Language and
Social Psychology, 18, 438-462.
Beattie, G., & Shovelton, H. (2001). An experimental investigation of the role of different
types of iconic gesture in communication: a semantic feature approach. Gesture, 1, 129-
149.
Beattie, G., & Shovelton, H. (2002). An experimental investigation of some properties of
individual iconic gestures that mediate their communicative power. British Journal of
Psychology, 93, 179-192.
Butterworth, B., & Hadar, U. (1989). Gesture, speech, and computational stages: a reply to
McNeill. Psychological Review, 96, 168-174.
Clark, H. H. (1996). Using Language. Cambridge: Cambridge University Press.
Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition,
22, 1-39.
Furuyama, N. (2000). Gestural interaction between the instructor and the learner in origami
instruction. In D. McNeill (Ed.), Language and Gesture (pp. 99-117). Cambridge:
Cambridge University Press.
Gerwing, J., & Bavelas, J. B. (2004). Linguistic influences on gesture’s form. Gesture, 4,
157–195.
Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers
use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
Kendon, A. (1983). Gesture and speech: How they interact. In J. M. Wiemann & R. P.
Harrison (Eds.), Nonverbal Interaction (pp. 13-45). Beverly Hills: Sage.
Kendon, A. (1985). Some uses of gesture. In D. Tannen & M. Saville-Troike (Eds.),
Perspectives on Silence (pp. 215-234). Norwood: Ablex.
Kendon, A. (2004). Gesture: Visible Action as Utterance. Cambridge: Cambridge
University Press.
Krauss, R.M., Chen, Y., & Gottesman, R.F. (2000). Lexical gestures and lexical retrieval: A
process model. In D. McNeill (Ed.), Language and Gesture (pp. 261-283). Cambridge:
Cambridge University Press.
Krauss, R. M., Morrel-Samuels, P., & Colasante, C. (1991). Do conversational hand gestures
communicate? Journal of Personality and Social Psychology, 61, 743-754.
McNeill, D. (1985). So you think gestures are nonverbal? Psychological Review, 92, 350-
371.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago:
University of Chicago Press.
McNeill, D. (2005). Gesture & Thought. Chicago: University of Chicago Press.
Özyürek, A. (2000). The influence of addressee location on spatial language and
representational gestures of direction. In D. McNeill (Ed.), Language and Gesture (pp.
64-83). Cambridge: Cambridge University Press.
Özyürek, A. (2002). Do speakers design their co-speech gestures for their addressees? The
effects of addressee location on representational gestures. Journal of Memory and
Language, 46, 688-704.
[1] These figures have previously been published in Holler & Beattie (2003).
... This communicative signaling system is powerful in that the signals are dynamically adapted for the context in which they are used. For example, representational gestures (Kendon, 2004;McNeill, 1994) show systematic modulations dependent upon the communicative or social context in which they occur (Campisi & Özyürek, 2013;Galati & Galati, 2015;Gerwing & Bavelas, 2004;Holler & Beattie, 2005). Although these gestures are an important aspect of human communication, it is currently unclear how the addressee benefits from this communicative modulation. ...
... There is growing evidence that adults modulate their action and gesture kinematics when communicating with other adults, depending on the communicative context. For example, adults adapt to addressees' knowledge by producing gestures that are larger (Bavelas, Gerwing, Sutton, & Prevost, 2008;Campisi & Özyürek, 2013), more complex (Gerwing & Bavelas, 2004;Holler & Beattie, 2005), and Electronic supplementary material The online version of this article (https ://doi.org/10.1007/s0042 6-019-01198 -y) contains supplementary material, which is available to authorized users. ...
Article
Full-text available
Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (eg. moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees' comprehension of gestures have not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-(n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of actor's faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more-compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as creating artificial communicative agents.
... This communicative signaling system is powerful in that the signals are dynamically adapted for the context in which they are used. For example, representational gestures (Kendon, 2004; McNeill, 1994) show systematic modulations dependent upon the communicative or social context in which they occur (Campisi & Özyürek, 2013; Galati & Galati, 2015; Gerwing & Bavelas, 2004; Holler & Beattie, 2005). Although these gestures are an important aspect of human communication, it is currently unclear how the addressee benefits from this communicative modulation. ...
... For example, when meant to be more informative to an observer, pointing gestures are made more slowly than when the gesture will not be used by an observer (Peeters, Chu, Holler, Hagoort, & Özyürek, 2015). Furthermore, during a demonstration or explanation, a gap in common knowledge between speaker and addressee leads to gestures that are larger (Bavelas, Gerwing, Sutton, & Prevost, 2008; Campisi & Özyürek, 2013), more complex or precise (Galati & Brennan, 2014; Gerwing & Bavelas, 2004; Holler & Beattie, 2005), and produced higher in space (Hilliard & Cook, 2016). Whether these kinematic modulations are comparable to those observed in actions in similar communicative settings has not been assessed. ...
Article
Full-text available
Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues: first, if and how actions and gestures are systematically modulated when performed in a communicative context; second, if observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates space-time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee-directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
... For example, it is relevant for speech recognition. Gesture has been found to aid disambiguation of words with which it co-occurs by depicting semantic aspects of the referents denoted by these words (86,201,202). Information obtained from gesture could, thus, contribute to making speech recognition applications more robust. ...
... Subsequently, according to Holler and Beattie (2005), gesture eases lexical retrieval; additionally, speaking with the hands and body posture indicates the intentions of interlocutors more clearly. Moreover, Butcher and Goldin-Meadow (2000) posit that adding gesture to words provides children with an extra means of communication that helps them convey their meaning through simple hand movements even before they enter the holophrastic stage (e.g., showing a cup while uttering "mine"). Gestures increase learners' awareness and noticing, and internalize the content and sense of words (Kidd & Holler, 2009). Regardless of the specific language teaching method adopted to teach certain content, a number of suggestions have been made for procedures which help students to develop their knowledge of ambiguous words. ...
Article
Full-text available
The study aimed to shed light on the use of gesture in resolving lexical ambiguity by TEFL students. To this end, 60 intermediate Iranian learners studying at Kish Way Language School in Iran were recruited. The participants were randomly assigned to two groups: one experimental group and one control group. In the experimental group, homonyms were taught through gesture, while the control group learned homonyms through the Audio-lingual method. The results highlighted the value of gesture in resolving lexical ambiguity. Moreover, to investigate whether or not there were any significant relationships between spatial and kinesthetic intelligences on the one hand and the ability to resolve lexical ambiguity on the other, a Pearson correlation procedure was used. The results showed a significant relationship between spatial/kinesthetic intelligences and the ability to resolve lexical ambiguity.
... Their findings revealed that speakers use less precise, less complex, or less informative gestures when talking to participants with whom they mutually share knowledge, as compared with when communicating with participants with whom they do not share this knowledge. Similarly, Holler and Beattie (2007) found that gestures appear less elaborate when common ground exists than when it does not. They also compared gesture rate but found no significant effect of common ground on the number of iconic gestures in relation to the number of words produced. ...
Article
Full-text available
Much research has been carried out into the effects of mutually shared knowledge (or common ground) on verbal language use. This present study investigates how common ground affects human communication when regarding language as consisting of both speech and gesture. A semantic feature approach was used to capture the range of information represented in speech and gesture. Overall, utterances were found to contain less semantic information when interlocutors had mutually shared knowledge, even when the information represented in both modalities, speech and gesture, was considered. However, when considering the gestures on their own, it was found that they represented only marginally less information. The findings also show that speakers gesture at a higher rate when common ground exists. It appears therefore that gestures play an important communicative function, even when speakers convey information which is already known to their addressee.
Chapter
This chapter looks into live performances to study how rap lyrics cultivating collective participation on vinyl (and other media) connect powerfully with live audiences. It also analyzes the stage dynamics of rap MCs and their use of call-and-response on stage, with the audience, with back-up MCs or with their DJs.
Book
Full-text available
Are you saying one thing whilst your hands reveal another? Are you influenced by other people's body language without even knowing it? Darting through examples found anywhere from the controlled psychology laboratory to modern advertising and the Big Brother TV phenomenon, official Big Brother psychologist Geoffrey Beattie takes on the issue of what our everyday gestures mean and how they affect our relationships with other people. For a long time psychologists have misunderstood body language as an emotional nonverbal side effect. In this book Geoffrey Beattie ranges across the history of communication from Cicero to Chomsky to demonstrate that by adding to or even contradicting what we say, gestures literally make our true thoughts visible. A unique blend of popular examples and scientific research presented in language that everybody can understand, Visible Thought is an accessible and groundbreaking text that will appeal to those interested in social psychology and anyone who wants to delve beneath the surface of human interaction. Geoffrey Beattie is the official Big Brother psychologist and Professor at the Department of Psychology, University of Manchester. He is a recipient of the Spearman Medal awarded by the British Psychological Society for 'published psychological work of outstanding merit'.
Article
Full-text available
The author experimentally tests D. McNeill's thesis that gestures and speech share the same psychological structure and the same computational stage of the mind, by virtue of the verbal character of gestures, which are closely tied to the propositional content of speech as well as to its temporal and social dimensions. The aim is to observe the frequency, duration, and nature of speech, gestures, and pauses under three conditions: the non-social context of an isolated individual, a social monologue in the presence of another person, and a social dialogue with an interlocutor.
Article
Full-text available
This study tested McNeill’s theory that the iconic gestures that accompany speech in everyday talk convey critical information in interpersonal communication. Using a structured interview to measure the amount of information respondents receive from clause-length clips depicting aspects of a cartoon story, we discovered that when respondents could see the iconic gestures as well as hear the speech they received significantly more accurate information about those aspects of the original story depicted in the clips than when they just heard the speech. We also discovered that it was only with respect to certain semantic categories, namely, the relative position and the size of objects, that the beneficial effect of gestural communication was significant. We then considered in detail how specific attributes of actions and of objects are communicated via the iconic representation within individual gestures. Lastly, we discuss the implications of these findings for models of human communication in this area.
Article
Full-text available
The author presents the two main current theories concerning the hand gestures that accompany speech: McNeill's theory, according to which gestures and speech are two facets of the same psychological structure, and Butterworth and Hadar's theory, in which the linguistic side of language alone is needed to account for the meaning of the message, and gesture appears redundant with respect to speech. Based on the analysis of video recordings of subjects recounting cartoons, the author attempts to determine the share of semantic information carried by speech and that carried by the accompanying gestures.
Article
Full-text available
The authors propose that dialogue in face-to-face interaction is both audible and visible; language use in this setting includes visible acts of meaning such as facial displays and hand gestures. Several criteria distinguish these from other nonverbal acts: (a) They are sensitive to a sender-receiver relationship in that they are less likely to occur when an addressee will not see them, (b) they are analogically encoded symbols, (c) their meaning can be explicated or demonstrated in context, and (d) they are fully integrated with the accompanying words, although they may be redundant or nonredundant with these words. For these particular acts, the authors eschew the term nonverbal communication because it is a negative definition based solely on physical source. Instead, they propose an integrated message model in which the moment-by-moment audible and visible communicative acts are treated as a unified whole.
Article
Full-text available
Gesture, or visible bodily action intimately involved in the activity of speaking, has long fascinated scholars and laymen alike. Written by a leading authority on the subject, this book draws on the analysis of everyday conversations to demonstrate the varied role of gestures in the construction of utterances. Publication of this definitive account of the topic marks a major development in semiotics as well as in the emerging field of gesture studies.