GESTURE’S ROLE IN CREATING AND LEARNING LANGUAGE
Susan Goldin-Meadow
Abstract
Imagine a child who has never seen or heard language. Would such a child be able to invent a
language? Despite what one might guess, the answer is "yes". This chapter describes children who
are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the
children have not been exposed to sign language, either by their hearing parents or their oral
schools. Nevertheless, the children use their hands to communicate––they gesture––and those
gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The
properties of language that we find in these gestures are just those properties that do not need to be
handed down from generation to generation, but can be reinvented by a child de novo. They are
the resilient properties of language, properties that all children, deaf or hearing, come to language-
learning ready to develop.
In contrast to these deaf children who are inventing language with their hands, hearing children
are learning language from a linguistic model. But they too produce gestures, as do all hearing
speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992).
Indeed, young hearing children often use gesture to communicate before they use words.
Interestingly, changes in a child's gestures not only predate but also predict changes in the child's
early language, suggesting that gesture may be playing a role in the language-learning process.
This chapter begins with a description of the gestures the deaf child produces without speech.
These gestures assume the full burden of communication and take on a language-like form––they
are language. This phenomenon stands in contrast to the gestures hearing speakers produce with
speech. These gestures share the burden of communication with speech and do not take on a
language-like form––they are part of language.
Gesture produced without speech can become language
When deaf children are exposed to sign language from birth, they learn that language as
naturally as hearing children learn spoken language (Newport and Meier 1985). However,
90% of deaf children are not born to deaf parents who could provide early access to sign
language. Rather, they are born to hearing parents who, quite naturally, expose their children
to speech. Unfortunately, it is extremely uncommon for deaf children with severe to
profound hearing losses to acquire spoken language without intensive and specialized
instruction. Even with instruction, their acquisition of speech is markedly delayed (Conrad
1979; Mayberry 1992).
The ten children my colleagues and I studied were severely to profoundly deaf (Goldin-
Meadow 2003a). Their hearing parents had decided to educate them in oral schools where
sign systems are neither taught nor encouraged. At the time of our observations, the children
ranged in age from 1;2 to 4;10 (years;months) and had made little progress in oral language,
occasionally producing single words but never combining those words into sentences. In
addition, they had not been exposed to a conventional sign system of any sort (e.g.,
American Sign Language or a manual code of English). The children thus knew neither sign
nor speech.
Under such inopportune circumstances, these deaf children might be expected to fail to
communicate, or perhaps to communicate only in non-symbolic ways. The impetus for
symbolic communication might require a language model, which all of these children
lacked. However, this turns out not to be the case. Many studies have shown that deaf
children will spontaneously use gestures – called “homesigns” – to communicate if they are
not exposed to a conventional sign language (Fant 1972; Lenneberg 1964; Moores 1974;
Tervoort 1961). Children who use gesture in this way are clearly communicating. But are
they communicating in a language-like way? The focus of my work has been to address this
question. I do so by identifying linguistic constructions that the deaf children use in their
gesture systems. These properties of language, which the children are able to fashion
without benefit of linguistic input, are what I call the “resilient” properties of language
(Goldin-Meadow 1982; 2003a).
The resilient properties of language
I describe below the resilient properties of language that we have found thus far in the ten
deaf children’s gesture systems (Goldin-Meadow 2003a). There may, of course, be many
others – just because we haven’t found a particular property in a deaf child’s homesign
gesture system doesn’t mean it’s not there. We have found properties at the word- and
sentence-levels, as well as properties of language use.
Words
The deaf children’s gesture words have many properties that are found in the words of all
natural languages. The gestures are stable in form, although they needn't be. It would be
easy for the children to make up a new gesture to fit every new situation (and, indeed, that
appears to be what hearing speakers do when they gesture along with their speech, cf.
McNeill 1992). But that’s not what the deaf children do. They develop a stable store of
forms that they use in a range of situations – they develop a lexicon, an essential component
of all languages (Goldin-Meadow, Butcher, Mylander and Dodge 1994).
Moreover, the gestures the children develop are composed of parts that form paradigms, or systems of contrasts. When the children invent a gesture form, they do so with two goals in mind – the form must not only capture the meaning they intend (a gesture-world relation), but it must also contrast in a systematic way with other forms in their repertoire (a gesture-gesture relation). In addition, the parts that form these paradigms are categorical. For example, one child used a Fist handshape to represent grasping a balloon string, a drumstick, and handlebars – grasping actions requiring considerable variety in diameter in the real world. The child did not distinguish objects of varying diameters within the Fist category, but did use his handshapes to distinguish objects with small diameters as a set from objects with large diameters (e.g., a cup, a guitar neck, the length of a straw), which were represented by a CLarge hand. The manual modality can easily support a system of analog representation, with hands and motions reflecting precisely the positions and trajectories used to act on objects in the real world. But the children don't choose this route. They develop categories of meanings that, although essentially iconic, have hints of arbitrariness about them (the children don't, for example, all have the same form-meaning pairings for handshapes; Goldin-Meadow, Mylander and Butcher 1995; Goldin-Meadow, Mylander and Franklin 2007).
Finally, the gestures the children develop are differentiated by grammatical function. Some serve as nouns, some as verbs, some as adjectives. As in natural languages, when the same gesture is used for more than one grammatical function, that gesture is marked (morphologically and syntactically) according to the function it plays in the particular sentence (Goldin-Meadow et al. 1994). For example, if a child were to use a twisting gesture
in a verb role, that gesture would likely be produced near the jar to be twisted open (i.e., it would be inflected), it would not be abbreviated, and it would be produced after a pointing gesture at the jar. In contrast, if the child were to use the twisting gesture in a noun role, the gesture would likely be produced in neutral position near the chest (i.e., it would not be inflected), it would be abbreviated (produced with one twist rather than several), and it would occur before the pointing gesture at the jar.
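The noun/verb contrast can be summarized as a small lookup table. The sketch below (Python) simply restates the twist-gesture example in code form; the field names are my own labels, not the coding categories used in the studies.

```python
# Illustrative restatement of the twist-gesture example; field names are
# my own labels, not the studies' coding categories.
TWIST_MARKING = {
    "verb": {
        "inflected": True,      # produced near the jar to be twisted open
        "abbreviated": False,   # several twists
        "position": "after the pointing gesture at the jar",
    },
    "noun": {
        "inflected": False,     # produced in neutral position near the chest
        "abbreviated": True,    # one twist rather than several
        "position": "before the pointing gesture at the jar",
    },
}
```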
Sentences
The deaf children's gesture sentences have a variety of sentential properties found in all natural languages. Underlying each sentence is a predicate frame that determines how many arguments can appear along with the verb in the surface structure of that sentence (Goldin-Meadow 1985). For example, four slots underlie a gesture sentence about transferring an object, one for the verb and three for the arguments (actor, patient, recipient). In contrast, three slots underlie a gesture sentence about eating an object, one for the verb and two for the arguments (actor, patient).
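To make the notion concrete, a predicate frame can be represented as a record of argument slots, as in the minimal sketch below (Python). The predicate names and slot labels are illustrative assumptions, not the coding scheme used in the studies.

```python
# A predicate frame lists the argument slots a verb makes available; the
# total number of surface slots is one for the verb plus one per argument.
PREDICATE_FRAMES = {
    "give": ["actor", "patient", "recipient"],  # transfer: 4 slots in all
    "eat": ["actor", "patient"],                # ingest: 3 slots in all
    "march": ["actor"],                         # intransitive motion: 2 slots
}

def surface_slots(predicate: str) -> int:
    """One slot for the verb, plus one per argument in its frame."""
    return 1 + len(PREDICATE_FRAMES[predicate])

assert surface_slots("give") == 4  # verb + actor + patient + recipient
assert surface_slots("eat") == 3   # verb + actor + patient
```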
Moreover, the arguments of each sentence are marked according to the thematic role they
play. There are three types of markings that are resilient (Goldin-Meadow and Mylander 1984; Goldin-Meadow et al. 1994):
1. Deletion – The children consistently produce and delete gestures for arguments as a function of thematic role; for example, they are more likely to delete a gesture for the object or person playing the role of transitive actor (soldier in “soldier beats drum”) than they are to delete a gesture for an object or person playing the role of intransitive actor (soldier in “soldier marches to wall”) or patient (drum in “soldier beats drum”); see the sketch after this list.

2. Word order – The children consistently order gestures for arguments as a function of thematic role; for example, they place gestures for intransitive actors and patients in the first position of their two-gesture sentences (soldier-march; drum-beat).

3. Inflection – The children mark gestures for arguments with inflections as a function of thematic role; for example, they displace a verb gesture in a sentence toward the object that is playing the patient role in that sentence (the “beat” gesture would be articulated near, but not on, a drum).
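The toy tabulation referenced in item 1 illustrates the kind of asymmetry the deletion analysis looks for. The counts below are invented for illustration; they are not data from the studies.

```python
# Invented counts: how often a gesture for each thematic role was produced,
# out of the opportunities in which that role could have been gestured.
produced = {"transitive_actor": 4, "intransitive_actor": 21, "patient": 24}
possible = {"transitive_actor": 30, "intransitive_actor": 30, "patient": 30}

for role in produced:
    rate = produced[role] / possible[role]
    print(f"{role:>18}: produced in {rate:.0%} of opportunities")

# The ergative-like pattern: transitive actors (soldier in "soldier beats
# drum") are deleted far more often than intransitive actors (soldier in
# "soldier marches to wall") or patients (drum in "soldier beats drum").
```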
In addition, recursion, which gives natural languages their generative capacity, is a resilient property of language. The children form complex gesture sentences out of simple ones (Goldin-Meadow 1982). For example, one child pointed at me, produced a “wave” gesture, pointed again at me, and then produced a “close” gesture to comment on the fact that I had waved before closing the door – a complex sentence containing two propositions: “Susan waves” (proposition 1) and “Susan closes door” (proposition 2). The children systematically combine the predicate frames underlying each simple sentence, following principles of sentential and phrasal conjunction. When there are semantic elements that appear in both propositions of a complex sentence, the children have a systematic way of reducing redundancy, as do all natural languages (Goldin-Meadow 1982; 1987).
Language use
The deaf children use their gestures for many of the central functions that all natural
languages serve. They use gesture to make requests, comments, and queries about things and
events that are happening in the situation – that is, to communicate about the here-and-now. Importantly, however, they also use their gestures to communicate about the non-present – displaced objects and events that take place in the past, the future, or in a hypothetical world (Butcher, Mylander and Goldin-Meadow 1991; Morford and Goldin-Meadow 1997).
In addition to these rather obvious functions that language serves, the children use their gestures to communicate with themselves – to self-talk (Goldin-Meadow 2003a). They also use their gestures to refer to their own or to others' gestures – for metalinguistic purposes (Singleton, Morford and Goldin-Meadow 1993). And finally, the children use their gestures to tell stories about themselves and others – to narrate (Phillips, Goldin-Meadow and Miller
2001). They tell stories about events they or others have experienced in the past, events they
hope will occur in the future, and events that are flights of imagination. For example, in
response to a picture of a car, one child produced a “break” gesture, an “away” gesture, a pointing gesture at his father, and a “car-goes-onto-truck” gesture. He paused and produced a
“crash” gesture and repeated the “away” gesture. The child was telling us that his father’s
car had crashed, broken, and gone onto a tow truck. Note that, in addition to producing
gestures to describe the event itself, the child produced what we have called a narrative
marker – the “away” gesture, which marks a piece of gestural discourse as a narrative in the
same way that “once upon a time” is often used to signal a story in spoken discourse.
Do gestures produced with speech serve as a model for gestures produced without speech?
The deaf children we study are not exposed to a conventional sign language and thus cannot
be fashioning their gestures after such a system. They are, however, exposed to the gestures
that their hearing parents use when they speak. These gestures are likely to serve as relevant
input to the gesture systems that the deaf children construct. The question is what this input looks like and how the children use it.
We first ask whether the gestures that the hearing parents use with their deaf children exhibit
the same structure as their children’s gestures. If so, these gestures could serve as a model
for the deaf children's system. If not, we have an opportunity to observe how the children
transform the input they do receive into a system of communication that has many of the
properties of language.
The hearing parents’ gestures are not structured like their deaf children’s
Hearing parents gesture when they talk to young children (Bekken 1989; Shatz 1982;
Iverson, Capirci, Longobardi and Caselli 1999) and the hearing parents of our deaf children
are no exception. The deaf children’s parents were committed to teaching them to talk and
therefore talked to their children as often as they could. And when they talked, they
gestured.
We looked at the gestures that the hearing mothers produced when talking to their deaf
children. However, we looked at them not as they were meant to be looked at, but as a deaf
child might look at them. We turned off the sound and analyzed the gestures using the same
analytic tools that we used to describe the deaf children’s gestures (Goldin-Meadow and
Mylander 1983; 1984). We found that the hearing mothers’ gestures do not have structure
when looked at from a deaf child’s point of view.
We find no evidence of structure at any level in the mothers’ gestures. With respect to gestural “words,” the mothers did not have a stable lexicon of gestures (Goldin-Meadow et al. 1994); nor were their gestures composed of categorical parts that formed paradigms (Goldin-Meadow et al. 1995) or varied with grammatical function (Goldin-Meadow et al. 1994). With respect to gestural “sentences,” the mothers rarely concatenated their gestures into strings and thus provided little data from which we (or their deaf children, for that matter) could abstract predicate frames or deletion, word order, and inflectional marking patterns (Goldin-Meadow and Mylander 1984). Whereas all of the children produced complex sentences displaying recursion, only some of the mothers did, and those who did first produced these sentence types later than their children (Goldin-Meadow 1982). With respect to gestural use, the mothers did not make displaced reference with their gestures (Butcher et al. 1991), nor did we find evidence of any of the other uses to which the children put their gestures, including story-telling (e.g., Phillips et al. 2001).
Of course, it may be necessary for the deaf children to see hearing people gesturing in
communicative situations in order to get the idea that gesture can be appropriated for the
purposes of communication. However, in terms of how the children structure their gestured
communications, there is no evidence that this structure comes from the children’s hearing
mothers. Thus, although the deaf children may be using hearing people's gestures as a
starting point, they go well beyond that point – transforming the gestures they see into a
system that looks very much like language.
Exploring the deaf child’s transformation of gesture into homesign
How can we learn more about this process of transformation? The fact that hearing speakers
across the globe gesture differently when they speak affords us an excellent opportunity
to explore if – and how – deaf children make use of the gestural input that their hearing
parents provide. For example, the gestures that accompany Turkish and Spanish look very
different from those that accompany English and Mandarin. As described by Talmy (1985), Spanish and Turkish are verb-framed languages whereas English and Mandarin are satellite-framed languages. This distinction depends primarily on the way in which the
path of a motion is packaged. In a satellite-framed language, both path and manner can be
encoded within a verbal clause; manner is encoded in the verb itself (flew) and path is coded as an adjunct to the verb, a satellite (e.g., down in the sentence "the bird flew down"). In a
verb-framed language, path is bundled into the verb while manner is introduced
constructionally outside the verb, in a gerund, a separate phrase, or clause (e.g., if English
were a verb-framed language, the comparable sentence would be “the bird exits flying”).
One effect of this typological difference is that manner can, depending upon pragmatic
context (Allen et al. 2005; Papafragou and Gleitman 2006), be omitted from sentences in
verb-framed languages (Slobin 1996).
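To make the typological contrast concrete, the sketch below lays out how the same motion event is packaged under each pattern. The example sentences come from the text; the dictionary framing is mine.

```python
# "The bird flew down": one motion event, two packaging strategies.
satellite_framed = {   # e.g., English, Mandarin
    "verb": "flew",        # manner encoded in the verb itself
    "satellite": "down",   # path coded as a satellite to the verb
}
verb_framed = {        # e.g., Spanish, Turkish (gloss: "the bird exits flying")
    "verb": "exits",       # path bundled into the verb
    "adjunct": "flying",   # manner introduced outside the verb, e.g. a gerund
}
```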
However, McNeill (1998) has observed an interesting compensation – although manner is
omitted from Spanish-speakers' talk, it frequently crops up in their gestures. Moreover, and likely because Spanish-speakers' manner gestures do not co-occur with a particular manner word, their gestures tend to spread through multiple clauses (McNeill 1998). As a result, Spanish-speakers' manner gestures are longer and may be more salient to a deaf child than the manner gestures of English- or Mandarin-speakers. Turkish-speakers also produce gestures for manner relatively frequently, producing more manner-only gestures (e.g., fingers wiggling in place to represent feet alternating while walking) than English speakers, who produce more gestures containing both manner and path (fingers wiggling as the hand crosses space; Kita and Özyürek 2003; Özyürek and Kita 1999; Özyürek et al. 2007). These
gestural patterns can be traced to the typological difference between English and Turkish –
manner and path are expressed in separate clauses in Turkish but in the same clause in
English. Manner-only gestures are thus less frequent in English- and Mandarin-speakers
than in Spanish- and Turkish-speakers.
These four cultures – Spanish, Turkish, American, and Chinese – thus offer an excellent
opportunity to examine the effects of hearing speakers' gestures on the gesture systems
developed by deaf children. Our plan in future work is to take advantage of this opportunity.
If deaf children in all four cultures develop gesture systems with the same structure despite
wide differences in the gestures they see, we will have strong evidence of the biases children
themselves must bring to a communication situation. If, however, the children differ in the
gesture systems they construct, we will be able to explore how a child’s construction of a
language-like gesture system can be influenced by the gestures he or she sees. We have
already found that American deaf children exposed only to the gestures of their hearing
English-speaking parents create gesture systems that are very similar in structure to the
gesture systems constructed by Chinese deaf children exposed to the gestures of their
hearing Mandarin-speaking parents (Goldin-Meadow and Mylander 1998). The question
now is whether these children’s gesture systems are different from those of Spanish and
Turkish deaf children of hearing parents.
An experimental manipulation of gesture with and without speech
The hearing mothers of each of the deaf children in our studies were committed to teaching
their children to speak. As a result, they never gestured without talking. And, like all
speakers’ gestures, the gestures that the hearing mothers produced formed an integrated
system with the speech they accompanied (McNeill 1992). The mothers’ gestures were thus
constrained by speech and were not “free” to take on the resilient properties of language
found in their children’s gestures. The obvious question is what would happen if we forced
the mothers to keep their mouths shut.
We did just that – although the participants in our study were undergraduates at the
University of Chicago, not the deaf children’s hearing mothers (Goldin-Meadow, McNeill
and Singleton 1996). We asked English-speakers who had no previous experience with sign
language to describe a series of videotaped scenes using their hands and not their mouths.
We then compared the resulting gestures to the gestures these same adults produced when
asked to describe the scenes using speech.
We found that when using gesture on its own, the adults frequently produced discrete
gestures and combined those gestures into strings. Moreover, the strings were reliably
ordered, with gestures for certain semantic elements occurring in particular positions in the
string; that is, there was structure across the gestures at the sentence level. In addition, the
verb-like action gestures that the adults produced when using gesture on its own could be
divided into handshape and motion parts, with the handshape of the action frequently
conveying information about the objects in its semantic frame; that is, there was some
structure within the gesture at the word level. Importantly, these properties did not appear in
the gestures that these same adults produced along with speech. Thus, only when asked to
use gesture on its own did the adults produce gestures characterized by segmentation and
combination. Moreover, they constructed these gesture combinations with essentially no
time for reflection on what might be fundamental to language-like communication.
The adults might have gotten the inspiration to order their gestures from their own English
language. However, the particular order that they used in their gestures did not follow
canonical English word order. For example, adults were asked to describe a doughnut-
shaped object that arcs out of an ashtray. When using gesture without speech, the adults
produced a gesture for the ashtray first, followed by a gesture for the doughnut, and finally a
gesture for the arcing-out action (Goldin-Meadow et al. 1996; Gershkoff-Stowe and Goldin-
Meadow 2002). Note that a typical description of this scene in English would follow a
different order: “The doughnut arcs out of the ashtray.”
To explore the generality of this phenomenon, we asked speakers of four languages differing
in their predominant word orders (English, Turkish, Spanish, Chinese) to describe events
using gesture without speech. We found that the word orders the speakers used in their everyday speech did not influence their gestures – speakers of all four languages used the
same gesture order. For example, to describe a captain swinging a pail, the adults produced a
gesture for the captain (Actor), then produced a gesture for the pail (Patient), and finally a
gesture for the swinging action (Act), that is, an Actor-Patient-Act (ArPA) order. The ArPA
order was also found when a different group of speakers of the same four languages were
asked to reconstruct the events using transparent pictures. The adults were given no
indication that the order in which they stacked the transparencies was the focus of the study;
in fact, the background of each transparency was clear so that the final product looked the
same independent of the order in which the transparencies were stacked. Nevertheless, the
adults tended to pick up the transparency for the Actor, followed by the transparency for the
Patient, and finally the transparency for the Act, thus again displaying the ArPA order
(Goldin-Meadow et al. 2008). Note that the deaf children inventing their own homesign
systems tended to place gestures for Patients before gestures for Acts (the children
frequently omitted gestures for Actors in transitive relations). Moreover, ArPA is the order
currently emerging in a sign language created spontaneously without any apparent external
influence. Al-Sayyid Bedouin Sign Language arose within the last 70 years in an isolated
community with a high incidence of profound prelingual deafness. In the space of one
generation, the language assumed grammatical structure, including ArPA order (Sandler,
Meir, Padden and Aronoff 2005).
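The ordering regularity can be stated as a simple check: a gesture string is consistent with ArPA if the roles it contains appear in Actor-before-Patient-before-Act order, even when some roles are omitted. The sketch below is my own formulation, not the coding scheme used in the studies.

```python
# Check whether the roles present in a gesture string respect the
# Actor-Patient-Act (ArPA) order, ignoring omitted roles.
ARPA = ["actor", "patient", "act"]

def follows_arpa(gesture_roles: list[str]) -> bool:
    """True if the roles appear in Actor < Patient < Act order."""
    positions = [ARPA.index(role) for role in gesture_roles]
    return positions == sorted(positions)

assert follows_arpa(["actor", "patient", "act"])  # captain-pail-swing
assert follows_arpa(["patient", "act"])           # pail-swing (actor omitted,
                                                  # as in the homesign systems)
assert not follows_arpa(["act", "patient"])       # swing-pail violates ArPA
```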
Although the adults in our studies incorporated many linguistic properties into the gestures
they produced when using gesture on its own, they did not develop all of the properties
found in natural language, or even all of the properties found in the gesture systems of the
deaf children. In particular, they failed to develop a system of internal contrasts in their
gestures. When incorporating handshape information into their action gestures, they rarely
used the same handshape consistently to represent a given object, unlike the deaf child, whose handshapes for
the same objects were consistent in form and in meaning (Singleton, Morford and Goldin-
Meadow 1993). Thus, a system of contrasts in which the form of a symbol is constrained by
its relationship to other symbols in the system (as well as by its relationship to its intended
referent) is not an immediate consequence of symbolically communicating information to another. The continued experience that the deaf children had with a stable set of gestures (cf. Goldin-Meadow et al. 1994) may be required for a system of contrasts to emerge in those
gestures.
In sum, when gesture is called upon to fulfill the communicative functions of speech, it
immediately takes on the properties of segmentation and combination that are characteristic
of speech. The appearance of these properties in the adults’ gestures is particularly striking
given that these properties were not found in the gestures that these same adults produced when asked to describe the scenes in speech. When the adults produced gestures along with speech, they rarely combined those gestures into strings and rarely used the shape of the hand to convey any object information at all (Goldin-Meadow et al. 1996). In other words,
they did not use their gestures as building blocks for larger units, either sentence or word
units. Rather, they used their gestures to holistically and mimetically depict the scenes in the
videotapes, as speakers typically do when they spontaneously gesture along with their talk, a
topic to which we now turn, focusing in particular on the gestures children produce during
the early stages of language learning.
Gesture produced with speech is part of language
Months before hearing children are able to produce words to refer to people, places, and
things, they gesture (Acredolo and Goodwyn 1985; 1989; Bates 1976; Bates et al. 1979).
Young children often point at objects for which they do not yet have words. Interestingly,
the fact that a child has pointed at an object increases the likelihood that the child will learn
a word for that object within the next few months, suggesting that early gesture may pave
the way for later word learning (Iverson and Goldin-Meadow 2005). In addition, children
use iconic or conventional gestures that convey action information (e.g., moving the hand
repeatedly to mouth to convey eating; extending an open palm next to a desired object to
indicate give).
In addition to expanding children’s vocabularies, gesture also paves the way for their early
sentences. Children combine pointing gestures with words to express sentence-like
meanings (“eat” + point at cookie) months before they can express these same meanings in a
word + word combination (“eat cookie”). Importantly, the age at which children first
produce gesture + speech combinations of this sort reliably predicts the age at which they
first produce two-word utterances (Goldin-Meadow and Butcher 2003; Iverson and Goldin-
Meadow 2005; Iverson et al. 2008). Gesture thus serves as a signal that a child will soon be
ready to begin producing multi-word sentences. Moreover, the types of gesture + speech
combinations children produce change over time and presage changes in children’s speech
(Özçalışkan and Goldin-Meadow 2005). For example, children produce gesture + speech
combinations conveying more than one proposition (akin to a complex sentence, e.g., “I like
it” + eat gesture) several months before producing a complex sentence entirely in speech (“I
like to eat it”). Gesture thus continues to be at the cutting edge of early language
development, providing stepping-stones to increasingly complex linguistic constructions.
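The longitudinal claim can be illustrated with a toy computation: children whose gesture + word combinations appear earlier also produce two-word utterances earlier. The onset ages below are invented for illustration (statistics.correlation requires Python 3.10+).

```python
# Invented onset ages (months) for six hypothetical children.
from statistics import correlation  # Python 3.10+

gesture_word_onset = [14, 15, 16, 17, 18, 19]  # first gesture + word combination
two_word_onset = [17, 18, 18, 20, 21, 23]      # first two-word utterance

r = correlation(gesture_word_onset, two_word_onset)
print(f"onset-age correlation: r = {r:.2f}")
# A positive r captures the reported pattern: earlier gesture + word
# combinations go with earlier two-word speech.
```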
Finding that gesture predicts the child’s initial steps into language learning raises the
possibility that gesture could be instrumental in bringing that learning about. Gesture has the
potential to play a causal role in language learning in at least two non-mutually exclusive
ways.
First, children’s gestures could elicit from their parents the kinds of words and sentences that
the children need to hear in order to take their next linguistic steps. For example, a child who
does not yet know the word “cat” might refer to the animal by pointing at it. His mother
might say in response to the point, “yes, that’s a cat,” thus supplying him with just the word
he is looking for. Or a child in the one-word stage might point at her father while saying
“cup.” Her mother replies, “that’s daddy’s cup,” thus translating the child’s gesture + word
combination into a simple (and relevant) sentence. It turns out that mothers often “translate”
their children’s gestures into words, thus providing timely models for how one- and two-
word ideas can be expressed in English (Goldin-Meadow et al. 2007). Gesture thus offers a
mechanism by which children can point out their thoughts to others, who then calibrate their
speech to those thoughts and potentially facilitate language learning.
The second way in which gesture could play a causal role in language learning is through its
cognitive effects (Goldin-Meadow and Wagner 2005). Work on older school-aged children
solving math problems has found that encouraging children to produce gestures conveying a
correct problem-solving strategy increases the likelihood that those children will learn to
solve the problem correctly (Cook and Goldin-Meadow 2006; Goldin-Meadow, Cook and
Mitchell 2008; see also Broaders et al. 2007 and Cook, Mitchell and Goldin-Meadow 2008).
These findings suggest that the act of gesturing can promote learning. Similarly, when
learning language, the act of pointing to an object might itself make it more likely that the
pointer will learn a word for that object. Future work is needed to explore whether gesture
can promote language learning not only by allowing children to elicit timely input from their
communication partners, but also by directly influencing their own cognitive state.
Conclusions
Gesture is chameleon-like in its form and that form is tied to the function the gesture is
serving. When gesture assumes the full burden of communication, acting on its own without
speech, it takes on a language-like form, even when the gesturer is a young child who has
not had access to a usable model of a conventional language. As such, gesture can reveal the
linguistic biases that children bring to the task of communication and may be the best
window we have onto those biases. Interestingly, however, when gesture shares the burden
of communication with speech, it loses its language-like structure, assuming instead a
holistic and unsegmented form. Although not language-like in structure when it
accompanies speech, gesture still forms an important part of language. As such, it can tell us
when children are ready to learn language and may even play a role in facilitating the
learning. Gesture can be part of language or can itself be language, and thus sheds light on what it means to be a language.
Acknowledgments
This research was supported by grants from the National Science Foundation (BNS 8810879), the National Institute on Deafness and Other Communication Disorders (R01 DC00491), the National Institute of Child Health and Human Development (R01 HD47450 and P01 HD40605), and the Spencer Foundation.
References
Acredolo LP, Goodwyn SW. Symbolic gesturing in language development. Human Development.
1985; 28:40–49.
Acredolo LP, Goodwyn SW. Symbolic gesturing in normal infants. Child Development. 1989;
59:450–466. [PubMed: 2452052]
Allen S, Özyürek A, Kita S, Brown A, Furman R, Ishizuka T. Language-specific and universal
influences in children's syntactic packaging of manner and path: A comparison of English,
Japanese, and Turkish. Cognition. 2007; 102:16–48. [PubMed: 16442518]
Bates, E. Language and context. New York: Academic Press; 1976.
Bates, E.; Benigni, L.; Bretherton, I.; Camaioni, L.; Volterra, V. The emergence of symbols:
cognition and communication in infancy. New York: Academic Press; 1979.
Bekken, K. Is there “Motherese” in gesture? Unpublished doctoral dissertation, University of Chicago; 1989.
Broaders SC, Cook SW, Mitchell Z, Goldin-Meadow S. Making children gesture reveals implicit
knowledge and leads to learning. Journal of Experimental Psychology: General. 2007; 136(4):539–
550. [PubMed: 17999569]
Butcher C, Mylander C, Goldin-Meadow S. Displaced communication in a self-styled gesture system:
Pointing at the non-present. Cognitive Development. 1991; 6:315–342.
Conrad, R. The deaf child. London: Harper & Row; 1979.
Cook SW, Goldin-Meadow S. The role of gesture in learning: Do children use their hands to change
their minds? Journal of Cognition and Development. 2006; 7:211–232.
Cook SW, Mitchell Z, Goldin-Meadow S. Gesturing makes learning last. Cognition. 2008; 106:1047–
1058. [PubMed: 17560971]
Fant, LJ. Ameslan: An introduction to American Sign Language. Silver Spring, Md.: National
Association of the Deaf; 1972.
Feyereisen, P.; de Lannoy, J-D. Gestures and speech: Psychological investigations. Cambridge:
Cambridge University Press; 1991.
Gershkoff-Stowe L, Goldin-Meadow S. Is there a natural order for expressing semantic relations?
Cognitive Psychology. 2002; 45(3):375–412. [PubMed: 12480479]
Goldin-Meadow, S. The resilience of recursion: A study of a communication system developed
without a conventional language model. In: Wanner, E.; Gleitman, LR., editors. Language
acquisition: The state of the art. N.Y.: Cambridge University Press; 1982.
Goldin-Meadow, S. Language development under atypical learning conditions: Replication and
implications of a study of deaf children of hearing parents. In: Nelson, K., editor. Children's
Language. Vol. Vol. 5. Hillsdale, N.J.: Erlbaum; 1985. p. 197-245.
Goldin-Meadow, S. Underlying redundancy and its reduction in a language developed without a
language model: The importance of conventional linguistic input. In: Lust, B., editor. Studies in
the acquisition of anaphora: Applying the constraints. Vol. Vol. II. Boston, Mass: D. Reidel
Publishing Company; 1987. p. 105-133.
Goldin-Meadow, S. The resilience of language: What gesture creation in deaf children can tell us
about language-learning in general. New York: Psychology Press; 2003a.
Goldin-Meadow, S. Hearing gesture: How our hands help us think. Cambridge, MA: Harvard
University Press; 2003b.
Goldin-Meadow, S.; Butcher, C. Pointing toward two-word speech in young children. In: Kita, S.,
editor. Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum; 2003.
Goldin-Meadow S, Butcher C, Mylander C, Dodge M. Nouns and verbs in a self-styled gesture
system: What's in a name? Cognitive Psychology. 1994; 27:259–319. [PubMed: 7828423]
Goldin-Meadow S, Cook SW, Mitchell ZA. Gesturing gives children new ideas about math.
Psychological Science. 2008 revision under review.
Goldin-Meadow S, Goodrich W, Sauer E, Iverson J. Young children use their hands to tell their
mothers what to say. Developmental Science. 2007; 10:778–785. [PubMed: 17973795]
Goldin-Meadow S, McNeill D, Singleton J. Silence is liberating: Removing the handcuffs on
grammatical expression in the manual modality. Psychological Review. 1996; 103:34–55.
[PubMed: 8650298]
Goldin-Meadow S, Mylander C. Gestural communication in deaf children: The non-effects of parental
input on language development. Science. 1983; 221:372–374. [PubMed: 6867713]
Goldin-Meadow S, Mylander C. Gestural communication in deaf children: The effects and non-effects
of parental input on early language development. Monographs of the Society for Research in Child
Development. 1984; 49:1–121. [PubMed: 6537463]
Goldin-Meadow S, Mylander C. Spontaneous sign systems created by deaf children in two cultures.
Nature. 1998; 391:279–281. [PubMed: 9440690]
Goldin-Meadow S, Mylander C, Butcher C. The resilience of combinatorial structure at the word level:
Morphology in self-styled gesture systems. Cognition. 1995; 56:195–262. [PubMed: 7554795]
Goldin-Meadow S, Mylander C, Franklin A. How children make language out of gesture:
Morphological structure in gesture systems developed by American and Chinese deaf children.
Cognitive Psychology. 2007; 55:87–135. [PubMed: 17070512]
Goldin-Meadow S, So W-C, Özyürek A, Mylander C. The natural order of events: How speakers of
different languages represent events nonverbally. Proceedings of the National Academy of
Sciences. 2008 in press.
Goldin-Meadow S, Wagner SM. How our hands help us learn. Trends in Cognitive Science. 2005;
9:230–241.
Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological
Science. 2005; 16:368–371.
Iverson JM, Capirci O, Longobardi E, Caselli MC. Gesturing in mother-child interaction. Cognitive
Development. 1999; 14:57–75.
Iverson JM, Capirci O, Volterra V, Goldin-Meadow S. Learning to talk in a gesture-rich world: Early
communication of Italian vs. American children. First Language. 2008; 28:164–181. [PubMed:
19763226]
Kendon, A. Gesticulation and speech: Two aspects of the process of utterance. In: Key, MR., editor.
The relationship of verbal and nonverbal communication. The Hague: Mouton; 1980. p. 207-228.
Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and
gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal
of Memory and Language. 2003; 48:16–32.
Kita, S. How representational gestures help speaking. In: McNeill, D., editor. Language and gesture.
Cambridge, MA: MIT Press; 2000. p. 162-185.
Mayberry, RI. The cognitive development of deaf children: Recent insights. In: Segalowitz, S.; Rapin, I., editors. Child Neuropsychology. Vol. 7 of Handbook of Neuropsychology (Boller, F.; Grafman, J., series editors). Amsterdam: Elsevier; 1992. p. 51-68.
McNeill, D. Speech and gesture integration. In: Iverson, JM.; Goldin-Meadow, S., editors. The nature
and functions of gesture in children's communications. San Francisco: Jossey-Bass; 1998. p. 11-28. (New Directions for Child Development series, No. 79.)
McNeill, D. Hand and mind: What gestures reveal about thought. Chicago: The University of Chicago
Press; 1992.
Moores, DF. Nonvocal systems of verbal behavior. In: Schiefelbusch, RL.; Lloyd, LL., editors.
Language perspectives: Acquisition, retardation, and intervention. Baltimore: University Park
Press; 1974.
Morford JP, Goldin-Meadow S. From here to there and now to then: The development of displaced
reference in homesign and English. Child Development. 1997; 68:420–435. [PubMed: 9249958]
Newport, EL.; Meier, R. The acquisition of American Sign Language. In: Slobin, DI., editor. The
cross-linguistic study of language acquisition. Vol. Vol. 1. Hillsdale, N.J.: Erlbaum; 1985.
Özçalışkan Ş, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005; 96:B101–B113.
Özyürek A, Kita S. Expressing manner and path in English and Turkish: Differences in speech,
gesture, and conceptualization. Proceedings of the Cognitive Science Society. 1999; 21:507–512.
Özyürek A, Kita S, Allen S, Furman R, Brown A. How does linguistic framing influence co-speech
gestures? Insights from crosslinguistic differences and similarities. Gesture. 2005; 5:216–241.
Papafragou A, Massey J, Gleitman L. When English proposes what Greek presupposes: The cross-
linguistic encoding of motion events. Cognition. 2006; 98:B75–B98. [PubMed: 16043167]
Phillips S, Goldin-Meadow S, Miller P. Enacting stories, seeing worlds: Similarities and differences in
the cross-cultural narrative development of linguistically isolated deaf children. Human
Development. 2001; 44:311–336.
Sandler W, Meir I, Padden C, Aronoff M. The emergence of grammar: Systematic structure in a new
language. PNAS. 2005; 102:2661–2665. [PubMed: 15699343]
Shatz, M. On mechanisms of language acquisition: Can features of the communicative environment
account for development? In: Wanner, E.; Gleitman, LR., editors. Language acquisition: The state
of the art. New York: Cambridge University Press; 1982. p. 102-127.
Singleton J, Morford J, Goldin-Meadow S. Once is not enough: Standards of well-formedness in
manual communication created over three different timespans. Language. 1993; 69:683–715.
Slobin, DI. From “thought and language” to “thinking for speaking.” In: Gumperz, JJ.; Levinson, SC.,
editors. Rethinking linguistic relativity. Cambridge: Cambridge University Press; 1996. p. 97-114.
Talmy, L. Lexicalization patterns: Semantic structure in lexical forms. In: Shopen, T., editor.
Language typology and syntactic description, Vol. III: Grammatical categories and the lexicon.
Cambridge: Cambridge University Press; 1985. p. 57-149.
Tervoort BT. Esoteric symbolism in the communication behavior of young deaf children. American
Annals of the Deaf. 1961; 106:436–480.
Goldin-Meadow Page 11
Enfance
. Author manuscript; available in PMC 2013 March 22.
NIH-PA Author Manuscript NIH-PA Author Manuscript NIH-PA Author Manuscript
... Last but not least, cognitive load has been investigated in relation to human motion. It has been shown that animations are more beneficial than static presentation, whenever a non-static activity is presented, affecting positively available resources and WM (Cook & Yip, 2012;Goldin-Meadow, 2010;Risko & Gilbert, 2016). It has been suggested that CLT should evolve from a cognitive theory to embodied cognition. ...
Thesis
This thesis has been conducted to analyze the cognitive load of train travelers in one of the most visited train station in Paris, Île-de-France: Saint-Michel Notre Dame. In order to anticipate the risk associated to overcrowding and security hazard in this megacity, we investigated how travelers' expertise modulates cognitive load during information processing. Through four experiments, we investigated variations in travelers' cognitive load in an ecological context, in both field and a validated virtual experiment. Cognitive load was assessed through physiological, subjective and behavioral aspects. Learning effects associated to cognitive load such as split-attention, instructional design, modality effect, redundancy effect and expertise reversal effect, were also discussed in our empirical studies for optimal cognitive load evaluation in train travelers. Travelers¿ cognitive load was evaluated through different environmental vagary levels, ranging from no vagary to successive vagaries situations. Our empirical studies allowed us to put in light variations in cognitive load between the different levels of expertise in travelers, with a higher cognitive load in novice or occasional travelers than in expert or regular travelers, in no vagary context. An expertise reversal effect, where experts expressed a higher cognitive load than novices, arises with increase in environmental vagary level. Novice travelers, however, showed no significant difference in cognitive load level, with varying environmental vagary level. We discussed how reducing the gap between experts and novices could encourage expert travelers to be more aware of their surrounding environment in moment of no vagary as well as non-optimal situations, to reduce the risk of abrupt rise in cognitive load. This thesis represents a mix between fundamental and applied research, to unravel the mechanism underlying cognitive load variations in a real-life context.
... Alongside this interest in the emergence of language in infancy, there has also been an increasing interest in the role of gesture in the development of language, both in relation to the human child (Arbib, 2012;Corballis, 2003;Tomasello, 2008) and at the level of human evolution (McNeill, 2016). This scholarship has supported a shift of focus from words uttered by mouths towards the gesturing of hands in early childhood language research (Goldin-Meadow, 2010). While I will sidestep the evolutionary quest for linguistic origins that so often haunts the theorising of early language development, I seek to reveal ways in which linguistic boundaries are tested in relation to the animal as an antidote to the human exceptionalism permeating infant language studies. ...
Article
Full-text available
This paper reflects on a slow-motion video clip of the hands of three young children as they play with toys in the sand tray. It foregrounds sand and toys that are handled, as well as hands that grasp and relinquish things. Through this movement of hands that tug and pull at things, it explores how things animate bodies, and how this produces the felt-sense of other desiring bodies. As hands tender things, they are animated by what they touch, and simultaneously things are animated through the give and take of pulls and pushes of desire expressed as kinetic force. The slowed film of hands in motion draws our attention from words, towards a (re)cognition of a sensed intelligence which is not pre-language, but is produced before language, as well as with language. Arguing that child development theories are inextricably bound up in narratives of human exceptionalism founded in language and moralism, I will make the case for reinstating sense as a mode of attention in order to counter a lack that is perceived until children learn language. By troubling the boundaries that we draw between the animal and the human, there is much to learn from very young children when we seriously attend to their capacity for sensory ways of knowing that are so often eclipsed by the dominance of language.
... Hand gestures play an important role in conveying meaning, and its comprehension has attracted researchers from fields of psychology and neuroscience (for reviews, see Andric and Small, 2012;Gentilucci and Corballis, 2006;Goldin-Meadow, 2010;Goldin-Meadow and Alibali, 2013;. However, the investigation has been challenging because of the complexity of gestures: the meaning of gestures can depend on speech (e.g., iconic and metaphoric gestures) or can be independent of speech (e.g., emblems), and hand gestures can describe concrete meanings (e.g., iconic gestures) or abstract meanings (e.g., metaphoric gestures) (e.g., Montgomery et al., 2007;Straube et al., 2010;Villarreal et al., 2008), and social or non-social meanings (e.g., Saggar et al., 2014). ...
Article
Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015. Published by Elsevier Ltd.
Article
Individuals with childhood apraxia of speech often exhibit greater difficulty with expressive language than with receptive language. As a result, they may benefit from alternative modes of communication. Here, we present a patient with childhood apraxia of speech who used pointing as a means of communication at age 2 ¼ years and self-made gestures at age 3½, when he had severe difficulties speaking in spite of probable normal comprehension abilities. His original gestures included not only word-level expressions, but also sentence-length ones. For example, when expressing “I am going to bed,” he pointed his index finger at himself (meaning I ) and then put both his hands together near his ear ( sleep ). When trying to convey the meaning of “I enjoyed the meal and am leaving,” he covered his mouth with his right hand ( delicious ), then joined both of his hands in front of himself ( finish ) and finally waved his hands ( goodbye ). These original gestures and pointing peaked at the age of 4 and then subsided and completely disappeared by the age of 7, when he was able to make himself understood to some extent with spoken words. The present case demonstrates an adaptive strategy for communication that might be an inherent competence for human beings.
Book
Full-text available
Writing research article to a peer reviewed publication is a complex process and involves daunting communication with the referees, co-authors and editors. Publication writing is more challenging than the usual communicative expression yet; prolific writers often pass through the processes without too much difficulty. Prolific writers draw on many writing strategies and one of the strategies is by highlighting the research gap. Most of the time, the research gaps highlighted are those related to the intended research niche and the intended study. While the strategy has been used and taught in many research writing instances, the strategy has been reported to be unpopular amongst the non-native English research writers. Although many non-native English writers are aware of the importance of the research gap, not much is known on how this strategy is being practiced. In view of the underutilization of this strategy and limited studies on the strategy in non-native context, this paper investigates the use of this strategy in 150 research articles introductions in Computer Science disciplines written by academicians in Malaysian Universities. The finding of this study confirmed that indicating research gap as a strategy is underutilized by the research articles written in the corpus. In addition, this paper also described four various ways on how this strategy is commonly used by the non-native writers. The confirmation and authentic examples may be useful in the teaching and learning of research article writing.
Article
Full-text available
When learners self-explain, they try to make sense of new information. Although research has shown that bodily actions and written notes are an important part of learning, previous analyses of self-explanations rarely take into account written and nonverbal data produced spontaneously. In this paper, the extent to which interpretations of self-explanations are influenced by the systematic consideration of such data is investigated. The video recordings of 33 undergraduate students, who learned with worked-out examples dealing with complex numbers, were categorized successively including three different data bases: a) verbal data, b) verbal and written data, and c) verbal, written and nonverbal data. Results reveal that including written data (notes) and nonverbal data (gestures and actions) leads to a more accurate analysis of self-explanations than an analysis solely based on verbal data. This influence is even stronger for the categorization of self-explanations as adequate or inadequate.
Book
Full-text available
There is extensive research on the recognition of individual identity, typically using static images (e.g. photographs). However, in the last 20 years, research has considered how successful recognition can be achieved in more naturalistic situations, using information from dynamic faces and bodies. In this chapter, we review behavioural work research that explores the role of motion in the recognition of identity from faces and bodies. In addition to the behavioural work we will also review the brain based evidence that has attempted to establish the neural correlates of person recognition. The theoretical implications of this work, and whether motion should be thought of an additional cue to identity or is integral to the underlying representation of a familiar person, are discussed in detail. Finally, we suggest dynamic information available from the face and body may help us integrate identity information about person using different environmental cues.
Article
Talking and gesturing: Relationships between spoken language and co-verbal gesture. The nature of the relationship between speech and co-verbal gestures has been the object of numerous studies without any consensus being reached. This paper reviews and discusses current and past approaches to the question. It appears that, contrary to McNeill's (2005) point of view, interactions between language and co-verbal gestures do not occur solely within a global communication system. Such interactions can also occur during the motor planning or execution of communicative behavior, and can lead to either facilitation or competition between the two modes of communication (Feyereisen, 2007). We discuss the implications for more effective management of patients with language disorders, such as patients with aphasia or Alzheimer's disease.
Article
The contribution of the sensory-motor system to the semantic processing of language stimuli is still controversial. To address the issue, the present article focuses on the impact of motor contexts (i.e., comprehenders' motor behaviors, motor-training experiences, and motor expertise) on the semantic processing of action-related language and reviews the relevant behavioral and neuroimaging findings. The existing evidence shows that although motor contexts can influence the semantic processing of action-related concepts, the mechanism of the contextual influences is still far from clear. Future investigations will be needed to clarify (1) whether motor contexts only modulate activity in motor regions, (2) whether the contextual influences are specific to the semantic features of language stimuli, and (3) what factors can determine the facilitatory or inhibitory contextual influences on the semantic processing of action-related language.
Article
Adding gesture to spoken instructions makes those instructions more effective. The question we ask here is why. A group of 49 third and fourth grade children were given instruction in mathematical equivalence with gesture or without it. Children given instruction that included a correct problem-solving strategy in gesture were significantly more likely to produce that strategy in their own gestures during the same instruction period than children not exposed to the strategy in gesture. Those children were then significantly more likely to succeed on a posttest than children who did not produce the strategy in gesture. Gesture during instruction encourages children to produce gestures of their own, which, in turn, leads to learning. Children may be able to use their hands to change their minds.
Chapter
Child languages, like adult languages, make use of complex sentences with redundant elements. In addition, both adult and child languages have systematic devices for reducing redundancy in those complex sentences. However, child languages, unlike adult languages, appear to have constraints on the site in their complex sentences at which redundancy is expressed and reduced.
Article
We previously reported that deaf children of hearing parents can develop a gestural communication system with some of the observed properties of early child language. In the present study, this phenomenon of gesture creation was replicated in four deaf children aged 1;4 to 3;1 (years;months) at the time of the first interview. Each child, despite his atypical language-learning conditions (in particular his lack of usable conventional linguistic input, either oral or manual), developed a gesture system comparable in semantic content and structure (specifically, constructional ordering of elements, differential probabilities of production of elements, and recursive concatenation of semantic relations) to the gestural systems of the six deaf children of hearing parents in our original study and comparable as well to the spoken and sign systems of children acquiring conventional languages under typical learning conditions. This phenomenon suggests that the human child has strong biases to communicate in language-like ways. Nevertheless, it is possible that the deaf child's hearing parents, and not the child himself, were responsible for the emergence of the child's structured (yet idiosyncratic) gesture system. To investigate this possibility, we considered three possible parental influences on the child's sentence structures. First, we entertained the hypothesis that the children's sign sentences were merely imitations (perhaps even uncomprehending imitations) of a hearing adult's immediately preceding gestures. Second, we considered the possibility that the regularities underlying the deaf children's structured sign sentences were induced from their hearing parents' gestures taken in toto. Finally, we considered the possibility that the deaf children's sign sentences had been shaped by their parents' responses to those sentences. We found no evidence to support any of these hypotheses. The data reported in this series of studies confirm that deaf children lacking a conventional linguistic input can develop a gestural communication system that shows some of the structural regularities characteristic of early child language. The results suggest that communication with a number of language-like properties can develop in a markedly atypical language-learning environment, even without a tutor's modeling or shaping the structural aspects of the communication. The data are consistent with the hypothesis that the deaf child himself plays a seminal role in the emergence of the structural aspects of these communication systems.
Article
Natural languages are characterized by standards of well-formedness. These internal standards are likely to be, at least in part, a product of a consensus achieved among the users of a language over time. Nevertheless, it is possible that an individual, attempting to invent symbols to communicate de novo, might generate a system of symbols that is similarly characterized by internal standards of well-formedness. In these studies, we explore this possibility by comparing (1) a conventional sign language used by a community of signers and passed down from generation to generation with (2) gestures invented by a deaf child over a period of years and (3) gestures invented by nonsigning hearing individuals on the spot. Thus, we compare communication in the manual modality created over three different timespans (historical, ontogenetic, and microgenetic), focusing on the extent to which the gestures become codified and adhere to internal standards in each of these timespans. Our findings suggest that an individual can introduce standards of well-formedness into a self-generated gesture system, but that gradual development over a period of time is necessary for such standards to be constructed.