Prosodic Disambiguation of Syntactic Structure: For the Speaker or for the Addressee?

Department of Psychology, State University of New York at Stony Brook, Stony Brook, NY 11794-2500, United States.
Cognitive Psychology (Impact Factor: 5.06). 04/2005; 50(2):194-231. DOI: 10.1016/j.cogpsych.2004.08.002
Source: PubMed


Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects. We discuss the standards needed for a convincing test of the audience design hypothesis.

Cited by:
    • "These effects of prosody emerge quickly during online sentence comprehension, suggesting that they involve a robust property of the human parser (Marslen-Wilson et al., 1992; Warren et al., 1995; Nagel et al., 1996; Pynte and Prieur, 1996; Kjelgaard and Speer, 1999; Snedeker and Trueswell, 2003; Weber et al., 2006). Naive speakers systematically vary their prosody depending on the syntactic structure of sentences, and naive listeners can use this variation to disambiguate utterances that, though containing the same sequence of words, are mapped from sentences with different syntactic structures (Nespor and Vogel, 1986, 2007; Snedeker and Trueswell, 2003; Kraljic and Brennan, 2005; Schafer et al., 2005). These studies indicate that users of spoken language share implicit knowledge about the relationship between prosody and syntax, and that they can use this knowledge during both speech production and comprehension."
    ABSTRACT: In everyday life, speech is accompanied by gestures. In the present study, two experiments tested the possibility that spontaneous gestures accompanying speech carry prosodic information. Experiment 1 showed that gestures provide prosodic information: adults were able to perceive the congruency between low-pass filtered (and thus unintelligible) speech and the gestures of the speaker. Experiment 2 showed that for ambiguous sentences (i.e., sentences with two alternative meanings depending on their prosody), mismatched prosody and gestures led participants to choose more often the meaning signaled by the gestures. Our results demonstrate that the prosody that characterizes speech is not a modality-specific phenomenon: it is also perceived in the spontaneous gestures that accompany speech. We conclude that spontaneous gestures and speech form a single communication system in which the suprasegmental aspects of spoken language are mapped onto the motor programs responsible for producing both speech sounds and hand gestures.
    Article · Jul 2014 · Frontiers in Psychology
    • "One domain in which the similarities between language and music have led to specific proposals of shared mechanisms is that of pitch perception. Pitch is a core component of spoken language, helping to disambiguate syntactic structures [17-19] and to convey both pragmatic and semantic meaning [20,21]. In music, relative pitch changes convey melodic structure, whether played on instruments or sung by voice. "
    ABSTRACT: Language and music epitomize the complex representational and computational capacities of the human mind. The two are strikingly similar in their structural and expressive features, and a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct, either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, which conveys pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
    Article · Aug 2013 · PLoS ONE
    • "In spite of the strong interpretation biases of ambiguous sentences, most research on prosodic disambiguation assumes that the two possible meanings of a syntactically ambiguous sentence are equally plausible (Lehiste, 1973; Price, Ostendorf, Shattuck-Hufnagel, & Fong, 1991; Fox Tree & Meijer, 2000; Kraljic & Brennan, 2005; Millotte, Wales, & Christophe, 2007). We know of only one prior study to investigate the influence of interpretation biases on disambiguating prosody. "
    ABSTRACT: Syntactically ambiguous sentences are frequently strongly biased toward one meaning over another [see, e.g., Tanenhaus and Trueswell (1995)]. This interpretation bias influences listeners' use of disambiguating prosody [Wales and Toner (1979)]; the current study investigated its effect on production. In experiment 1, the default interpretation of a heterogeneous set of 18 syntactically ambiguous sentences was investigated in 40 participants, who completed a question-and-answer task designed to identify intended meaning without making participants aware of the potential ambiguity. Ninety percent of the participants interpreted 11 of the sentences in just one way; the interpretation bias was weaker for the remaining 7 sentences. In experiment 2, ten speakers were taught the alternate meanings of the 18 sentences from experiment 1 and then asked to disambiguate the meanings using prosody. Temporal and F0 measures indicated that while all speakers differentiated between meanings in production, only sentences with weak interpretation biases were consistently prosodically disambiguated; prosodic cues to structure were applied inconsistently in sentences with strong interpretation biases. We conclude that disambiguating prosody is grammaticalized only when required by the interpretative norms of the speech community.
    Article · May 2013 · The Journal of the Acoustical Society of America