Article

Training and Timing Local Scalar Enrichments under Global Pragmatic Pressures

Abstract

Elementary sentences containing the quantificational determiner some seem to be ambiguous between a ‘weak’ existential meaning ∃ and a ‘strengthened’ some but not all meaning ∃+. The strengthened meaning is commonly assumed to be the output of a general enrichment mechanism, call it G (for ‘global’), that applies to the weak meaning of the sentence: G(∃) = ∃+. The application of G has been shown to come with a processing cost (e.g. Bott & Noveck 2004). We used a self-paced reading task together with offline comprehension questions to investigate the interpretation of sentences containing some when embedded inside a disjunction, a position that G cannot access. Our findings suggest (i) that the strengthened meaning ∃+ is available in embedded positions, suggesting that a mechanism of local strengthening L must be available: L(∃) = ∃+, (ii) that local enrichment can be facilitated by global pragmatic pressures (Chierchia et al. 2008; Mayr & Romoli 2014), (iii) that subjects can be quickly trained to systematically prefer one of G or L to the other, (iv) that application of L, like the application of G, comes with a processing cost. We highlight consequences of our findings for debates about the characterization of enrichment mechanisms, focusing on the relation between G and L.
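To make the G/L contrast concrete, here is a small illustrative sketch (not drawn from the paper's materials): meanings are computed over a toy space of worlds for a sentence of the form "P or some of the Qs are R", with a three-element domain for Q and a naive exhaustification for G that simply negates the stronger "all" alternative. All of these modelling choices are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's materials): toy worlds for a sentence
# "P or some of the Qs are R", with a 3-element domain for Q. The names P/Q/R and
# the naive exhaustification used for G are assumptions made for this example.
from itertools import product

DOMAIN = 3  # number of Q-individuals

def some_q(n):   # weak existential meaning of "some"
    return n >= 1

def all_q(n):    # the stronger scalar alternative "all"
    return n == DOMAIN

def weak(p, n):                 # basic meaning: P or (some Qs are R)
    return p or some_q(n)

def local_strengthened(p, n):   # L applied inside the disjunct: P or (some but not all)
    return p or (some_q(n) and not all_q(n))

def global_strengthened(p, n):  # naive G at the root: negate the alternative "P or all"
    return weak(p, n) and not (p or all_q(n))

worlds = list(product([False, True], range(DOMAIN + 1)))
for name, meaning in [("weak", weak), ("local L", local_strengthened), ("global G", global_strengthened)]:
    true_worlds = [(p, n) for p, n in worlds if meaning(p, n)]
    print(f"{name:9s}: {true_worlds}")
# The printout shows that G's output entails not-P, so applying G to the whole sentence
# cannot reproduce the embedded reading "P or (some but not all)" that L delivers.
```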

... The question we would like to ask is why there should be this preference for conjunctive strengthened meaning. As pointed out by Chemla (2009b), this preference is puzzling since there is no general preference for exhaustification (see Geurts and Pouscoulous 2009; Chemla and Spector 2011; Potts et al. 2015; Chemla et al. 2016; Franke et al. 2016 for quantitative information about the distribution of strengthened meanings in the scope of various operators). Furthermore, there is evidence that exhaustification is costly in certain matrix positions (for reviews, see e.g., Noveck and Reboul 2008; Katsos and Cummins 2010; Chemla and Singh 2014a, b), as well as in certain embedded positions (Chemla et al. 2016). ...
... As pointed out by Chemla (2009b), this preference is puzzling since there is no general preference for exhaustification (see Geurts and Pouscoulous 2009; Chemla and Spector 2011; Potts et al. 2015; Chemla et al. 2016; Franke et al. 2016 for quantitative information about the distribution of strengthened meanings in the scope of various operators). Furthermore, there is evidence that exhaustification is costly in certain matrix positions (for reviews, see e.g., Noveck and Reboul 2008; Katsos and Cummins 2010; Chemla and Singh 2014a, b), as well as in certain embedded positions (Chemla et al. 2016). It is thus surprising that adults and children should sometimes prefer to exhaustify twice, both in matrix and in embedded positions, and that they do so without the cost normally associated with exhaustification (see Chemla and Bott 2014 for complexity measures of adult free choice inferences). ...
Article
Full-text available
We present evidence that preschool children oftentimes understand disjunctive sentences as if they were conjunctive. The result holds for matrix disjunctions as well as disjunctions embedded under every. At the same time, there is evidence in the literature that children understand or as inclusive disjunction in downward-entailing contexts. We propose to explain this seemingly conflicting pattern of results by assuming that the child knows the inclusive disjunction semantics of or, and that the conjunctive inference is a scalar implicature. We make two assumptions about implicature computation in the child: (i) that children access only a proper subset of the adult alternatives (specifically, they do not access the lexicon when generating alternatives), and (ii) that children possess the adult capacity to strengthen sentences with implicatures. As a consequence, children are expected to sometimes not compute any implicatures at all, but in other cases they are expected to compute an implicature that is different from the adult implicature. We argue that the child’s conjunctive strengthening of disjunctive sentences realizes the latter possibility: the adult infers that the conjunction is false but the child infers that the conjunction is true. This behaviour is predicted when our assumptions about child development are coupled with the assumption that a covert exhaustive operator is responsible for strengthening in both the child and the adult. Specifically, children’s conjunctive strengthening is predicted to follow from the same mechanism used by adults to compute conjunctive free choice implicatures in response to disjunctive permission sentences (recursive exhaustification). We furthermore argue that this parallel between the child and the adult extends to disambiguation preferences. In particular, we present evidence that children prefer to strengthen disjunctions to conjunctions, in matrix and embedded positions (under every); this result mirrors previous findings that adults prefer to compute free choice, at the root and under every. We propose a disambiguation strategy that explains the preference for conjunctive strengthening – by both the child and the adult – even though there is no general preference for exhaustification. Specifically, we propose that the preference for a conjunctive strengthening follows from a pragmatic preference for a complete answer to the Question Under Discussion.
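The recursive-exhaustification derivation the abstract appeals to can be sketched computationally. The following is a minimal illustration (my construction, not the authors' code) of exhaustification with innocently excludable alternatives over two atomic propositions, showing how pruning the conjunctive alternative turns "A or B" into "A and B" under recursive exhaustification.

```python
# Minimal sketch of recursive exhaustification with innocently excludable
# alternatives, over two atomic propositions A and B (illustrative assumptions only).
from itertools import product, combinations

WORLDS = [dict(zip("AB", vals)) for vals in product([True, False], repeat=2)]

def prop(f):               # a proposition = the set of worlds where f holds
    return frozenset(i for i, w in enumerate(WORLDS) if f(w))

A     = prop(lambda w: w["A"])
B     = prop(lambda w: w["B"])
A_or  = A | B
A_and = A & B

def exclude(prejacent, to_negate):
    m = set(prejacent)
    for q in to_negate:
        m &= (frozenset(range(len(WORLDS))) - q)
    return frozenset(m)

def exh(prejacent, alts):
    """Negate the innocently excludable alternatives: those present in every
    maximal set of alternatives whose joint negation is consistent with the prejacent."""
    candidates = [q for q in alts if not prejacent <= q]          # not entailed by prejacent
    consistent = [set(s) for r in range(len(candidates) + 1)
                  for s in combinations(candidates, r)
                  if exclude(prejacent, s)]                        # non-empty result
    maximal = [s for s in consistent if not any(s < t for t in consistent)]
    innocently_excludable = set.intersection(*maximal) if maximal else set()
    return exclude(prejacent, innocently_excludable)

# Adult alternative set includes the conjunction: exclusive-or reading.
print(exh(A_or, [A, B, A_and]) == (A_or - A_and))                  # True
# "Child" alternatives lack the conjunction; recursive exh yields a conjunctive reading.
child_alts = [exh(A, [A, B]), exh(B, [A, B])]                      # = {A and not B, B and not A}
print(exh(A_or, child_alts) == A_and)                              # True
```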
... Singh (2019) observes that for scales of quantificational determiners and of logical operators, the strengthened meaning seems to incur higher processing costs than the non-strengthened meaning; for scales of numerals and so-called free-choice implicatures, it is the other way round (e.g., Noveck and Posada, 2003; Bott and Noveck, 2004; Breheny et al., 2006; Chemla, 2009; Huang and Snedeker, 2009; Marty et al., 2013; Chemla and Bott, 2014; cp. Chemla and Singh, 2014; Crnič et al., 2015; Chemla et al., 2016; van Tiel and Schaeken, 2017, for discussion). These observations might be taken to suggest that for some implicatures the computation of the strengthened meaning of a sentence is an additional, non-mandatory process, but for other implicatures it is not. ...
Article
Full-text available
In this review we provide a discussion of the concept of alternatives and its role in linguistic and psycholinguistic theorizing in the context of the contributions that have appeared in the Frontiers Research Topic The Role of Alternatives in Language. We discuss the linguistic phenomena for which alternatives have been argued to play a paramount role: negation, counterfactual sentences, scalar implicatures and exhaustivity, focus, contrastive topics, and sentences with bare plurals and with definite plurals. We review to what extent alternatives are relevant for these phenomena and how this relevance has been captured by theoretical linguistic accounts. Regarding processing, we discuss the mental activation of alternatives: its mandatory vs. optional nature and its time course. We also address the methodological issue of how experimental studies operationalize alternatives. Finally, we explore the phenomenon of individual variation, which increasingly attracts attention in linguistics. In sum, this review gives an inclusive and broad discussion of alternatives by bringing together different research strands whose findings and theoretical proposals can advance our knowledge of alternatives through inspiring cross-fertilization.
... Experiments 2 and 3 revealed no correlation between participants' spontaneous rate of implicature derivation and their spontaneous grammaticality judgments of intervention sentences. In Experiment 4, we drew on previous experimental work revealing that people can be trained to derive implicatures or not to derive implicatures (see, for example, Noveck and Posada 2003; Bott and Noveck 2004; Chemla et al. 2017). We investigated whether training participants either to derive or not to derive implicatures would influence the strength of intervention effects, comparing the performance of a group that received training consistent with implicature derivation, with that of a group that received training inconsistent with implicature derivation. ...
Thesis
Full-text available
Several linguistic phenomena have been shown to correlate with certain logical properties of the sentence in which they occur, such as the sentence's logical entailments or the logical relations between the sentence and certain alternative sentences. In this thesis, we explore (A) whether these logical properties play a causal role with respect to such linguistic phenomena, and (B) how these logical properties are computed. There are broadly two options regarding (B): (i) these logical properties could be computed by a formal system that has no access to general world knowledge, call this formal system the grammar (Fox and Hackl 2006, among others), or (ii) they could be computed post-grammatically. We address these questions in two domains: the licensing of negative polarity items and the derivation of scalar implicatures. Our conclusions are in line with a large body of results attesting to a causal role of logical properties in both phenomena. Moreover, our results suggest that the logical properties correlated with NPIs involve post-grammatical computations, whereas the logical properties involved in scalar implicatures are computed at the grammatical level.
... Scalar implicature has long been one of the most studied phenomena in experimental pragmatics—indeed, the study of scalar implicature played a seminal role in the emergence of 'experimental pragmatics' as a field. Much research on the computation of scalar implicatures relies on behavioural measures, including explicit judgements about the meanings of various utterances under various conditions (Bott & Noveck, 2004; Bott, Bailey, & Grodner, 2012; Chemla & Spector, 2011; Chevallier et al., 2008; De Neys & Schaeken, 2007; Degen, 2015; Dieussaert, Verkerk, Gillard, & Schaeken, 2011; Doran, Baker, McNabb, Larson, & Ward, 2009; Feeney, Scafton, Duckworth, & Handley, 2004; Geurts & Pouscoulous, 2009; Goodman & Stuhlmüller, 2013; Marty, Chemla, & Spector, 2013; van Tiel, van Miltenburg, Zevakhina, & Geurts, 2016; among others), reaction and reading time measures (Bergen & Grodner, 2012; Bezuidenhout & Cutting, 2002; Bott & Noveck, 2004; Breheny, Katsos, & Williams, 2006; Chemla, Cummins, & Singh, 2017; Noveck & Posada, 2003; Politzer-Ahles & Husband, 2018; among others), measures of eye movements (Breheny, Ferguson, & Katsos, 2012, 2013; Degen & Tanenhaus, 2015; Grodner, Klein, Carbary, & Tanenhaus, 2010; Huang & Snedeker, 2009; among others) and mouse tracking (Tomlinson, Bailey, & Bott, 2013). Recent reviews of these literatures are available in, for example, Chemla and Singh (2014a,b) and Sauerland and Schumacher (2016). ...
Article
One of the most widely studied phenomena in neuropragmatics—the study of how the brain derives context‐ and speaker‐based aspects of meaning—is scalar implicature. A scalar implicature is the interpretation of a proposition like Some of the students failed as meaning that a stronger proposition (All of the students failed) is not true. While scalar implicatures have been a significant object of study for decades, in recent years there has been an explosion of experiments investigating them using neuroscientific methods, particularly electroencephalography. Much of this research aims to identify neural substrates of comprehending scalar implicatures. Here, I review the extant findings and argue that most of these studies have not directly observed neural correlates of scalar implicatures; rather, they have mostly observed downstream and/or domain‐general processes that happen to be related to implicatures, but not uniquely so. I argue that an instrumental approach to neuroscience—one that treats brain components not as objects of research in and of themselves, but as tools for learning about pragmatics—would be a valuable addition to this emerging field.
... The experiment was run on a web-based platform using Ibex Farm (Drummond, 2013). Web-based testing was used because it allowed us to expand our participant pool by recruiting heritage speakers across Germany, and because this method has been found to yield reliable results in previous psycholinguistics studies (Chemla et al., 2016; Dillon et al., 2014; Enochson and Culbertson, 2015; Gibson et al., 2011; Sprouse, 2011; Wagers and Phillips, 2014). ...
Article
Previous work has shown that heritage grammars are often simplified compared to their monolingual counterparts, especially in domains in which the societally-dominant language makes fewer distinctions than the heritage language. We investigated whether linguistic simplification extended to the anaphoric system of Turkish heritage speakers living in Germany. Whereas the Turkish monolingual grammar features a three-way distinction between reflexives (kendi), pronouns (o), and syntactically-unconstrained anaphors (kendisi), German only distinguishes between two categories, pronouns and reflexives. We examined whether heritage speakers simplified the Turkish anaphor system by assimilating the syntactically unconstrained anaphor kendisi to either of the two categories attested in the societally-dominant language, German. Speakers' sensitivity to grammatical distinctions in comprehension was assessed using an offline antecedent selection task and an online self-paced reading task. Our results showed that heritage speakers retain the three-way anaphoric distinctions of the monolingual grammar but there were also differences between the results of the offline and the online tasks. We suggest that processing paradigms are a useful complement to judgment tasks when studying how heritage speakers use grammatical distinctions involving optionality, as online measures can reveal distinctions that are allowed, even if dispreferred by comprehenders.
... Interpreting our results in relation to this theory is a different matter from the dual-route theories being discussed here. We will return to this point in greater depth below. Some work on factors which impact on local enrichment includes Chemla et al. (2017) and Sun and Breheny (2018c). More details can be found in Bergen et al. (2016). ...
Article
Full-text available
Several recent studies have shown that different scalar terms are liable to give rise to scalar inferences at different rates (Doran et al., 2009, 2012; van Tiel et al., 2016). A number of potential factors have been explored to account for such Scalar Diversity. These factors can be seen as methodological in origin, or as motivated by widely discussed analyses of scalar inferences. Such factors allow us to explain some of the variation, but they leave much of it unexplained. In this paper, we explore two new potential factors. One is methodologically motivated, related to the choice of items in previous studies. The second is motivated by theoretical approaches which go beyond the standard Gricean approach to pragmatic effects. In particular, we consider dual route theories which allow for scalar inferences to be explained either using ‘global’ pragmatic derivations, like those set out in standard Gricean theory, or using local adjustments to interpretation. We focus on one such theory, based on the Bayesian Rational Speech Act approach (RSA-LU, Bergen et al., 2016). We show that RSA-LU predicts that a scalar term’s liability to certain kinds of local enrichment will explain some Scalar Diversity. In three experiments, we show that both proposed factors are active in the scalar diversity effect. We conclude with a discussion of the grammatical approach to local effects and show that our results provide better evidence for dual route approaches to scalar effects.
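For readers unfamiliar with the Rational Speech Act framework that RSA-LU extends, here is a minimal sketch of the plain RSA recursion (without the lexical-uncertainty layer of Bergen et al., 2016); the toy worlds, utterances, prior, and rationality parameter are assumptions made for illustration.

```python
# Sketch of the plain Rational Speech Act recursion (no lexical-uncertainty layer),
# on a toy some/all scenario; all names and numbers here are illustrative assumptions.
import numpy as np

worlds = ["none", "some-not-all", "all"]
utterances = ["some", "all", "none"]
# Literal semantics: rows = utterances, cols = worlds (1 = true).
truth = np.array([
    [0, 1, 1],   # "some" is true in the some-not-all and all worlds
    [0, 0, 1],   # "all"
    [1, 0, 0],   # "none"
], dtype=float)

prior = np.ones(len(worlds)) / len(worlds)
alpha = 4.0   # speaker rationality (assumed)

def normalize(m, axis):
    s = m.sum(axis=axis, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

L0 = normalize(truth * prior, axis=1)                 # literal listener P(w | u)
with np.errstate(divide="ignore"):
    utility = np.where(L0 > 0, np.log(L0), -np.inf)
S1 = normalize(np.exp(alpha * utility).T, axis=1)     # pragmatic speaker P(u | w)
L1 = normalize(S1.T * prior, axis=1)                  # pragmatic listener P(w | u)

print(dict(zip(worlds, np.round(L1[0], 3))))          # interpretation of "some"
# L1 shifts "some" sharply toward the some-not-all world: the scalar enrichment
# emerges from the recursion rather than from a dedicated strengthening operator.
```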
... Experiments 2 and 3 revealed no correlation between participants' spontaneous rate of implicature derivation and their spontaneous grammaticality judgments of intervention sentences. In Experiment 4, we drew on previous experimental work revealing that people can be trained to derive implicatures or not to derive implicatures (see, for example, Noveck & Posada 2003; Bott & Noveck 2004; Chemla et al. 2017). We investigated whether training participants either to derive or not to derive implicatures would influence the strength of intervention effects, comparing the performance of a group that received training consistent with implicature derivation, with that of a group that received training inconsistent with implicature derivation. ...
Article
Full-text available
This paper reports on five experiments investigating intervention effects in negative polarity item (NPI) licensing. Such intervention effects involve the unexpected ungrammaticality of sentences that contain an 'intervener', such as a universal quantifier, in between the NPI and its licensor. For example, the licensing of the NPI 'any' in the sentence *'Monkey didn’t give every lion any chocolate' is disrupted by intervention. Interveners also happen to be items that trigger scalar implicatures in environments in which NPIs are licensed (Chierchia 2004; 2013). A natural hypothesis, initially proposed in Chierchia (2004), is that there is a link between the two phenomena. In this paper, we investigate whether intervention effects arise when scalar implicatures are derived.
... The experiment was run remotely on a web-based platform using the Ibex Farm software (Drummond; http://spellout.net/ibexfarm). Web-based testing was used because it allowed us to expand the number of participants, and because this method has been found to yield reliable results in previous psycholinguistics studies (Chemla, Cummins, & Singh, 2016; Dillon, Clifton, & Frazier, 2014; Enochson & Culbertson, 2014; Gibson, Piantadosi, & Fedorenko, 2011; Sprouse, 2011; Wagers & Phillips, 2014). For German native speakers, each session lasted approximately 30 min. ...
Article
Full-text available
Second language speakers often struggle to apply grammatical constraints such as subject–verb agreement. One hypothesis for this difficulty is that it results from problems suppressing syntactically unlicensed constituents in working memory. We investigated which properties of these constituents make them more likely to elicit errors: their grammatical distance to the subject head or their linear distance to the verb. We used double modifier constructions (e.g., the smell of the stables of the farmers), where the errors of native speakers are modulated by the linguistic relationships between the nouns in the subject phrase: second plural nouns, which are syntactically and semantically closer to the subject head, elicit more errors than third plural nouns, which are linearly closer to the verb (2nd-3rd-noun asymmetry). In order to dissociate between grammatical and linear distance, we compared embedded and coordinated modifiers, which were linearly identical but differed in grammatical distance. Using an attraction paradigm, we showed that German native speakers and proficient Russian speakers of German exhibited similar attraction rates and that their errors displayed a 2nd-3rd-noun asymmetry, which was more pronounced in embedded than in coordinated constructions. We suggest that both native and second language learners prioritize linguistic structure over linear distance in their agreement computations.
... Using the visual world paradigm, researchers have found that the scalar implicature can be incrementally processed, i.e., prior to the offset of the test audios (Breheny et al., 2006, 2013; Grodner et al., 2010; Degen and Tanenhaus, 2015; Foppolo and Marelli, 2017), although under certain experimental settings the processing could be delayed (Huang and Snedeker, 2009). Using complex statements where the scalar quantifiers are embedded under other words such as the universal quantifier each, researchers have found that both the global reading (Geurts and Pouscoulous, 2009) and the local reading (Clifton and Dube, 2010; Chemla and Spector, 2011; Chemla et al., 2017) can be constructed. The first cluster of studies seems to support the hybrid account. ...
Article
Full-text available
Accounts based on the pragmatic maxim of quantity make different predictions about the computation of scalar versus ignorance inferences. These different predictions are evaluated in two eye-tracking experiments using a visual world paradigm to assess the on-line computation of inferences. The test sentences contained disjunction phrases, which engender both kinds of inferences. The first experiment documented that both inferences are computed immediately upon encountering the disjunctive connective, at nearly identical temporal locations. The second experiment was designed to determine whether or not there exists an intermediate stage at which the truth of the corresponding conjunction phrase is ignored. No such stage was found.
... In our view, the observed local effects constitute problematic evidence only for pragmatic accounts à la Simons, according to which "presuppositions are not attached to atomic clauses, but are inferences derivable from the utterance as a whole, given the conversational situation" (Simons 2001:16). It is important to stress that the detection of local effects can be compatible with other pragmatic accounts of presuppositions (e.g. Schlenker 2007, 2008) and, more generally, with other kinds of pragmatic processing like scalar implicatures (Chemla and Singh 2014; Chemla et al. 2017). ...
Article
Full-text available
The present study investigates the processing of presupposition accommodation. In particular, it concerns the processing costs and the time-course of accommodation as compared to presupposition satisfaction. Data collected in a self-paced word-by-word reading-time experiment support three results. First, independently of the presupposition trigger in use, accommodation is costlier than satisfaction. Second, presupposition accommodation takes place immediately, as soon as the trigger becomes available, and proceeds incrementally during sentence processing. Third, accommodated information is harder to recall. The results offer evidence for the on-line processing of presuppositions and, consistently with the traditional semantic framework, support the idea that presuppositions are semantic properties encoded in the lexical meaning of the presupposition triggers.
Article
Full-text available
Like many languages, European French has a contrapositive response option (Si) to reject the negative content of a question and to express accord with the questioner’s implicit affirmative. Consider the question “Barack does not eat meat?” (in French), where the response Si indicates that he does. Inspired by Gricean analyses, we view Si as an expression that includes a pragmatic component. Based on extant studies on pragmatic inference, we predicted that the Si response ought to appear cognitively costly compared to felicitous Oui and Non answers. We created an original task that enjoins a participant to remove a box’s cover (while searching for a candy) before hearing a puppet’s question. In the critical Negative-Si (NS) condition, the participant finds the candy in, say, a white box (when two boxes are under consideration) and the interlocutor-puppet’s negative question is It is not in the white box? Besides rates of accurate responses, our main dependent variable was Response Reaction Times (RRTs), viz. the time to naturally voice an answer (Si in this case). Controls were the Affirmative-Oui (AO), Affirmative-Non (AN), and Negative-Non (NN) conditions. Importantly, the puppet began each trial with one of three kinds of prior belief: (a) by declaring that the candy is surely in, or (b) surely not in, the to-be-presented box, or (c) by saying “I don’t know where it is.” These were included to determine whether answerers consider the questioner’s prior epistemic state when responding. Experiment 1 compared 6-year-olds to adults and found (i) that proficient uses of Si are costly with respect to the other three conditions and (ii) that answers in the wake of an “I don’t know where it is” declaration prompt slowdowns compared to the other two declarations. Both findings are consistent with our pre-registered predictions. Four-year-olds, investigated in Experiment 2, pattern almost identically with the 6-year-olds, with one major exception: their fastest response occurs when answering Si, leading to a unique developmental effect. Our account for this finding is that four-year-olds rely on a minimally semantic representation of Si, which encodes disagreement between the negative content of the question and the facts. We propose that there are pragmatic processes intrinsic to Si – which ultimately signal agreement with the questioner’s implicit affirmative – and that mastering these requires greater maturity.
Article
Full-text available
The computation of scalar implicatures is sometimes costly relative to basic meanings. Among the costly computations are those that involve strengthening “some” to “not all” and strengthening inclusive disjunction to exclusive disjunction. The opposite is true for some other cases of strengthening, where the strengthened meaning is less costly than its corresponding basic meaning. These include conjunctive strengthenings of disjunctive sentences (e.g., free-choice inferences) and exactly-readings of numerals. Assuming that these are indeed all instances of strengthening via implicature/exhaustification, the puzzle is to explain why strengthening sometimes increases costs while at other times it decreases costs. I develop a theory of processing costs that makes no reference to the strengthening mechanism or to other aspects of the derivation of the sentence's form/meaning. Instead, costs are determined by domain-general considerations of the grammar's output, and in particular by aspects of the meanings of ambiguous sentences and particular ways they update the context. Specifically, I propose that when the hearer has to disambiguate between a sentence's basic and strengthened meaning, the processing cost of any particular choice is a function of (i) a measure of the semantic complexity of the chosen meaning and (ii) a measure of how much relevant uncertainty it leaves behind in the context. I measure semantic complexity with Boolean Complexity in the propositional case and with semantic automata in the quantificational case, both of which give a domain-general measure of the minimal representational complexity needed to express the given meaning. I measure relevant uncertainty with the information-theoretic notion of entropy; this domain-general measure formalizes how ‘far’ the meaning is from giving a complete answer to the question under discussion, and hence gives an indication of how much representational complexity is yet to come. Processing costs thus follow from domain-general considerations of current and anticipated representational complexity. The results might also speak to functional motivations for having strengthening mechanisms in the first place. Specifically, exhaustification allows language users to use simpler forms than would be available without it to both resolve relevant uncertainties and convey complex meanings.
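The entropy component of this proposal can be illustrated with a small sketch; the toy question under discussion, the world space, and the uniform prior below are my assumptions, and the Boolean-complexity component is left out.

```python
# Sketch of the entropy component of the proposed cost measure: how much relevant
# uncertainty a chosen meaning leaves about the question under discussion (QUD).
# The toy QUD "how many of the 3 students passed?" and the uniform prior are
# illustrative assumptions, not the paper's materials.
from math import log2

worlds = list(range(4))                      # world i = "exactly i of 3 students passed"
qud_cells = [{0}, {1}, {2}, {3}]             # complete answers to the QUD

def entropy_given(meaning):
    """Entropy over QUD cells after updating a uniform prior with `meaning`."""
    masses = [len(cell & meaning) for cell in qud_cells]
    total = sum(masses)
    probs = [m / total for m in masses if m > 0]
    return -sum(p * log2(p) for p in probs)

weak_some   = {1, 2, 3}                      # "some passed", basic meaning
strong_some = {1, 2}                         # "some passed", strengthened (not all)
print("weak some   :", round(entropy_given(weak_some), 3))    # ~1.585 bits left
print("strong some :", round(entropy_given(strong_some), 3))  # 1.0 bit left
# The strengthened meaning leaves less relevant uncertainty but requires a more complex
# representation; the proposed cost measure trades these two quantities off.
```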
Article
Full-text available
Previous studies have shown that multilingual speakers are influenced by their native (L1) and non-native (L2) grammars when learning a new language. But, so far, these studies have mostly used untimed metalinguistic tasks. Here we examine whether multilinguals’ prior grammars also affect their sensitivity to morphosyntactic constraints during processing. We use speeded judgment and self-paced reading tasks to examine the comprehension of German possessive pronouns. To investigate whether native and non-native grammars differentially affect participants’ performance, we compare two groups of non-native German speakers with inverse L1–L2 distributions: a group with L1 Spanish – L2 English, and a group with L1 English – L2 Spanish. We show that the reading profiles of both groups are modulated by their L1 grammar, with L2 proficiency selectively affecting participants’ judgment accuracy but not their reading times. We propose that reading comprehension is mainly influenced by multilinguals’ native grammar, but that knowledge of an L2 grammar can further increase sensitivity to morphosyntactic violations in an additional language.
Article
Full-text available
In Magri 2009a, I argue that a sentence such as '#Some Italians come from a warm country' sounds odd because it triggers the scalar implicature that not all Italians come from a warm country, which mismatches with the piece of common knowledge that all Italians come from the same country. If this proposal is on the right track, then oddness can be used as a diagnostic for scalar implicatures. In this paper, I use this diagnostic to provide one more argument that scalar implicatures are computed not only at the matrix level but also in embedded position. The argument is based on a puzzling pattern of oddness in downward entailing environments. Some apparently unrelated facts about restrictions on temporal modification with individual-level predicates are shown to fit into the pattern. http://dx.doi.org/10.3765/sp.4.6
Article
Full-text available
We present experimental evidence showing that there is considerable variation between the rates at which scalar expressions from different lexical scales give rise to upper-bounded construals. We investigated two factors that might explain the variation between scalar expressions: first, the availability of the lexical scales, which we measured on the basis of association strength, grammatical class, word frequencies and semantic relatedness, and, second, the distinctness of the scalemates, which we operationalized on the basis of semantic distance and boundedness. It was found that only the second factor had a significant effect on the rates of scalar inferences.
Article
Full-text available
Grice (1975) pointed out that the ignorance inferences normally drawn when disjunctive sentences are uttered are cancelled when it is presupposed that speakers are not going to provide all of the relevant information that they have available (e.g., in the context of a treasure hunt). This argues that ignorance inferences depend on the maxim of quantity for their derivation. Here it is argued that the situation with Scalar Implicatures is different. This is expected by the grammatical theory of Scalar Implicatures, but not by standard Gricean or neo-Gricean alternatives. http://dx.doi.org/10.3765/sp.7.5
Article
Full-text available
Quantity implicatures are inferences triggered by an utterance based on what other utterances a speaker could have made instead. Using ideas and formalisms from game theory, I demonstrate that these inferences can be explained in a strictly Gricean sense as *rational behavior*. To this end, I offer a procedure for constructing the context of utterance insofar as it is relevant for quantity reasoning as a game between speaker and hearer. I then give a new solution concept that improves on classical equilibrium approaches in that it uniquely selects the desired "empirically correct" play in these interpretation games by a chain of back-and-forth reasoning about players' behavior. To make this formal approach more accessible to a wider audience, I give a simple algorithm with the help of which the model's solution can be computed without having to do heavy calculations of probabilities, expected utilities and the like. This rationalistic approach subsumes and improves on recent exhaustivity-based approaches. It makes correct and uniform predictions for quantity implicatures of various epistemic varieties, free choice readings of disjunctions, as well as a phenomenon tightly related to the latter, namely so-called "simplification of disjunctive antecedents". doi:10.3765/sp.4.1
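The back-and-forth reasoning Franke describes is often implemented as iterated best response. The sketch below is a hedged illustration of that idea on a minimal some/all signalling game; the game and its tie-breaking conventions are assumptions made for the example, not Franke's own algorithm.

```python
# Sketch of iterated best-response (IBR) reasoning for a tiny some/all game. The game
# setup and tie-breaking conventions are my assumptions for illustration only.
worlds = ["some-not-all", "all"]
messages = {"some": {"some-not-all", "all"}, "all": {"all"}}   # literal denotations

def listener_level0():
    # Naive hearer: any world in which the message is true is a possible interpretation.
    return {m: set(den) for m, den in messages.items()}

def speaker_best_response(listener):
    # For each world, keep the usable messages whose interpretation set is smallest,
    # i.e. those giving the hearer the best chance of recovering that world.
    choice = {}
    for w in worlds:
        usable = [m for m, interp in listener.items() if w in interp]
        best = min(len(listener[m]) for m in usable)
        choice[w] = {m for m in usable if len(listener[m]) == best}
    return choice

def listener_best_response(speaker):
    # Interpret a message as the set of worlds in which a best-responding speaker sends it.
    return {m: {w for w in worlds if m in speaker[w]} for m in messages}

S1 = speaker_best_response(listener_level0())
L2 = listener_best_response(S1)
print(S1)   # {'some-not-all': {'some'}, 'all': {'all'}}
print(L2)   # {'some': {'some-not-all'}, 'all': {'all'}}  -> "some" gets strengthened
```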
Article
Full-text available
For over a decade, the interpretation of scalar expressions under embedding has been a much debated issue, with proposed accounts ranging from strictly pragmatic, on one end of the spectrum, to lexico-syntactic, on the other. There has been some confusion as to what exactly the controversy is about, and we argue that what is at stake is the division of labour between pragmatic and truth-conditional mechanisms. All parties to the debate agree that upper-bounded construals of scalar expressions are variously caused by conversational implicatures and truth-conditional narrowing, but whereas Griceans argue that the former mechanism is the main cause, conventionalists point to the latter, assuming as a matter of course that the source of truth-conditional narrowing lies in linguistic convention; on this view, narrowing is either a lexical or a syntactic phenomenon. Since researchers’ introspective judgments tend to agree with the theories they advocate, a number of experimental studies have recently tried to shed light on this issue. In this paper, we review the experimental record, and argue that the extant data favour a pragmatic account. http://dx.doi.org/10.3765/sp.6.9
Article
Full-text available
Scalar implicatures are inferences that arise when a weak expression is used instead of a stronger alternative. For example, when a speaker says, “Some of the children are in the classroom,” she often implies that not all of them are. Recent processing studies of scalar implicatures have argued that generating an implicature carries a cost. In this study we investigated this cost using a sentence verification task similar to that of Bott and Noveck (2004) combined with a response deadline procedure to estimate speed and accuracy independently. Experiment 1 compared implicit upper-bound interpretations (some [but not all]) with lower-bound interpretations (some [and possibly all]). Experiment 2 compared an implicit upper-bound meaning of some with the explicit upper-bound meaning of only some. Experiment 3 compared an implicit lower-bound meaning of some with the explicit lower-bound meaning of at least some. Sentences with implicatures required additional processing time that could not be attributed to retrieval probabilities or factors relating to semantic complexity. Our results provide evidence against several different types of processing models, including verification and nonverification default implicature models and cost-free contextual models. More generally, our data are the first to provide evidence of the costs associated with deriving implicatures per se.
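Response-deadline data of the kind described here are commonly summarized with a shifted-exponential speed-accuracy tradeoff function. The sketch below, using simulated placeholder numbers rather than the study's data, shows how fitting such a function separates asymptotic accuracy from processing speed.

```python
# Sketch of fitting a standard speed-accuracy tradeoff (SAT) function
# d(t) = lam * (1 - exp(-beta * (t - delta))) for t > delta, as commonly used with
# response-deadline data. All numbers are simulated placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def sat(t, lam, beta, delta):
    return lam * (1 - np.exp(-beta * np.clip(t - delta, 0, None)))

deadlines = np.array([0.3, 0.5, 0.8, 1.2, 2.0, 3.0])   # seconds after sentence onset
rng = np.random.default_rng(0)
true_params = (1.6, 2.2, 0.35)                          # assumed generating parameters
accuracy = sat(deadlines, *true_params) + rng.normal(0, 0.03, size=deadlines.size)

params, _ = curve_fit(sat, deadlines, accuracy, p0=[1.5, 2.0, 0.3])
lam, beta, delta = params
print(f"asymptote={lam:.2f}, rate={beta:.2f}, intercept={delta:.2f}s")
# The asymptote reflects accuracy independent of speed; the rate and intercept reflect
# processing dynamics, which is how deadline designs estimate the two independently.
```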
Article
Full-text available
A framework for pragmatic analysis is proposed which treats discourse as a game, with context as a scoreboard organized around the questions under discussion by the interlocutors. The framework is intended to be coordinated with a dynamic compositional semantics. Accordingly, the context of utterance is modeled as a tuple of different types of information, and the questions therein — modeled, as is usual in formal semantics, as alternative sets of propositions — constrain the felicitous flow of discourse. A requirement of Relevance is satisfied by an utterance (whether an assertion, a question or a suggestion) iff it addresses the question under discussion. Finally, it is argued that the prosodic focus of an utterance canonically serves to reflect the question under discussion (at least in English), placing additional constraints on felicity in context. http://dx.doi.org/10.3765/sp.5.6
Article
Full-text available
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.
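As a rough illustration of the contrast Barr and colleagues draw, the sketch below fits a random-intercepts-only model and a model with a by-subject random slope to simulated data using statsmodels; the data are placeholders, and fully crossed subject and item random effects (central to the paper's argument) are omitted for brevity.

```python
# Sketch contrasting a random-intercepts-only mixed model with one that adds a
# by-subject random slope for the condition effect (closer to a "maximal" structure).
# Data are simulated placeholders; crossed subject and item random effects would need
# variance components (or lme4 in R) and are left out of this sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects, trials = 30, 20
rows = []
for s in range(subjects):
    subj_int = rng.normal(0, 50)          # by-subject intercept
    subj_slope = rng.normal(0, 30)        # by-subject sensitivity to the manipulation
    for _ in range(trials):
        cond = rng.integers(0, 2)         # 0 = weak reading, 1 = strengthened reading
        rt = 600 + subj_int + (80 + subj_slope) * cond + rng.normal(0, 60)
        rows.append({"subject": s, "cond": cond, "rt": rt})
df = pd.DataFrame(rows)

intercepts_only = smf.mixedlm("rt ~ cond", df, groups=df["subject"]).fit()
with_subj_slope = smf.mixedlm("rt ~ cond", df, groups=df["subject"],
                              re_formula="~cond").fit()
# Standard error of the condition effect under each random-effects structure:
print(intercepts_only.bse["cond"], with_subj_slope.bse["cond"])
# The slope model absorbs between-subject variability in the condition effect, which
# typically yields more conservative (and more generalizable) fixed-effect inference.
```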
Article
Full-text available
Scalar implicatures are traditionally viewed as pragmatic inferences that result from a reasoning about speakers' communicative intentions (Grice 1989). This view has been challenged in recent years by theories that propose that scalar implicatures are a grammatical phenomenon. Such theories claim that scalar implicatures can be computed in embedded positions and enter into the recursive computation of meaning—something that is not expected under the traditional pragmatic view. Recently, Geurts and Pouscoulous (2009) presented an experimental study in which embedded scalar implicatures were not detected. Using a novel version of the truth-value judgment task, we provide evidence that subjects sometimes compute embedded scalar implicatures.
Article
Full-text available
Building on previous works which argued that scalar implicatures can be computed in embedded positions, this paper proposes a constraint on exhaustification (an economy condition) which restricts the conditions under which an exhaustivity operator can be licensed. We show that this economy condition allows us to derive a number of generalizations, such as, in particular, the ‘Implicature Focus Generalization’: scalar implicatures can be embedded under a downward-entailing operator only if the (relevant) scalar term bears pitch accent. Our economy condition also derives specific predictions regarding the licensing of so-called Hurford disjunctions.
Article
Full-text available
In this paper an approach to the exhaustive interpretation of answers is developed. It builds on a proposal brought forward by Groenendijk and Stokhof (1984). We will use the close connection between their approach and McCarthy’s (1980, 1986) predicate circumscription and describe exhaustive interpretation as an instance of interpretation in minimal models, well-known from work on counterfactuals (see for instance Lewis (1973)). It is shown that by combining this approach with independent developments in semantics/pragmatics one can overcome certain limitations of Groenendijk and Stokhof’s (1984) proposal. In the last part of the paper we will provide a Gricean motivation for exhaustive interpretation building on work of Schulz (to appear) and van Rooij and Schulz (2004).
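The minimal-models idea behind this approach can be sketched in a few lines; the toy question "Who came?" over a three-person domain and the representation of models as sets are assumptions made for illustration.

```python
# Sketch of exhaustive interpretation as selection of minimal models (the
# circumscription idea behind Groenendijk & Stokhof-style exhaustification).
# The toy question "Who came?" over a 3-person domain is an illustrative assumption.
from itertools import combinations

domain = {"ann", "bob", "carol"}
# A model of the answer fixes who came; the answer "Ann came" only requires ann.
models_of_answer = [frozenset(s) | {"ann"}
                    for r in range(len(domain))
                    for s in combinations(domain - {"ann"}, r)]

def minimal(models):
    # Keep models whose "came" extension does not properly include another model's.
    return [m for m in models if not any(other < m for other in models)]

print([set(m) for m in minimal(models_of_answer)])
# -> [{'ann'}]: in every minimal model only Ann came, i.e. the exhaustive reading
#    "Only Ann came" falls out of interpretation in minimal models.
```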
Article
Full-text available
Hurford’s Constraint (Hurford, Foundations of Language, 11, 409–411, 1974) states that a disjunction is infelicitous if its disjuncts stand in an entailment relation: #John was born in Paris or in France. Gazdar (Pragmatics, Academic Press, NY, 1979) observed that scalar implicatures can obviate the constraint. For instance, sentences of the form (A or B) or (Both A and B) are felicitous due to the exclusivity implicature of the first disjunct: A or B implicates ‘not (A and B)’. Chierchia, Fox, and Spector (Handbook of semantics, 2008) use the obviation of Hurford’s Constraint in these cases to argue for a theory of local implicature. I present evidence indicating that the constraint needs to be modified in two ways. First, implicatures can obviate Hurford’s Constraint only in earlier disjuncts, not later ones: #(Both A and B) or (A or B). Second, the constraint rules out not only disjuncts that stand in an entailment relation, but also disjuncts that are even mutually consistent: #John is from Russia or Asia. I propose to make sense of these facts by providing an incremental evaluation procedure which checks that each new disjunct to the right is inconsistent with the information to its left, before the disjunct can be strengthened by local implicature.
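A hedged sketch of the incremental evaluation procedure described here: disjuncts are checked left to right, each new disjunct must be inconsistent with the (possibly locally strengthened) information to its left, and the toy propositions below are my assumptions.

```python
# Sketch of the incremental evaluation of Hurford disjunctions described above: each
# new disjunct must be inconsistent with the information to its left, and a disjunct
# may be strengthened by a local implicature before later disjuncts are checked.
# Propositions are sets of toy worlds; the examples are illustrative assumptions.
def incremental_hurford_ok(disjuncts):
    """disjuncts: list of (plain_meaning, strengthened_meaning) pairs (sets of worlds)."""
    info_so_far = set()
    for plain, strengthened in disjuncts:
        if plain & info_so_far:              # consistent with material to its left
            return False                     # Hurford violation
        info_so_far |= strengthened          # left disjuncts contribute strengthened content
    return True

# Worlds encode how many of two people came: 0, 1 ("a or b"), or 2 ("both").
one, both = {1}, {2}
a_or_b_plain, a_or_b_strengthened = one | both, one      # "A or B" (+ exclusivity implicature)
a_and_b = both

# "(A or B) or (both A and B)": the implicature of the first disjunct rescues the sentence.
print(incremental_hurford_ok([(a_or_b_plain, a_or_b_strengthened), (a_and_b, a_and_b)]))  # True
# "(Both A and B) or (A or B)": a later implicature cannot rescue an earlier overlap.
print(incremental_hurford_ok([(a_and_b, a_and_b), (a_or_b_plain, a_or_b_strengthened)]))  # False
```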
Article
Full-text available
Predicates such as tall or to know Latin, which intuitively denote permanent properties, are called individual-level predicates. Many peculiar properties of this class of predicates have been noted in the literature. One such property is that we cannot say #John is sometimes tall. Here is a way to account for this property: this sentence sounds odd because it triggers the scalar implicature that the alternative John is always tall is false, which cannot be, given that, if John is sometimes tall, then he always is. This intuition faces two challenges. First: this scalar implicature has a weird nature, since it must be surprisingly robust (otherwise, it could be cancelled and the sentence rescued) and furthermore blind to the common knowledge that tallness is a permanent property (since this piece of common knowledge makes the two alternatives equivalent). Second: it is not clear how this intuition could be extended to other, more complicated properties of individual-level predicates. The goal of this paper is to defend the idea of an implicature-based theory of individual-level predicates by facing these two challenges. In the first part of the paper, I try to make sense of the weird nature of these special mismatching implicatures within the recent grammatical framework for scalar implicatures of Chierchia (Structures and beyond, 2004) and Fox (2007). In the second part of the paper, I show how this implicature-based line of reasoning can be extended to more complicated properties of individual-level predicates, such as restrictions on the interpretation of their bare plural subjects, noted in Carlson (Reference to kinds in English. Doctoral dissertation, University of Massachusetts at Amherst, 1977), Milsark (Linguistic Analysis 3.1: 1–29, 1977), and Fox (Natural Language Semantics 3: 283–341, 1995); restrictions on German word order, noted in Diesing (Indefinites, 1992); and restrictions on Q-adverbs, noted in Kratzer (The Generic Book, ed. Carlson and Pelletier, 125–175, 1995).
Article
Full-text available
When Tarzan asks Jane Do you like my friends? and Jane answers Some of them, her underinformative reply implicates Not all of them. This scalar inference arises when a less-than-maximally informative utterance implies the denial of a more informative proposition. Default Inference accounts argue that this inference is linked to lexical items (e.g., some) and is generated automatically and largely independently of context. Alternatively, Relevance theory (Sperber & Wilson, 1985/1995) treats such inferences as contextual and as arriving effortfully with deeper processing of utterances. We compare these accounts in four experiments that employ a sentence verification paradigm. We focus on underinformative sentences, such as Some elephants are mammals, because these are false with a scalar inference and true without it. Experiment 1 shows that participants are less accurate and take significantly longer to answer correctly when instructions call for a Some but not all interpretation rather than a Some and possibly all interpretation. Experiment 2, which modified the paradigm of Experiment 1 so that correct responses to both interpretations resulted in the same overt response, reports results that confirm those of the first experiment. Experiment 3, which imposed no interpretations, reveals that those who employed a Some but not all reading of the underinformative items took longest to respond. Experiment 4 shows that the rate of scalar inferences increased as permitted response time did. These results argue against a Neo-Gricean account and in favor of Relevance theory.
Article
Full-text available
This article develops a Gricean account for the computation of scalar implicatures in cases where one scalar term is in the scope of another. It shows that a cross-product of two quantitative scales yields the appropriate scale for many such cases. One exception are cases involving disjunction. For these, I propose an analysis that makes use of a novel, partially ordered quantitative scale for disjunction and capitalizes on idea that implicatures may have different epistemic status.
Article
Full-text available
In terms of Groenendijk and Stokhof’s (1984) formalization of exhaustive interpretation, many conversational implicatures can be accounted for. In this paper we justify and generalize this approach. Our justification proceeds by relating their account via Halpern and Moses’ (1984) nonmonotonic theory of ‘only knowing’ to the Gricean maxims of Quality and the first sub-maxim of Quantity. The approach of Groenendijk and Stokhof (1984) is generalized such that it can also account for implicatures that are triggered in subclauses not entailed by the whole complex sentence.
Article
In this article I propose that there are unembedded disjunctions which receive a conjunctive interpretation via recursive implicature computation. This is unexpected: since disjunction competes with conjunction, implicature computation will usually result in the negation of the conjunctive alternative. I argue that certain disjunctions do not compete with a conjunctive alternative, due to a constraint on the assertability of scalar alternatives. As a result, recursive implicature computation can strengthen matrix or into and in these cases. I discuss the pragmatic factors driving recursive implicature computation, as well as the scope of the proposed assertability condition on alternatives.
Article
Inferences that result from exhaustification of a sentence S depend on the set of alternatives to S. In this paper, we present some inference patterns that are problematic for previous theories of alternatives and propose some structural constraints on the derivation of formal alternatives which derive the observations.
Article
In Part I, we have introduced two of the main approaches to scalar implicature: the Gricean approach and the grammatical approach. We have argued that although they rely on conceptually different views about the phenomenon, they share various insights, and we argued that their empirical differences were more subtle than what one may have expected. In this second part of this review paper, we will sample some experimental results with two goals in mind. First, we will exemplify some simplifications that are found in the literature and examine the consequences of these simplifications on the interpretability of experimental results. Second, we will suggest future directions that the experimental turn might consider exploring, directions that seem to us to have the potential to illuminate the richness of the competing theories and the potential to dissociate or improve them.
Article
(for Part I and Part II) There has been a recent ‘experimental turn’ in the study of scalar implicature, yielding important results concerning online processing and acquisition. This paper highlights some of these results and places them in the current theoretical context. We argue that there is sometimes a mismatch between theoretical and experimental studies, and we point out how some of these mismatches can be resolved. We furthermore highlight ways in which the current theoretical and experimental landscape is richer than is often assumed, and in light of this discussion, we offer some suggestions for what seem to us promising directions for the experimental turn to explore. The article is divided into two parts. Part I first presents the two dominant families of accounts of scalar implicature, the domain-general Gricean account and the domain-specific grammatical account. We try to separate the various components of these theories and connect them to relevant psycholinguistic predictions. Part II examines and reinterprets several prominent experimental results in light of the theoretical presentation proposed in the first part.
Conference Paper
This paper shows that both scalar implicatures and exhaustification of answers can be understood as the outcome of a pragmatic reasoning based on Gricean maxims. I offer a formalization of the Gricean reasoning that solves some of the problems (cf. Chierchia, 2002) faced by standard neo-Gricean accounts. I further show that positive and non-positive answers pattern very differently, in a way that can be predicted by stating carefully, for a given question-answer pair, what counts as an 'alternative answer'-this notion plays the same role as that of 'scalar alternative' in previous approaches. The general approach is very similar in spirit to van Rooij and Schulz (2004).
Article
Linguistic inferences have traditionally been studied and sorted into several categories, such as entailments, implicatures or presuppositions. This typology is mostly based on traditional linguistic means, such as introspective judgments about phrases occurring in different constructions, in different conversational contexts. More recently, the processing properties of these inferences have also been studied (see, e.g., recent work showing that scalar implicature is a costly phenomenon). Our focus is on free choice permission, a phenomenon by which conjunctive inferences are unexpectedly added to disjunctive sentences. For instance, a sentence such as "Mary is allowed to eat an ice-cream or a cake" is normally understood as granting permission both for eating an ice-cream and for eating a cake. We provide data from four processing studies, which show that, contrary to arguments coming from the theoretical literature, free choice inferences are different from scalar implicatures.
Article
The semantics of association with focus and the pragmatic conditions governing the appropriateness of focus in discourse are usually taken to depend on focus alternatives. According to a common view, these alternatives are generated by a permissive process. This permissive view has been challenged by Michael Wagner, who has noted that certain alternatives are systematically excluded from consideration. Wagner describes a more restrictive view, on which only contrastive alternatives are relevant for association with focus and for the appropriateness of focus in discourse. I use recent work on the role of contradiction to show that the standard, permissive view derives the same results as the contrast-based view for the basic cases. These basic cases involve a contradiction that prevents us from using them to distinguish the two approaches. I show that when this contradiction is eliminated, evidence of non-contrastive alternatives emerges, supporting the permissive standard view over the restrictive contrast-based one.
Article
A sentence such as ‘John has four children’ can be interpreted as meaning either that John has at least four children (weak reading), or that John has exactly four children (strong reading). On the classical neo-Gricean view, this ambiguity is similar to the ambiguity generated by scalar terms such as ‘some’, for which both a weak reading (i.e., some or all) and a strong reading (i.e., some but not all) are available. On this view, the strong reading of numerals, just like the strong reading of ‘some’, is derived as a scalar implicature, taking the weak reading as semantically given. However, more recent studies have found substantial differences between the two phenomena. For instance, the syntactic distribution of the strong reading is not the same in both cases, and young children's performance in certain specific tasks has suggested that they acquire the strong reading of numerals before they acquire the strong reading of standard scalar items. Using a dual task approach, we provide evidence for another type of difference between numerals and standard scalar items. We show that tapping memory resources has opposite effects on bare numerals and on ‘some’. Under high cognitive load, participants report fewer implicatures for sentences involving ‘some’ (compared to low cognitive load conditions), but they report more strong readings for sentences involving bare numerals. We discuss the implications of this result for current theoretical debates regarding the semantics and pragmatics of numerals.
Article
This article presents a unified theory of polarity-sensitive items (PSIs) based on the notion of domain widening. PSIs include negative polarity items (like Italian mai ‘ever’), universal free choice items (like Italian qualunque ‘any/whatever’), and existential free choice items (like Italian uno qualunque ‘a whatever’). The proposal is based on a ‘recursive’, grammatically driven approach to scalar implicatures that breaks with the traditional view that scalar implicatures arise via post-grammatical pragmatic processes. The main claim is that scalar items optionally activate scalar alternatives that, when activated, are then recursively factored into meaning via an alternative-sensitive operator similar to only. PSIs obligatorily activate domain alternatives that are factored into meaning in much the same way.
Article
Recently, several authors have argued that Gricean theories of scalar implicature computation are inadequate, and, as an alternative, one author has proposed a grammatical system for computing scalar implicatures. The present paper provides arguments, counter to the claims of these authors, that Gricean reasoning can account for the implicatures of certain complex sentences and does not generate undesirable implicatures for others. Moreover, it is shown that a putative advantage of grammatical scalar implicature computation, that it informs a theory of intervention in negative polarity item licensing, is spurious. These arguments, plus general conceptual advantages of Gricean theory, lead to the conclusion that scalar implicature computation is not carried out in the grammar.
Article
This article reviews some cases in which the collaboration of theoretical pragmaticians and psychologists of language has been most fruitful for all parties. Linguists have benefited from experimental data confirming the psychological validity of their observations and providing theory-critical evidence in cases beyond the reach of reflective intuition, whereas psychologists have benefited from having a wealth of linguistic phenomena to study as well as multiple theories furnished by semantics and pragmatics. Focusing on the much-discussed question of whether scalar implicature is a default inference, we aim to showcase the benefits of the experimental approach to pragmatics.
Article
For the first time a uniform compositional derivation is given for quantified sentences containing exceptive constructions. The semantics of exceptives is primarily one of subtraction from the domain of a quantifier. The crucial semantic difference between the highly grammaticized but-phrases and free exceptives is that the former have the Uniqueness Condition as part of their lexical meaning whereas the latter are mere set subtractors. Several empirical differences between the two types of exceptives are shown to follow from this basic lexical difference.
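The domain-subtraction idea can be illustrated schematically (the notation here is an illustration, not the paper's own formalism):

% 'Every student but John came': the but-phrase subtracts {John} from the
% domain of the quantifier.
\[ \forall x\, [\, x \in (\mathrm{student} \setminus \{\mathrm{john}\}) \;\rightarrow\; \mathrm{came}(x)\, ] \]
% On this view, the Uniqueness Condition of but-phrases additionally requires
% that {John} be the unique smallest set whose removal makes the universal
% claim true, so it follows that John did not come.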
Article
The notion of measurement plays a central role in human cognition. We measure people’s height, the weight of physical objects, the length of stretches of time, or the size of various collections of individuals. Measurements of height, weight, and the like are commonly thought of as mappings between objects and dense scales, while measurements of collections of individuals, as implemented for instance in counting, are assumed to involve discrete scales. It is also commonly assumed that natural language makes use of both types of scales and subsequently distinguishes between two types of measurements. This paper argues against the latter assumption. It argues that natural language semantics treats all measurements uniformly as mappings from objects (individuals or collections of individuals) to dense scales, hence the Universal Density of Measurement (UDM). If the arguments are successful, there are a variety of consequences for semantics and pragmatics, and more generally for the place of the linguistic system within an overall architecture of cognition.
Article
We present an argument for revising the theory of alternatives for Scalar Implicatures and for Association with Focus. We argue that in both cases the alternatives are determined in the same way, as a contextual restriction of the focus value of the sentence, which, in turn, is defined in structure-sensitive terms. We provide evidence that contextual restriction is subject to a constraint that prevents it from discriminating between alternatives when they stand in a particular logical relationship with the assertion or the prejacent, a relationship that we refer to as symmetry. Due to this constraint on contextual restriction, discriminating between alternatives in cases of symmetry becomes the task of focus values. This conclusion is incompatible with standard type-theoretic definitions of focus values, motivating our structure-sensitive definition instead. Keywords: Alternatives, Scalar implicature, Focus semantics, Contextual restriction, Relevance
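The symmetry problem that motivates this constraint can be illustrated with the familiar 'some' case (a schematic illustration, not necessarily the authors' own example):

% If both 'all' and 'some but not all' counted as alternatives to 'some',
% negating both would contradict the assertion, so no implicature could be drawn:
\[ \text{some} \;\land\; \lnot\,\text{all} \;\land\; \lnot(\text{some} \land \lnot\,\text{all}) \;=\; \bot \]
% A theory of alternatives must therefore break the symmetry, keeping 'all'
% while excluding 'some but not all'.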
Article
The existence of “local implicatures” has been the topic of much recent debate. The purpose of this paper is to contribute to this debate by asking what we can learn from three puzzles, namely, the cancellation of such implicatures by ‘or both’, their behavior in the complement clauses of negative factive verbs such as ‘sorry’, and their behavior in root and embedded questions. Two basic approaches to local implicatures have been advanced: a fully pragmatic account in which local implicatures result from conventional Gricean principles and a semantic account according to which the generation of implicatures is interwoven with compositional, grammatical mechanisms. We argue that the lesson to be learned from our three case studies is that some kind of approach along the latter, grammatical line is necessary to account for the data. Keywords: Pragmatics, Scalar implicatures, Local implicatures, Strawson entailment
Article
Scalar implicatures depend on alternatives in order to avoid the symmetry problem. I argue for a structure-sensitive characterization of these alternatives: the alternatives for a structure are all those structures that are at most as complex as the original one. There have been claims in the literature that complexity is irrelevant for implicatures and that the relevant condition is the semantic notion of monotonicity. I provide new data that pose a challenge to the use of monotonicity and that support the structure-sensitive definition. I show that what appeared to be a problem for the complexity approach is overcome once an appropriate notion of complexity is adopted, and that upon closer inspection, the argument in favor of monotonicity turns out to be an argument against it and in favor of the complexity approach.
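Schematically, the complexity condition can be rendered as follows (an illustrative sketch of the structure-sensitive view):

% Alternatives are obtained by deletions and lexical substitutions, so they are
% at most as complex as the original structure: 'all' counts as an alternative
% to 'some', while the strictly more complex 'some but not all' does not,
% which is one way of breaking the symmetry between the two.
\[ \text{all} \in \mathrm{Alt}(\text{some}), \qquad \text{some but not all} \notin \mathrm{Alt}(\text{some}) \]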
Article
Two eye-tracking experiments investigated processing of VP-NP attachment ambiguities. Experiment 1 tested sentences in which there was an initial bias toward VP attachment. Readers experienced more difficulty when semantic information disambiguated the sentences to NP attachment than when it disambiguated them to VP attachment or when it was consistent with either analysis. Experiment 2 tested sentences in which there was no initial bias toward either VP or NP attachment. Readers experienced more difficulty when semantic information disambiguated the sentences to NP attachment or VP attachment than when it was consistent with either analysis. We argue that these results challenge theories that assume a competition mechanism, such as constraint-based lexicalist accounts (e.g., MacDonald, Pearlmutter, & Seidenberg, 1994; McRae, Spivey-Knowlton, & Seidenberg, 1998; Spivey-Knowlton & Sedivy, 1995) and fixed-choice two-stage models (e.g., Frazier, 1987). We interpret the results in terms of the unrestricted race model (cf. Traxler, Pickering, & Clifton, 1998).
Article
Eye movements were recorded as subjects read sentences containing temporary structural ambiguities. In accord with the garden-path theory of sentence comprehension, shorter reading times were found for sentences conforming to certain independently motivated parsing strategies (late closure and minimal attachment) than for comparable sentences which violate these strategies. Further, longer fixation durations were associated with the very first fixation in the region of the sentence which disambiguated the sentence, suggesting that the human sentence-parsing mechanism operates in a rather systematic fashion, immediately computing the structural consequences of fixated material for the analysis of preceding material. The pattern of regressive eye movements did not conform to the view that the parsing mechanism automatically returns to the beginning of the sentence to revise an incorrect analysis of linguistic material nor did it support the view that the parsing mechanism systematically backtracks through the sentence until the source of the erroneous analysis is located. Rather, the pattern of regressions indicated that the parsing mechanism typically engages in selective reanalysis, exploiting whatever information it has available about the type of error it has committed to guide its reanalysis attempts. Finally, it is emphasized that an understanding of the parser's revision procedures is essential to an explanation of why certain linguistic structures cannot be successfully parsed by humans.
Article
The Gricean theory of conversational implicature has always been plagued by data suggesting that what would seem to be conversational inferences may occur within the scope of operators like ‘believe’, which for bona fide implicatures should be an impossibility. Concentrating my attention on scalar implicatures, I argue that, for the most part, such observations can be accounted for within a Gricean framework, and without resorting to local pragmatic inferences of any kind. However, there remains a small class of marked cases that cannot be treated as conversational implicatures, and they do require a local mode of pragmatic interpretation.
Article
I provide an overview of four current theories of scalar implicature: the pragmatic (or Gricean) theory, the lexical theory, a combined pragmatic + lexical theory, and the grammatical theory. The empirical focus is on global and local implicatures, but also on intermediate implicatures. I argue that the grammatical theory is conceptually less well motivated than even the pragmatic + lexical theory, and that the grammatical theory therefore requires strong empirical support. I then focus on a novel empirical phenomenon – intermediate implicatures – which provides empirical support for the grammatical theory. I conclude that it seems necessary to adopt the grammatical theory.