TTR for Natural Language Semantics*

Robin Cooper (University of Gothenburg, cooper@ling.gu.se)
Jonathan Ginzburg (Université Paris-Diderot and LabEx-EFL, Sorbonne Paris-Cité, yonatan.ginzburg@univ-paris-diderot.fr)

* This work was supported in part by Vetenskapsrådet project 2009-1569, Semantic analysis of interaction and coordination in dialogue (SAICD), by the Lab(oratory of)Ex(cellence)-EFL (ANR/CGI), and by the Disfluency, Exclamations, and Laughter in Dialogue (DUEL) project within the projets franco-allemand en sciences humaines et sociales funded by the ANR and the DFG. We are grateful for comments to the participants in three courses we taught in which we presented a version of this material: Type Theory with Records for Natural Language Semantics, NASSLLI, Austin, Texas, 18th–22nd June, 2012; An introduction to semantics using type theory with records, ESSLLI, Opole, Poland, 13th–17th Aug, 2012; and Semantics using type theory with records, Gothenburg, 10th–12th June, 2013. We are grateful to Liz Coppock for comments on an earlier draft of this chapter. Finally, we would like to thank Chris Fox for his very penetrating and careful comments on the first submitted draft.

A draft chapter for the Wiley-Blackwell Handbook of Contemporary Semantics, second edition, edited by Shalom Lappin and Chris Fox. This draft formatted on 3rd April 2015.
1 Introduction
Given the state of the art, a simple actual conversation such as (1)[2] still constitutes a significant challenge to formal grammar of just about any theoretical flavour.
(1)
John: which one do you think it is?
      Try F1 F1 again and we'll get
Sarah: Shift and F1?
Sue: It's, No.
John: No, just F1 F1
Sue: It isn't that.
John: F1
      Right and that tells us
Sue: It's Shift F7

[The diagram accompanying (1) in the original annotates these turns with the phenomena they exemplify: disfluencies, non-sentential utterances, self-answering, partial comprehension, multilogue.]
As we note in the diagram above, this little dialogue involves a variety of theoretically difficult phenomena: it involves three rather than two participants, and is hence a multi(-party dia)logue; it features disfluencies, a variety of types of non-sentential utterances, partial comprehension, and self-answering.[2]

[2] The conversation occurs in the block G4K of the British National Corpus (BNC). Henceforth, the notation '(BNC, xyz)' refers to the block xyz from the BNC.
Making sense of all these phenomena in a systematic way is a challenge undertaken in the TTR-based dialogue framework KoS (Ginzburg, 2012). While we will not have the space to develop a detailed analysis of this example, by the end of the paper we will have sketched a toolbox on the basis of which disfluencies, non-sentential utterances, partial comprehension, self-answering, and multilogue can be explicated. A key ingredient to this is a theory of the structure and evolution of dialogue gameboards (DGBs), the publicised component of the conversationalists' information states. This, in turn, presupposes means of developing both semantic and grammatical ontologies to explicate notions such as propositions, questions, and utterances.
There are, nonetheless, a number of well-established paradigms for doing just that, and the obvious question to ask is: why develop a distinct framework? We will illustrate throughout the paper intrinsic problems for frameworks such as possible worlds semantics and typed-feature-structure (TFS)-based approaches:
• Semantic ontology: Why not a possible worlds-based approach? There are well-known problems for this strategy that revolve around its coarseness of grain. These are often ignored (folk assumption: '. . . the attitudes are difficult and primarily a philosophical problem . . . '). Whether or not this is true, we point to the problems posed by negation, which cannot be brushed off so easily.
• Syntax-semantics interface: Why is a TFS-based approach to a syntax-semantics interface, as in frameworks such as Head-driven Phrase Structure Grammar (HPSG) (Sag et al. (2003)) and in Sign-based Construction Grammar (Michaelis (2009)), insufficient? Here again, there are well-known problems (lack of proper binding, functions) and these can be solved in standard λ-calculus-based approaches. We will point to issues that are difficult for the latter, such as clarification interaction.
Our claim is that TTR enables a uniform theory of grammar, semantics, and interaction that can tackle such problems, while allowing one to maintain past insights (emanating from Montague Semantics and Discourse Representation Theory) and also, we think, to accommodate future directions (e.g. probabilistic semantics).
This article is structured as follows: the basics of TTR are described in section 2. Subsequently, in sections 3–5 we use this to sketch fundamental notions of grammar, semantic ontology, and dialogical interaction. These are eventually illustrated in more detail in sections 6–8, which deal with metacommunicative interaction, negation, quantification, and, more briefly, non-sentential utterances and disfluencies.
2 A theory of types and situations

2.1 Type theory and perception
In classical model theoretic semantics (Montague, 1973, 1974) there is an underlying type theory which presents an ontology of basic classes of objects such as, in Montague's type theory, entities, truth values, possible worlds and total functions between these objects. Here we will make use of a rich type theory inspired by the work of Martin-Löf (1984) and much subsequent work on this kind of type theory in computer science. For a recent example relating to natural language see Luo (2011). Ranta (this volume) gives important background on Martin-Löf's type theory.
In a rich type theory of the kind we are considering there are not only types for basic ontological categories but also types corresponding to categories of objects such as Tree or types of situations such as Hugging of a dog by a boy. A fundamental notion of this kind of type theory is that of a judgement that an object (or situation) a is of type T, in symbols, a : T. In our view judgements are involved in perception. In perceiving an object we assign it a type. The type corresponds to what Gibson (1986) (and following him in their theory of situation semantics, Barwise & Perry, 1983) would call an invariance. In order to perceive objects as being of certain types, agents must be attuned to this invariance or type. We take this to mean that the type corresponds to a certain pattern of neural activation in the agent's brain. Thus the types to which a human is attuned may be quite different from those to which an insect is attuned. A bee landing on a tree does not, presumably, perceive the tree in terms of the same type Tree that we are attuned to.
2.2 TTR: Type theory with records
The particular type theory we will discuss here is TTR, which is a particular variant of Type Theory with Records. The most recent published reference which gives details is Cooper (2012). An earlier treatment is given in Cooper (2005b), and Cooper (2005c) discusses its relation to various semantic theories. Here we will give a less detailed formal treatment of the type theory than in the first two of these references. We start by characterizing a system of basic types as a pair consisting of a non-empty set of types, Type, and a function, A, whose domain is Type and which assigns to each type in Type a (possibly empty) set which does not overlap with Type. We say that a is of type T (in Type), a : T, according to ⟨Type, A⟩ just in case a ∈ A(T). In general we will think of basic types as corresponding to basic ontological categories. The basic type we will use in this section is Ind, for individuals.
We will use complex types for types of situations, inspired by the notion of situation in Barwise & Perry (1983). The simplest complex type of situation is constructed from a predicate together with some appropriate arguments to the predicate. Consider, for example, the type of situation where a boy called Bill (whom we will represent by b) hugs a dog called Dinah (represented by d). The type of situation in which Bill hugs Dinah will be constructed from the predicate 'hug' together with the arguments b and d. This type is represented in symbols as hug(b,d). Here we are treating 'hug' as a predicate which has arity ⟨Ind, Ind⟩, that is, it requires two individuals as arguments. Sometimes we may allow predicates to have more than one arity, that is, they may allow different configurations of arguments. In this case we say that the predicate is polymorphic.[3] Types like this which are constructed with predicates we will call ptypes. A system of types containing ptypes, that is, a system of complex types, will be an extension of a system of basic types ⟨BType, A⟩ to ⟨Type, BType, PType, ⟨A, F⟩⟩, where PType is a set of ptypes constructed from a particular set of predicates and the arities associated with them, by combining them with all possible arguments of appropriate types according to the type system, and F is a function whose domain is PType which assigns a (possibly empty) set of situations to each ptype. The set Type includes both BType and PType.

[3] This introduces one kind of polymorphism into the system. We will also introduce some polymorphism in the typing.
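To fix ideas, here is a minimal Python sketch of a system of basic types and ptypes. The encoding, and names such as BasicType, PType and of_type, are our own illustrative choices, not part of the TTR formalism: the assignment A supplies witness sets for basic types, F supplies witness sets (sets of situations) for ptypes, and the judgement a : T is simply membership in the relevant set.

```python
# Illustrative sketch only: basic types, ptypes and the judgement a : T.
# The class and function names are hypothetical, chosen for this example.

class BasicType:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class PType:
    """A ptype: a predicate applied to arguments of appropriate types."""
    def __init__(self, pred, args):
        self.pred, self.args = pred, tuple(args)
    def __repr__(self):
        return f"{self.pred}({', '.join(map(str, self.args))})"
    def __eq__(self, other):
        return isinstance(other, PType) and (self.pred, self.args) == (other.pred, other.args)
    def __hash__(self):
        return hash((self.pred, self.args))

Ind = BasicType("Ind")

# A assigns witness sets to basic types; F assigns sets of situations to ptypes.
A = {Ind: {"b", "d"}}                       # b = Bill, d = Dinah
F = {PType("hug", ["b", "d"]): {"s3"}}      # s3: a situation in which b hugs d

def of_type(a, T):
    """The judgement a : T relative to the assignments A and F."""
    if isinstance(T, BasicType):
        return a in A.get(T, set())
    if isinstance(T, PType):
        return a in F.get(T, set())
    return False

print(of_type("b", Ind))                        # True
print(of_type("s3", PType("hug", ["b", "d"])))  # True
print(of_type("s3", PType("hug", ["d", "b"])))  # False
```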
This gives us a system of types which will allow us types of situations where particular individuals are related to each other. However, we want to be able to characterize more general types of situation than this, for example, the type of situations where some boy hugs a dog, that is, the type of any "boy hugs dog" situation. There are a number of ways to characterize such more general types in type theory. In TTR we use record types. The type of situation where a boy hugs a dog could be the record type in (2).
(2)
[ x    : Ind
  cboy : boy(x)
  y    : Ind
  cdog : dog(y)
  e    : hug(x,y) ]
This record type consists of five fields, each of which consists of a label (such as 'x' or 'cdog') and a type (such as Ind or 'dog(y)'). Each field is an ordered pair of a label and a type, and a record type is a set of such fields each of which has a distinct label. We use labels like 'x' and 'y' for fields introducing individuals and labels like 'c_pred' for fields introducing types which are ptypes with the predicate pred, representing constraints or conditions (hence 'c') on objects in other fields. We will often use the label 'e' for the type representing the main event, such as hugging.
A record of this type is a set of fields (i.e. order is unimportant) with labels and objects such that no two fields have the same label, there is a field with each of the labels in the record type, and the object in the field is of the type in the corresponding field in the record type. Note that there can be more fields in the record with labels not mentioned in the record type. A record of the type in (2), that is, a witness for this type, will be one of the form in (3).
(3)
[ x    = a
  cboy = s1
  y    = b
  cdog = s2
  e    = s3
  ...      ]

where:
a : Ind
s1 : boy(a)
b : Ind
s2 : dog(b)
s3 : hug(a,b)
If the type (2) is non-empty there will be a boy and a dog such that the boy hugs the dog. Thus (2) could be used to represent the content of a boy hugs a dog. That is, we use it to play the role of a proposition in other theories. (Later we will introduce a more complex notion of proposition which builds on such types.)
Let r be a record of the form (3). We will refer to the objects in the fields using the notation r.ℓ where ℓ is some label in the record. Thus r.x will be a, r.cboy will be s1, and so on. We will allow records to be objects in fields. Thus we can have records within records as in (4).
(4)
[ f = [ f = [ ff = a
              gg = b ]
        g = c ]
  g = [ h = [ g = a
              h = d ] ] ]
We can extend the dot notation above to refer to paths in a record, that is, sequences of labels which will lead from the top of a record down to a value within the record. Let r be (4). Then we can use paths to denote various parts of the record as in (5).
(5)
a. r.f = [ f = [ ff = a
                 gg = b ]
           g = c ]
b. r.g.h = [ g = a
             h = d ]
c. r.f.f.ff = a
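As a rough illustration of record types, records, witnessing and the dot notation, the following Python sketch (our own encoding; helper names such as is_witness and path are hypothetical) treats a record as a dictionary, a record type as a dictionary from labels to types, and a dependent field as a function from the record to a type.

```python
# Illustrative sketch: records as dicts, record types as dicts from labels
# to types; type witnesses are given here by explicit sets.

witnesses = {
    "Ind": {"a", "b"},
    "boy(a)": {"s1"},
    "dog(b)": {"s2"},
    "hug(a,b)": {"s3"},
}

# The record type (2); dependent fields are functions of the record itself.
boy_hugs_dog = {
    "x": "Ind",
    "cboy": lambda r: f"boy({r['x']})",
    "y": "Ind",
    "cdog": lambda r: f"dog({r['y']})",
    "e": lambda r: f"hug({r['x']},{r['y']})",
}

# A record of that type, as in (3); extra fields are allowed.
r = {"x": "a", "cboy": "s1", "y": "b", "cdog": "s2", "e": "s3", "extra": 42}

def is_witness(rec, rec_type):
    """rec : rec_type iff every field of the type is matched by a field of
    the record whose value belongs to the (possibly dependent) type."""
    for label, T in rec_type.items():
        if label not in rec:
            return False
        T = T(rec) if callable(T) else T      # resolve dependencies on rec
        if rec[label] not in witnesses.get(T, set()):
            return False
    return True

def path(rec, dotted):
    """Dot notation r.l1.l2... for records within records."""
    for label in dotted.split("."):
        rec = rec[label]
    return rec

print(is_witness(r, boy_hugs_dog))              # True
print(path({"f": {"g": {"h": "c"}}}, "f.g.h"))  # 'c'
```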
Technically, we have cheated a little in the presentation of record types. 'boy(x)', 'dog(y)' and 'hug(x,y)' are not technically ptypes since 'x' and 'y' are labels, not individuals as required by the arities of these predicates. What we mean by this notation is the ptype we can construct by substituting whatever individuals occur in the 'x' and 'y' fields of the record we are checking to see whether it belongs to the type. Thus the ptypes will be different depending on which record you are checking. The official notation for this record type makes this more explicit by introducing functions from individuals to ptypes and pairing them with a list of path names indicating where in the record one should look for the arguments to the functions, as in (6).[4]
(6)
[ x    : Ind
  cboy : ⟨λv:Ind . boy(v), ⟨x⟩⟩
  y    : Ind
  cdog : ⟨λv:Ind . dog(v), ⟨y⟩⟩
  e    : ⟨λv1:Ind λv2:Ind . hug(v1,v2), ⟨x,y⟩⟩ ]

[4] Here we use the λ-notation for functions, which is discussed in Section 2.4.
There is good reason to use this more complex notation when we deal with more complex record types which have record types embedded within them. However, for the most part we will use the simpler notation as it is easier to read. Functions from objects to types, dependent types, will play an important role in what we have to say below.
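The official notation can also be mimicked directly: a dependent field is a pair of a function and a list of paths telling us where its arguments live in the record. A small sketch, under the same illustrative assumptions as before (the helper names are our own):

```python
# Illustrative sketch of the "official" notation for dependent fields:
# a field's type is a pair (f, paths) where f maps objects to types and
# paths say where in the record to find f's arguments.

def follow(rec, dotted):
    for label in dotted.split("."):
        rec = rec[label]
    return rec

def resolve(field, rec):
    """Turn a dependent field (f, paths) into a plain type, relative to rec."""
    if isinstance(field, tuple):
        f, paths = field
        return f(*[follow(rec, p) for p in paths])
    return field  # already a non-dependent type

# As in (6): the 'cboy' field depends on whatever sits in the 'x' field.
cboy_field = (lambda v: f"boy({v})", ["x"])

rec = {"x": "a", "cboy": "s1"}
print(resolve(cboy_field, rec))   # boy(a)
```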
In record types we will frequently make use of manifest fields.[5] A manifest field ℓ=a : T is a convenient notation for ℓ : T_a where T_a is a singleton type whose only witness is a. Singleton types are introduced by the clauses in (7).

(7)
a. If a : T then T_a is a type.
b. b : T_a iff b = a

[5] This notion was introduced in Coquand et al. (2004).
2.3 Subtyping

The notion of subtype in TTR plays a central inferential role within the system. T1 is a subtype of T2 (T1 ⊑ T2) just in case for all assignments to basic types it is the case that if a : T1 then a : T2. For more discussion of this notion see Cooper (2012).
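For record types a convenient sufficient condition for T1 ⊑ T2 is that T1 constrains every label that T2 constrains, with a type that is itself a subtype. The following rough check (our own simplification; the official definition quantifies over all assignments to basic types) implements only that structural condition.

```python
# Rough structural subtype check for record types represented as dicts.
# Officially T1 ⊑ T2 means every witness of T1 is a witness of T2 under
# every assignment to the basic types; this only checks field inclusion.

def subtype(T1, T2):
    if T1 == T2:
        return True
    if isinstance(T1, dict) and isinstance(T2, dict):
        # T1 must constrain at least every label that T2 constrains.
        return all(l in T1 and subtype(T1[l], T2[l]) for l in T2)
    return False

T_boy = {"x": "Ind", "cboy": "boy(x)"}
T_ind = {"x": "Ind"}
print(subtype(T_boy, T_ind))   # True: every boy-record is an x-record
print(subtype(T_ind, T_boy))   # False
```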
2.4 Function types

We introduce function types as in (8).

(8)
a. If T1 and T2 are types, then so are (T1→T2) and (T1→c T2)
b. f : (T1→T2) iff f is a function with domain {a | a : T1} and range included in {a | a : T2}
c. f : (T1→c T2) iff f : (T1→T2) and there is some a : T2 such that for any b : T1, f(b) = a
This means that f is a total function from objects of type T1 to objects of type T2. In (8c) f is required to be a constant function. A function is associated with a graph, that is, a set of ordered pairs, as in the classical set theoretical model of a function. As in set theory we let functions be identified by their graphs, that is, for functions f1, f2, if graph(f1) = graph(f2) then f1 = f2. We also require that for each graph whose domain (i.e. left projection) is the set of witnesses of a type and whose range (i.e. right projection) is included in the set of witnesses of another type there is a function which has this graph. This makes the existence of a function of type (T1→T2) correspond to a universal quantification, "for everything of type T1 there is something of type T2". Finally we stipulate that the types (T1→T2) and T1 are incompatible. That is, you cannot have something which belongs to a function type and to the type which characterizes the domain of the function. As a consequence, functions cannot apply to themselves. This is one way of avoiding paradoxes which can arise when we allow functions to apply to themselves.
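Since functions are individuated by their graphs, the witnessing conditions in (8) can be checked directly on a graph, here represented as a Python dictionary; the constant-function type (T1 →c T2) additionally requires all values to coincide. This is an illustrative sketch with hypothetical helper names and made-up witness sets.

```python
# Illustrative check of f : (T1 -> T2) for a function given by its graph
# (a dict from arguments to values), with types given by witness sets.

witnesses = {"Ind": {"a", "b"}, "Run": {"r_a", "r_b"}}

def is_of_function_type(graph, T1, T2, constant=False):
    """f : (T1 -> T2): total on the witnesses of T1, with values in T2.
    With constant=True this checks the constant-function type (T1 ->c T2)."""
    dom_ok = set(graph) == witnesses[T1]
    rng_ok = all(v in witnesses[T2] for v in graph.values())
    const_ok = (not constant) or len(set(graph.values())) <= 1
    return dom_ok and rng_ok and const_ok

f = {"a": "r_a", "b": "r_b"}
g = {"a": "r_a", "b": "r_a"}
print(is_of_function_type(f, "Ind", "Run"))                  # True
print(is_of_function_type(f, "Ind", "Run", constant=True))   # False
print(is_of_function_type(g, "Ind", "Run", constant=True))   # True
```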
We introduce a notation for functions which is borrowed from the λ-calculus as used by Montague (1973). We let functions be identified by sets of ordered pairs as in the classical set theoretic construction of functions. Let O[v] be the notation for some object of our type theory which uses the variable v and let T be a type. Then the function λv : T . O[v] is to be the function identified by {⟨v, O[v]⟩ | v : T}. For example, the function λv : Ind . run(v) is identified by the set of ordered pairs {⟨v, run(v)⟩ | v : Ind}.
Note that if f is the function λv : Ind . run(v) and a : Ind then f(a) (the result of applying f to a) is 'run(a)'. Our definition of function-argument application guarantees what is called β-equivalence in the λ-calculus. We allow both function types and dependent record types and we allow dependent record types to be arguments to functions. We have to be careful when considering what the result of applying a function to a dependent record type should be. Consider the simple example in (9).

(9) λv0 : RecType ([c0 : v0])

What should be the result of applying this function to the record type in (10)?
(10) [ x  : Ind
       c1 : ⟨λv1:Ind . dog(v1), ⟨x⟩⟩ ]

Given normal assumptions about function application the result would be (11).

(11) [ c0 : [ x  : Ind
              c1 : ⟨λv1:Ind . dog(v1), ⟨x⟩⟩ ] ]   (incorrect!)

But this would be incorrect. In fact it is not a well-formed record type since 'x' is not a path in it. Instead the result should be (12).

(12) [ c0 : [ x  : Ind
              c1 : ⟨λv1:Ind . dog(v1), ⟨c0.x⟩⟩ ] ]
Here the path from the top of the record type is specified. However, in the abbreviatory notation we write just 'x' when the label is used as an argument and interpret this as the path from the top of the record type to the field labelled 'x' in the local record type. Thus we will write (13)

(13) [ x  : Ind
       c1 : dog(x) ]

(where the 'x' in 'dog(x)' signifies the path consisting of just the single label 'x') and (14)

(14) [ c0 : [ x  : Ind
              c1 : dog(x) ] ]

(where the 'x' in 'dog(x)' signifies the path from the top of the record type down to 'x' in the local record type, that is, 'c0.x').[6]
Note that this adjustment of paths is only required when a record type is being substituted into a position that lies on a path within a resulting record type. It will not, for example, apply in a case where a record type is to be substituted for an argument to a predicate, such as when applying the function (15)

(15) λv0 : RecType ([c0 : appear(v0)])

to (16)

(16) [ x  : Ind
       c1 : ⟨λv:Ind . dog(v), ⟨x⟩⟩
       c2 : ⟨λv:Ind . approach(v), ⟨x⟩⟩ ]

where the position of v0 is in an "intensional context", that is, as the argument to a predicate, and there is no path to this position in the record type resulting from applying the function. Here the result of the application is (17)

(17) [ c0 : appear([ x  : Ind
                     c1 : ⟨λv:Ind . dog(v), ⟨x⟩⟩
                     c2 : ⟨λv:Ind . approach(v), ⟨x⟩⟩ ]) ]

with no adjustment necessary to the paths representing the dependencies.[7] (Note that 'c0.x' is not a path in this record type.)
Suppose that we wish to represent a type which requires that there is some dog such that it appears to be approaching (that is, a de re reading). In the abbreviatory notation we might be tempted to write (18)

(18) [ x  : Ind
       c1 : dog(x)
       c0 : appear([c2 : approach(x)]) ]   (incorrect!)
corresponding to (19).

(19) [ x  : Ind
       c1 : ⟨λv:Ind . dog(v), ⟨x⟩⟩
       c0 : appear([c2 : ⟨λv:Ind . approach(v), ⟨x⟩⟩]) ]   (incorrect!)

This is, however, incorrect since it refers to a path 'x' in the type which is the argument to 'appear' which does not exist. Instead we need to refer to the path 'x' in the record type containing the field labelled 'c0', as in (20).

(20) [ x  : Ind
       c1 : ⟨λv:Ind . dog(v), ⟨x⟩⟩
       c0 : ⟨λv:Ind . appear([c2 : approach(v)]), ⟨x⟩⟩ ]

In the abbreviatory notation we will use '⇑' to indicate that the path referred to is in the "next higher" record type,[8] as in (21).

(21) [ x  : Ind
       c1 : dog(x)
       c0 : appear([c2 : approach(⇑x)]) ]

[6] This convention of representing the path from the top of the record type to the "local" field by the final label on the path is new since Cooper (2012).
[7] This record corresponds to the interpretation of it appears that a dog is approaching.
[8] This notation is new since Cooper (2012).
2.5 Complex types corresponding to propositional connectives

We introduce complex types corresponding to propositional connectives by the clauses in (22).

(22)
a. If T1 and T2 are types then so are (T1∧T2), (T1∨T2) and ¬T1
b. a : (T1∧T2) iff a : T1 and a : T2
c. a : (T1∨T2) iff a : T1 or a : T2
d. a : ¬T1 iff there is some type T2 which is incompatible with T1 such that a : T2
T1 is incompatible with T2 just in case there is no assignment to basic types such that there is some a such that a : T1 and a : T2. That is, it is impossible for any object to belong to both types. This is a non-classical treatment of negation which we will discuss in Section 7.1.

Occasionally we will need types which are possibly infinite joins of types in order to characterize certain function types. We will represent these using the join symbol ⋁ with a subscript. Thus if T1 and T2 are types, then (23) is a type.

(23) ⋁_{X⊑T1} (X→T2)

Witnessing conditions for (23) are defined by (24).

(24) f : ⋁_{X⊑T1} (X→T2) iff f : (T→T2) for some type T such that T⊑T1.
As we have record types in our system we will be able to form meets, joins and negations of these types just like any other. When we form the meet of two record types, T1∧T2, there is always a record type T3 which is equivalent to T1∧T2 in the sense that no matter what we assign to our basic types anything which is of type T1∧T2 will be of type T3 and vice versa. T3 is defined using the merge operator, written '∧.'. Thus, T1 ∧. T2 is the merge of the two types T1 and T2. If at least one of the two types is not a record type it is identical with the meet T1∧T2. The basic idea of merge for record types is illustrated by the examples in (25).

(25)
a. [f : T1] ∧. [g : T2] = [ f : T1
                            g : T2 ]
b. [f : T1] ∧. [f : T2] = [f : T1 ∧. T2]

(For a full definition which makes clear what the result is of merging any two arbitrary types, see Cooper, 2012.) Merge corresponds to unification in feature based systems such as HPSG.

In addition to merge we also introduce asymmetric merge, T1 ∧. T2. This is defined like ordinary merge except that in the case where one of the types is not a record type T1 ∧. T2 = T2. This notion (which is discussed in detail in Cooper, in prep) is related to that of priority unification (Shieber, 1986).
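The basic behaviour of merge and asymmetric merge can be illustrated with dictionaries standing in for record types; the recursion on shared labels mirrors (25), and at non-record leaves ordinary merge forms a meet while asymmetric merge lets the second type win. This is a sketch only (the full definition is in Cooper, 2012); the Python encoding is our own.

```python
# Illustrative sketch of merge and asymmetric merge for record types
# represented as dicts (non-record types as strings, meets as tuples).

def merge(T1, T2):
    """Symmetric merge: recurse on shared labels, form a meet otherwise."""
    if isinstance(T1, dict) and isinstance(T2, dict):
        out = dict(T1)
        for label, T in T2.items():
            out[label] = merge(out[label], T) if label in out else T
        return out
    return T1 if T1 == T2 else ("meet", T1, T2)   # stands in for T1 ∧ T2

def asym_merge(T1, T2):
    """Asymmetric merge: like merge, but T2 takes priority at the leaves."""
    if isinstance(T1, dict) and isinstance(T2, dict):
        out = dict(T1)
        for label, T in T2.items():
            out[label] = asym_merge(out[label], T) if label in out else T
        return out
    return T2

print(merge({"f": "T1"}, {"g": "T2"}))        # {'f': 'T1', 'g': 'T2'}   -- (25a)
print(merge({"f": "T1"}, {"f": "T2"}))        # {'f': ('meet', 'T1', 'T2')} -- (25b)
print(asym_merge({"f": "T1"}, {"f": "T2"}))   # {'f': 'T2'}
```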
2.6 Set and list types

We introduce set and list types as in (26).

(26)
a. If T is a type then {T} and [T] are types
b. A : {T} just in case A is a set and for any a∈A, a : T
c. L : [T] just in case L is a list and any member, a, of L is such that a : T

We will also introduce a type Poset(T) which can be regarded as (27)

(27) [ set : {T}
       rel : { [ left  : T
                 right : T ] }
       cpo : po(rel,set) ]

where a : po(R, S) iff a = ⟨R, S⟩ and R is a partial order on S, that is, R is a set of pairs of members of S (coded as records with 'left' and 'right' fields as above) and R is reflexive or irreflexive, antisymmetric and transitive.

If a : T, P : Poset(T) and a ∉ P.set, then a⊕P : Poset(T) where a⊕P is the record r : Poset(T) such that the clauses in (28) hold.

(28)
a. r.set = P.set ∪ {a}
b. r.rel = P.rel ∪ { [ left  = a
                       right = x ] | x ∈ P.set }
c. r.cpo = ⟨r.rel, r.set⟩
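The operation a ⊕ P of (28) can be sketched as follows, with the 'left'/'right' records simplified to Python tuples; in KoS this is the operation used later (section 5) to push a newly raised question onto QUD. Names and encoding are, again, our own illustrative choices.

```python
# Illustrative encoding of the operation a ⊕ P from (28): a is added to the
# set and related to every element already in P.set (pairs are simplified
# to tuples rather than records with 'left' and 'right' fields).

def push(a, P):
    """Return the record a ⊕ P as a dict with 'set', 'rel' and 'cpo' fields."""
    assert a not in P["set"]
    new_rel = P["rel"] | {(a, x) for x in P["set"]}
    new_set = P["set"] | {a}
    return {"set": new_set, "rel": new_rel, "cpo": (new_rel, new_set)}

empty = {"set": set(), "rel": set(), "cpo": (set(), set())}
qud = push("q2", push("q1", empty))
print(qud["set"])   # {'q1', 'q2'}
print(qud["rel"])   # {('q2', 'q1')}: q2 is related to everything added earlier
```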
2.7 The string theory of events

So far we have talked about situations or events in terms of ptypes or record types which have ptypes in some of their fields. This gives us a rather static view of events and does not give an analysis of the changes that take place as an event unfolds. A single type is rather like a snapshot of an event at one point in its development. In an important series of papers, including Fernando (2004, 2006, 2008, 2009), Tim Fernando has proposed that events should be analyzed in terms of strings of snapshots or observations. In TTR we adapt these ideas by introducing regular types: types of strings of objects corresponding to the kinds of strings you find in regular languages in formal language theory (Hopcroft & Ullman, 1979; Partee et al., 1990). (29) is an account of the two main kinds of regular types that we will use here, where a⌢b represents the concatenation of two objects a and b.

(29)
a. if T1, T2 ∈ Type, then T1⌢T2 ∈ Type
   a : T1⌢T2 iff a = x⌢y, x : T1 and y : T2
b. if T ∈ Type then T+ ∈ Type
   a : T+ iff a = x1⌢. . .⌢xn, n > 0 and for each i, 1 ≤ i ≤ n, xi : T

T1⌢T2 is the type of strings where something of type T1 is concatenated with something of type T2. T+ is the type of non-empty strings of objects of type T. Suppose for example that we want to represent the type of a game of fetch as a game played between a human, a, and a dog, b, involving a stick, c, in which the human picks up the stick, attracts the attention of the dog, and throws the stick, whereupon the dog runs after the stick and picks it up, returning it to the human, after which the cycle can start from the beginning. The type of this event would be (30).

(30) (pick up(a,c)⌢attract attention(a,b)⌢throw(a,c)⌢run after(b,c)⌢pick up(b,c)⌢return(b,c,a))+
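The witnessing conditions in (29) can be read as a simple recognition procedure over strings, here Python tuples of event-type labels. The following sketch (our own helper names and encoding) checks membership in T1⌢T2 and in T+, and uses it to recognize strings of the fetch type (30).

```python
# Illustrative sketch of the two regular types in (29): concatenation types
# T1^T2 and non-empty repetition types T+, with strings modelled as tuples.

def is_concat(s, is_T1, is_T2):
    """s : T1^T2 iff s = x^y with x : T1 and y : T2."""
    return any(is_T1(s[:i]) and is_T2(s[i:]) for i in range(len(s) + 1))

def is_plus(s, is_T):
    """s : T+ iff s is a non-empty concatenation of strings of type T."""
    if len(s) == 0:
        return False
    return any(is_T(s[:i]) and (i == len(s) or is_plus(s[i:], is_T))
               for i in range(1, len(s) + 1))

# One cycle of the game of fetch, as a string of event-type labels.
cycle = ("pick_up(a,c)", "attract_attention(a,b)", "throw(a,c)",
         "run_after(b,c)", "pick_up(b,c)", "return(b,c,a)")
is_cycle = lambda s: s == cycle
fetch = lambda s: is_plus(s, is_cycle)     # the type (30): cycle+

prefix = lambda s: s == cycle[:2]
rest = lambda s: s == cycle[2:]
print(is_concat(cycle, prefix, rest))   # True: a concatenation of the two parts
print(fetch(cycle))                     # True
print(fetch(cycle + cycle))             # True: two rounds of fetch
print(fetch(cycle[:3]))                 # False: an incomplete cycle
```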
2.8 Inference from partial observation of events

An important fact about our perception of events is that we can predict the type of the whole event when we have only perceived part of the event. Thus if we see a human and a dog playing with a stick and we see the human pick up the stick and attract the dog's attention we might well predict that the type of the whole event is one of playing fetch. We can represent this prediction by a function, as in (31).

(31) λr : [ x      : Ind
            chuman : human(x)
            y      : Ind
            cdog   : dog(y)
            z      : Ind
            cstick : stick(z)
            e      : pick up(x,z)⌢attract attention(x,y) ]
     ([e : play fetch(r.x,r.y)])
Notice that this function is what we have called a dependent type, that is, it takes an object (in this case the observed situation) and returns a type (in this case the type of the predicted situation). Notice that this ability to predict types of situations on the basis of partial observations is not particular to humans. The dog realizes what is going on and probably starts to run before the human has actually thrown the stick. However, in Section 3 we will suggest that humans build on this ability in their perception and analysis of speech events.
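The dependent type in (31) is simply a function from an observed situation to a predicted type. A minimal sketch, with the predicted type represented by a ptype label and the field names taken from (31) (the encoding itself is our own):

```python
# Illustrative sketch of the prediction function (31): it maps a record of
# the observed situation to (a label standing in for) the predicted type.

def predict_fetch(r):
    """Given a record supplying the fields in (31), return the predicted type."""
    assert {"x", "y", "z", "e"} <= set(r), "r must supply the fields in (31)"
    return f"play_fetch({r['x']},{r['y']})"

observed = {
    "x": "a", "chuman": "s1",          # a human a
    "y": "b", "cdog": "s2",            # a dog b
    "z": "c", "cstick": "s3",          # a stick c
    "e": ("pick_up(a,c)", "attract_attention(a,b)"),   # the observed string
}
print(predict_fetch(observed))   # play_fetch(a,b)
```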
3 Grammar in TTR

In Section 2 we suggested that an important capability that agents have is the prediction of the type of a complete event on the basis of a partial observation of an event. We suggested that functions from observed situations to predicted situation types (a kind of dependent type) can be used in modelling this, taking the example of the game of fetch. Very similar inferences are involved in the perception of linguistic events, though there are also some important differences. In the case of the game of fetch the predicted type is a type of situation which you could in principle perceive completely. In the example we gave you are inferring the nature of the event as it will develop later in time. The case of linguistic perception is rather more abstract. We are inferring types which may hold simultaneously with what we have observed, and the predicted event types may be of events that are not directly perceivable. Thus we are able to perceive events belonging to phonological or phonetic types but from these we infer types relating to syntactic and semantic structure whose instances are not directly perceivable. It is this kind of reasoning about abstract objects which seems so important to human linguistic ability. Nevertheless the fundamental mechanism is the same: we are mapping from an observation to a type of something unobserved.
Grammar rules involve a prediction on the basis of a string of linguistic events. Thus they are functions of the form (32).

(32) λs : T1⌢. . .⌢Tn (T)

where the Ti and T are sign types, which, as we will see below, are types which have both a directly perceivable and a non-directly perceivable component. Thus grammar rules are functions from strings of linguistic events to a type of a single linguistic event. An example would be the observation of a string consisting of a noun-phrase event followed by a verb-phrase event and predicting that there is a sentence event, that is, what is normally written in linguistic formalisms as the phrase-structure rule S → NP VP.
Sign types correspond to the notion of sign in HPSG (Sag et al., 2003). The type Sign could be thought of as (33).[9]

(33) [ s-event : SEvent
       synsem  : [ cat      : Cat
                   constits : {Sign}
                   cont     : Cont ] ]

Here we use 'synsem' ("syntax and semantics") as a field corresponding to both syntactic and semantic information, although this, and also what follows below, could be adjusted to fit more closely with other versions of HPSG. However, for technical reasons having to do with recursion (ultimately signs may be contained within signs), we have to define Sign as a basic type which meets the condition (34).

(34) r : Sign iff r : [ s-event : SEvent
                        synsem  : [ cat      : Cat
                                    constits : {Sign}
                                    cont     : Cont ] ]

[9] For more detailed discussion of the grammar discussed here and below see Cooper (2012).
We have introduced three new types here: SEvent, the type of speech events; Cat, the type of categories; and Cont, the type of semantic contents. We will take each of these in turn and return to the 'constits'-field (for "constituents") in synsem later.

A minimal solution for the type SEvent is (35).

(35) [ phon   : Phon
       s-time : TimeInt
       uttat  : uttered at(phon, s-time) ]
Here we have introduced the types Phon, phonology, and TimeInt, time interval, which we will further specify below. A more detailed type for SEvent might be (36).

(36) [ e-time : TimeInt
       e-loc  : Loc
       sp     : Ind
       au     : Ind
       phon   : Phon
       e      : utter(sp,phon,au,e-time,e-loc) ]

where we have in addition fields for event location, speaker and audience. This corresponds more closely to the kind of information we normally associate with speech act theory (Searle, 1969). However, this type may be too restrictive: more than one person may be in the audience; more than one speaker may collaborate on a single speech event, as is shown by work on split utterances (Purver et al., 2010). For present purposes it will be sufficient to use the simpler type (35) for speech events.
We will take the type Phon to be the type of a non-empty string of phoneme utterances, that is, Phoneme+. We could use phonetic symbols to represent types of individual phoneme utterances. For example, u : h would mean that u is an utterance of the phoneme h (the phoneme being modelled as a TTR type). u : h⌢æy would mean that u is an utterance of the phoneme string which we denote in orthography by 'hi'. It is not our intention to give a detailed account of phonology here and we will represent this string type using the orthography as hi. Note that hi is a subtype of Phon.
We define the type TimeInt, for time interval, to be (37).

(37) [ start : Time
       end   : Time
       c<    : start<end ]

where Time is a basic type whose witnesses are time points and < is a predicate (here used in infix notation) which requires that its first argument is ordered before its second argument.
The 'constits'-field in synsem is for the set of constituents (including all constituents, not just daughters (immediate constituents)).

In Section 5 we will extend the definition of Sign to include a field for a dialogue gameboard.
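To illustrate the shape of (32)–(34), here is a rough sketch in which a sign is a nested dictionary with 's-event' and 'synsem' fields and a grammar rule maps a string of signs to a constraint standing in for the type of the larger sign. The category labels, the functional content and all helper names in the example are our own simplifications, not definitions from the text.

```python
# Illustrative sketch of signs and a grammar rule of the form (32):
# a function from a string of sign events to (a stand-in for) the type of
# the predicted larger sign.

def sign(phon, cat, cont, constits=frozenset()):
    return {"s-event": {"phon": phon, "s-time": {"start": 0, "end": 1}},
            "synsem": {"cat": cat, "constits": constits, "cont": cont}}

def s_rule(signs):
    """S -> NP VP: given a string (tuple) of an np sign and a vp sign,
    return a constraint representing the predicted sentence sign."""
    np, vp = signs
    assert np["synsem"]["cat"] == "np" and vp["synsem"]["cat"] == "vp"
    return {"synsem": {"cat": "s",
                       "cont": f"{vp['synsem']['cont']}({np['synsem']['cont']})"}}

np = sign(("a", "boy"), "np", "some_boy")
vp = sign(("runs",), "vp", "run")
print(s_rule((np, vp)))
# {'synsem': {'cat': 's', 'cont': 'run(some_boy)'}}
```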
4 A theory of abstract entities

An ontology including abstract entities, such as propositions, questions, and outcomes, is a necessary ingredient for accounts of illocutionary acts such as assertion, querying, and commanding, as well as of attitude reports. Building on a conception articulated 30 years earlier by Austin (1961), Barwise & Etchemendy (1987) developed a theory of propositions in which a proposition is a structured object prop(s, σ), individuated in terms of a situation s and a situation type σ. Given the ':' relation between situations and their types there is a straightforward notion of truth and falsity:

(38)
a. prop(s, σ) is true iff s : σ (s is of type σ).
b. prop(s, σ) is false iff s ̸: σ (s is not of type σ).

A detailed such ontology extending the original situation semantics ontology was developed in Ginzburg & Sag (2000). This approach has subsequently been developed in TTR in works such as Ginzburg (2011, 2012). We start by discussing how to add propositions into TTR.
For many purposes the type theory already developed has entities that could be identified with Austinian propositions, an identification frequently assumed in past work in type theory via the slogan propositions as types. Cooper (2005b) develops such an account, in which a proposition p is taken to be a record type. A witness for this type is a situation. On this strategy, a witness is not directly included in the semantic representation. Indeed, record types are competitive in such a role: they are sufficiently fine-grained to distinguish identity statements that involve distinct constituents. (39a) would correspond to the record type in (39c), whereas (39b) corresponds to the record type in (39d). Moreover, in this set-up substitutivity of co-referentials (39e) and of cross-linguistic equivalents ((39f), the Hebrew equivalent of (39a)) can be enforced:
(39)
a. Enescu is identical with himself.
b. Poulenc is identical with himself.
c. [c : Identical(enesco, enesco)]
d. [c : Identical(poulenc, poulenc)]
e. He is identical with himself.
f. Enesku zehe leacmo.
A situational witness for the record type could also be deduced to explicate cases of event anaphora, as in (40); indeed, a similar strategy is invoked in an analysis of nominal anaphora in Ginzburg (2012):

(40)
a. A: Jo and Mo got married yesterday. It was a wonderful occasion.
b. A: Jo's arriving next week. B: No, that's happening in about a month.
Nonetheless, here we develop an explicitly Austinian approach, where the situational witness is directly included in the semantic representation. The original Austinian conception was that s is a situation deictically indicated by a speaker making an assertion;[10] teasing out the semantic difference between implicit and explicit witnesses is a difficult semantic task. The Austinian approach is important for negation (see section 7.1). Explicitly Austinian propositions can also play a role in characterizing the communicative process: in section 6 we will show that locutionary propositions, individuated in terms of an utterance event u0 as well as its grammatical type Tu0, allow one to simultaneously define update and clarification potential for utterances. In this case, there are potentially many instances of distinct locutionary propositions, which need to be differentiated on the basis of the utterance token (minimally, any two utterances classified as being of the same type by the grammar).

[10] One could also construe s as evidence (a body of knowledge, a database) which provides support (or otherwise) for the type σ.
Assuming we adopt an explicitly Austinian approach, then on the current account the type of propositions is the record type (41a). The correspondence with the situation semantics conception is quite direct. We can define truth conditions as in (41b).

(41)
a. Prop =def [ sit      : Rec
               sit-type : RecType† ]
b. A proposition p = [ sit      = s0
                       sit-type = ST0 ] is true iff s0 : ST0

Here the type RecType† is a basic type which denotes the type of record types closed under meet, join and negation. That is, we require:

(1) if T : RecType, then T : RecType†
(2) if T1, T2 : RecType†, then T1∧T2, T1∨T2, ¬T1 : RecType†
(3) Nothing is of type RecType† except as required above.
If p : Prop and p.sit-type is T1∧T2 (T1∨T2, ¬T) we say that p is the conjunction (disjunction) of [sit = p.sit, sit-type = T1] and [sit = p.sit, sit-type = T2] (the negation of [sit = p.sit, sit-type = T]). This means that Austinian propositions are not closed under conjunction and disjunction. You can only form the conjunction and disjunction of Austinian propositions which have the same situation. If p1 and p2 are Austinian propositions such that p1.sit = p2.sit, we say that p1 entails p2 just in case p1.sit-type ⊑ p2.sit-type.
A subtype of Prop that will be important below is the type of locutionary propositions, LocProp. Locutionary propositions are Austinian propositions about utterances. LocProp is defined as follows:

LocProp =def [ sit      : Sign
               sit-type : RecType† ]
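The Austinian definitions in (41) translate directly into a small sketch: a proposition is a record with 'sit' and 'sit-type' fields, truth is witnessing, and entailment between propositions about the same situation is subtyping of their sit-types. Witness sets are given explicitly here and the subtype relation is passed in as a parameter; the encoding and helper names are our own.

```python
# Illustrative encoding of Austinian propositions (41a): records with a
# 'sit' field and a 'sit-type' field.

witnesses = {"run(bo)": {"s0"}}           # s0: a situation in which Bo runs

def true_prop(p):
    """(41b): p is true iff p['sit'] is a witness for p['sit-type']."""
    return p["sit"] in witnesses.get(p["sit-type"], set())

def entails(p1, p2, subtype):
    """p1 entails p2 iff they concern the same situation and
    p1's sit-type is a subtype of p2's sit-type."""
    return p1["sit"] == p2["sit"] and subtype(p1["sit-type"], p2["sit-type"])

p = {"sit": "s0", "sit-type": "run(bo)"}
q = {"sit": "s1", "sit-type": "run(bo)"}
print(true_prop(p))   # True
print(true_prop(q))   # False: s1 is not a witness for run(bo)
print(entails(p, {"sit": "s0", "sit-type": "run(bo)"}, lambda a, b: a == b))  # True
```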
4.1 Questions

Given the existence of Austinian-like propositions and a theory of λ-abstraction given to us by the existence of function types, it is relatively straightforward to develop a theory of questions as propositional abstracts in TTR. Extensive motivation for the view of questions as propositional abstracts is provided in Ginzburg (1995); Ginzburg & Sag (2000). TTR contributes to this by providing an improved notion of simultaneous, restricted abstraction, as we will see shortly.

A (basic, non-compound) question will be a function from records into propositions. As such, questions are automatically part of the type theoretic ontology. Let us start by considering some very simple examples of interrogatives and their TTR representations. (42) exemplifies the denotations (contents) we can assign to a unary and a binary wh-interrogative. We use r_ds here to represent the record that models the described situation in the context. The meaning of the interrogative would be a function defined on contexts which provide the described situation and which return as contents the functions given in (42). The unary question ranges over instantiations by persons of the proposition "x runs in situation r_ds". The binary question ranges over pairs of persons x and things y that instantiate the proposition "x touches y in situation r_ds":
(42)
a. who ran ↦
   λr : [ x    : Ind
          rest : person(x) ]
   ([ sit      = r_ds
      sit-type = [c : run(r.x)] ])
b. who touched what ↦
   λr : [ x     : Ind
          rest1 : person(x)
          y     : Ind
          rest2 : thing(y) ]
   ([ sit      = r_ds
      sit-type = [c : touch(r.x,r.y)] ])
What of polar questions? Ginzburg & Sag (2000) proposed that these are 0-ary abstracts, though the technical apparatus involved in explicating this notion in their framework based on non-well-founded set theory was quite complex. TTR, however, offers a simple way to explicate 0-ary abstraction. If we think of a unary abstract as involving a domain type with one field for an individual and a binary abstract as one whose domain type contains two such fields, etc., then by analogy the domain type of a 0-ary abstract would simply be the empty record type [] (that is, the type Rec of records).[11] This makes a 0-ary abstract a constant function from the universe of all records. (43) exemplifies this:

(43) Did Bo run ↦
     λr : Rec ([ sit      = r_ds
                 sit-type = [c : run(bo)] ])

[11] This is the type all records satisfy, since it places no constraints on them.
The fact that questions individually are part of the type theoretic world is not the end of the story. For various linguistic tasks (e.g. specifying the selectional requirements of verbs like 'ask', 'wonder', and 'investigate'), and for various dialogical tasks (e.g. the formulation of various conversational rules) one needs to appeal to a type Question (see the chapter on questions, Wiśniewski, this volume). This means that we need to have a characterization of this type within TTR. One such characterization is given in Ginzburg (2012); a more recent and, arguably, more constructive proposal can be found in Ginzburg et al. (2014a). Here we offer a somewhat simpler characterization. The domain of a question (polar or wh) is always characterized by a subtype of RecType. Thus we define the type Question by (44).

(44) Question =def ⋁_{X⊑RecType} (X→Prop)

The type of polar questions, PolQuestion, is given in (45).

(45) PolQuestion =def (Rec →c Prop)

That is, polar questions are constant functions from situations (records) to propositions as discussed in Ginzburg (2012).
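Questions as propositional abstracts can be sketched as ordinary Python functions from records to (Austinian) propositions, with polar questions as constant functions on the whole universe of records; instantiation, which underlies the notion of simple answerhood discussed below, is then checkable by applying the function. Here r_ds stands in for the contextually supplied described situation, and the helper names are our own.

```python
# Illustrative encoding of questions as functions from records to
# propositions, as in (42) and (43).

r_ds = "s_described"

def who_ran(r):
    """Unary wh-question: maps a record supplying a person to a proposition."""
    return {"sit": r_ds, "sit-type": f"run({r['x']})"}

def did_bo_run(r):
    """Polar question: a constant function on the type Rec of all records."""
    return {"sit": r_ds, "sit-type": "run(bo)"}

print(who_ran({"x": "jo", "rest": "person(jo)"}))
# {'sit': 's_described', 'sit-type': 'run(jo)'}
print(did_bo_run({}))           # the same value for every record:
print(did_bo_run({"x": "jo"}))  # {'sit': 's_described', 'sit-type': 'run(bo)'}

# A proposition p is an instantiation of question q iff p = q(r) for some r
# in q's domain; a simple answer is an instantiation or its negation.
def instantiates(p, q, candidates):
    return any(q(r) == p for r in candidates)

print(instantiates({"sit": r_ds, "sit-type": "run(jo)"},
                   who_ran, [{"x": "jo"}, {"x": "bo"}]))   # True
```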
Answerhood is one of the essential testing grounds for a theory of questions. Abstracts can be used to underspecify answerhood. This is important given that NL requires a variety of answerhood notions, not merely exhaustive answerhood or notions straightforwardly definable from it. Moreover, as with questions, answerhood needs to be explicable within type theory. This is because answerhood figures as a constituent relation of the lexical entries of resolutive verbs[12] and in rules regulating felicitous responses in dialogue management (see section 5).

There are a number of notions of answerhood that are of importance to dialogue. One relates to coherence: any speaker of a given language can recognize, independently of domain knowledge and of the goals underlying an interaction, that certain propositions are about or directly concern a given question. We will call this Aboutness. The simplest notion of answerhood we can define on the basis of an abstract is one we will call, following Ginzburg & Sag (2000), simple answerhood. In order to do this we will use the following notion:

A proposition p is an instantiation of a question q just in case there is some r in the domain of q such that q(r) = p

[12] For more detailed discussion see Ginzburg & Sag (2000, Chapter 3, section 3.2; Chapter 8, section 8.3).
(46) α is a simple answer to q iff α is an instantiation of q or the negation of an instantiation of q.

Given these definitions it is straightforward to show:

(47)
a. If q is an n-ary question of type (T→Prop) and α is a simple answer to q then there is some r : T such that α is q(r) or ¬q(r).
b. In particular, if q is the polar question λr:[](p) and α is a simple answer to q then α is either p or ¬p.
Simple answerhood covers a fair amount of ground. But it clearly underdetermines aboutness. On the polar front, it leaves out the whole gamut of answers to polar questions that are weaker than p or ¬p, such as conditional answers 'If r, then p' (e.g. (48a)) or weakly modalized answers 'probably/possibly/maybe/possibly not p' (e.g. (48b)). As far as wh-questions go, it leaves out quantificational answers (48c–g), as well as disjunctive answers. These missing classes of propositions are pervasive in actual linguistic use:
(48)
a. Christopher: Can I have some ice-cream then?
   Dorothy: you can do if there is any. (BNC, KBW)
b. Anon: Are you voting for Tory?
   Denise: I might. (BNC, KBU, slightly modified)
c. Dorothy: What did grandma have to catch?
   Christopher: A bus. (BNC, KBW, slightly modified)
d. Rhiannon: How much tape have you used up?
   Chris: About half of one side. (BNC, KBM)
e. Dorothy: What do you want on this?
   Andrew: I would like some yogurt please. (BNC, KBW, slightly modified)
f. Elinor: Where are you going to hide it?
   Tim: Somewhere you can't have it. (BNC, KBW)
g. Christopher: Where is the box?
   Dorothy: Near the window. (BNC, KBW)
One straightforward way to enrich simple answerhood is to consider the relation that emerges by closing simple answerhood under disjunction. Ginzburg (1995); Ginzburg & Sag (2000) show that aboutness as defined in (49) seems to encompass the various classes of propositions exemplified in (48).

(49) p is About q iff p entails a disjunction of simple answers to q.
Answerhood in the 'aboutness' sense is clearly distinct from a highly restricted notion of answerhood, that of being a proposition that resolves or constitutes exhaustive information about a question. This latter sense of answerhood, which is restricted to true propositions, has been explored in great detail in the formal semantics literature, since it is a key ingredient in explicating the behaviour of interrogatives embedded by resolutive predicates such as 'know', 'tell' and 'discover'. We will not discuss this here but refer the reader to Ginzburg (2012).
Many queries are responded to with a query. A large proportion of these are clarification requests, to be discussed in section 6. But in addition to these, there are query responses whose content directly addresses the question posed, as exemplified in (50):

(50)
a. A: Who murdered Smith? B: Who was in town?
b. A: Who is going to win the race? B: Who is going to participate?
c. Carol: Right, what do you want for your dinner?
   Chris: What do you (pause) suggest? (BNC, KBJ)
d. Chris: Where's mummy?
   Emma: Mm?
   Chris: Mummy?
   Emma: What do you want her for? (BNC, KBJ)
There has been much work on relations among questions within the framework of Inferential Erotetic Logic (IEL) (see e.g. Wiśniewski (2001, 2003) and Wiśniewski (this volume)), yielding notions of q(uestion)-implication. From this a natural hypothesis can be made about such query responses, as in (51a). A related proposal, first articulated by Carlson (1983), is that they are constrained by the semantic relation of dependence, or its converse, influence.

(51)
a. q2 can be used to respond to q1 if q2 influences q1.
b. q2 influences q1 iff any proposition p such that p Resolves q2 also satisfies: p entails some r such that r is About q1.

Its intuitive rationale is this: discussion of q2 will necessarily bring about the provision of information about q1. The actual characterization of query responses is complex, both empirically and theoretically. For a detailed account using TTR see Lupkowski & Ginzburg (2014).
5 Interaction on dialogue gameboards

On the approach developed in KoS the analysis of dialogue is formulated at a level of information states, one per conversational participant. Each information state consists of two 'parts', a private part and the dialogue gameboard that represents information that arises from publicized interactions. For recent psycholinguistic evidence supporting this partition see Brown-Schmidt et al. (2008).

Information states are records of the type given in (52a). For now we focus on the dialogue gameboard, various aspects of which are exploited in the toolbox used to account for the phenomena exemplified in our initial example from the BNC. The type of dialogue gameboards is given in (52b). The spkr, addr fields allow one to track turn ownership. Facts represents conversationally shared assumptions. Moves and Pending represent, respectively, lists of moves that have been grounded and moves that are as yet ungrounded. QUD tracks the questions currently under discussion.
(52)
a. TotalInformationState (TIS) =def [ dialoguegameboard : DGBType
                                      private           : Private ]
b. DGBType =def [ spkr     : Ind
                  addr     : Ind
                  utt-time : TimeInt
                  c-utt    : addressing(spkr,addr,utt-time)
                  Facts    : {Prop}
                  Pending  : [LocProp]
                  Moves    : [LocProp]
                  QUD      : poset(Question) ]
Our job as dialogue analysts is to construct a theory that will explain how conversational interactions lead to observed conversational states of type DGBType. Let us consider how an initial conversational state looks, that is, the state as the first utterance of the dialogue is made. Initially no moves have been made and no issues introduced, so a dialogue gameboard will be of the type in (53):

(53) [ spkr       : Ind
       addr       : Ind
       utt-time   : TimeInt
       c-utt      : addressing(spkr,addr,utt-time)
       Facts={}   : {Prop}
       Pending=[] : [LocProp]
       Moves=[]   : [LocProp]
       QUD={}     : poset(Question) ]
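A dialogue gameboard of type (52b), and the initial state (53), can be sketched as a Python dictionary whose field names follow the text; the encoding itself is, of course, only illustrative.

```python
# Illustrative sketch of an initial dialogue gameboard as in (53).

def initial_dgb(spkr, addr, utt_time):
    return {
        "spkr": spkr,
        "addr": addr,
        "utt-time": utt_time,
        "c-utt": f"addressing({spkr},{addr},{utt_time})",
        "Facts": set(),        # no shared assumptions recorded yet
        "Pending": [],         # no ungrounded utterances
        "Moves": [],           # no grounded moves
        "QUD": {"set": set(), "rel": set()},   # no questions under discussion
    }

dgb = initial_dgb("A", "B", "t0")
print(dgb["Moves"], dgb["QUD"]["set"])   # [] set()
```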
This allows us to construct a type corresponding to a lexical entry for a greeting word such as 'hi', as in (54). Here we assume that the definition of the type Sign in Section 3 has been modified to include a field for a dialogue gameboard:

Sign =def [ s-event : SEvent
            synsem  : [ cat  : Cat
                        cont : Cont ]
            dgb     : DGBType ]

This represents an extension of the Saussurean notion of sign where we not only take account of the signifier ('s-event') and the signified ('synsem') but also the context in which the signification takes place (here represented by 'dgb').
(54) Sign ∧.
     [ s-event : [ phon   : hi
                   s-time : TimeInt ]
       synsem  : [ cat=interj : Cat
                   cont=[ sit      = r_ds
                          sit-type = [e : greet(⇑dgb.spkr, ⇑dgb.addr, ⇑dgb.utt-time)] ] : Prop ]
       dgb     : [ spkr     : Ind
                   addr     : Ind
                   utt-time=s-event.s-time : TimeInt
                   moves=[] : [Prop]
                   qud={}   : poset(Question) ] ]
Here, as before in our discussion of questions, r_ds is the described situation as determined by the context. The use of '⇑' in the 'sit-type'-field is a convenient informal notation for paths occurring in a record type embedded within a larger record type but not lying on a path in that record type. It indicates that the path is to be found in the next higher record type. It clears up an ambiguity that arises because we are using the notation that does not make explicit the dependent types that are being used, as discussed in Section 2.2 above.
How do we specify the effect of a conversational move? The basic units of change are mappings between dialogue gameboards that specify how one gameboard configuration can be modified into another on the basis of dialogue moves. We call a mapping between DGB types a conversational rule. The types specifying its domain and its range we dub, respectively, the pre(conditions) and the effects, both of which are supertypes of the type DGBType. A conversational rule that enables us to explain the effect a greeting, the initial conversational move, has on the DGB is given in (55). It is a record type which contains two fields. The 'pre(condition)'-field is for a dialogue gameboard of a certain type and the 'effects'-field provides a type for the updated gameboard. The precondition in this example requires that both Moves and QUD are empty; the sole effect is to push the proposition associated with hi onto the list in the 'moves'-field.
(55) [ pre : DGBType ∧. [ spkr     : Ind
                          addr     : Ind
                          utt-time : TimeInt
                          moves=[] : [Prop]
                          qud={}   : poset(Question) ]
       effects = [ moves=[ [ sit      = r_ds
                             sit-type = [e : greet(pre.spkr, pre.addr, pre.utt-time)] ]
                           | pre.moves ] : [Prop] ] : RecType ]
The form for update rules proposed here is thus

(56) [ pre        : T1
       effects=T2 : RecType ]

An agent who believes that they have a current state s of type T1, that is, whose hypothesis about the current state is that it belongs to type T such that T ⊑ T1, can use s to anchor T2 to obtain T2[s] and then use asymmetric merge to obtain a type for the new state: T ∧. T2[s].
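The general recipe just described (check that the current gameboard is of the precondition type, anchor the effects to it, and asymmetrically merge the result in) can be sketched as follows for the greeting rule (55). Dictionary update stands in for asymmetric merge, the precondition check is deliberately crude, and all names here are our own illustrative choices.

```python
# Illustrative sketch of applying an update rule of the form (56),
# instantiated with the greeting rule (55): push a greet-move onto Moves.

def applies(dgb, pre):
    """Very rough precondition check: every field mentioned in pre with a
    fixed ('manifest') value must have that value in dgb."""
    return all(dgb.get(k) == v for k, v in pre.items())

def update(dgb, rule):
    assert applies(dgb, rule["pre"])
    effects = rule["effects"](dgb)      # effects may depend on the old state
    new = dict(dgb)
    new.update(effects)                 # stands in for asymmetric merge
    return new

greeting_rule = {
    "pre": {"Moves": [], "QUD": set()},
    "effects": lambda dgb: {
        "Moves": [f"greet({dgb['spkr']},{dgb['addr']},{dgb['utt-time']})"]
    },
}

dgb0 = {"spkr": "A", "addr": "B", "utt-time": "t0", "Moves": [], "QUD": set()}
print(update(dgb0, greeting_rule)["Moves"])
# ['greet(A,B,t0)']
```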
The rule (57) says that given a question q and ASK(A,B,q) being the LatestMove, one can update QUD with q as QUD-maximal.

(57) Ask QUD-incrementation
     [ ques       : Question
       moves-tail : [Prop]
       pre        : DGBType ∧. [ spkr : Ind
                                 addr : Ind
                                 moves=[ [ sit      = r_ds
                                           sit-type = [e : ask(pre.spkr, pre.addr, ques)] ]
                                         | moves-tail ] : [Prop]
                                 qud  : poset(Question) ]
       effects = [ qud=ques⊕pre.qud : poset(Question) ] : RecType ]
Next we introduce the rule QSPEC. QSPEC can be thought of as a 'relevance maxim': it characterizes the contextual background of reactive queries and assertions. (58) says that if q is QUD-maximal, then subsequent to this the next move is constrained to be q-specific (Ginzburg, 2012), that is, either about q (a partial answer) or a question on which q depends. Moreover, this move can be made by either of the speech event participants. The constraint in (58) involves merging a constraint concerning the information about QUD and Moves with a constraint entitled TurnUnderspec, which merely specifies that the speaker and addressee of the effects are distinct and drawn from the set consisting of the initial speaker and addressee:
(58)
a. QSPEC
   [ pre     : [ qud = ⟨q, Q⟩ : poset(Question) ]
     effects : TurnUnderspec ∧. [ r  : Prop ∨ Question
                                  R  : IllocRel
                                  LatestMove = R(spkr,addr,r) : IllocProp
                                  c1 : About(r,q) ∨ Depend(q,r) ] ]
b. TurnUnderspec =
   [ PrevAud = {pre.spkr, pre.addr} : {Ind}
     spkr : Ind
     c1   : member(spkr, PrevAud)
     addr : Ind
     c2   : member(addr, PrevAud) ∧ addr ≠ spkr ]
QSPEC involves factoring out turn taking from the assumption that A asking q involves B answering it. In other words, the fact that A has asked q leaves underspecified who is to address q (first or at all). This is justified by self-answering data such as the initial example we considered in the introduction (1), as well as (59a,b), where the querier can or indeed needs to keep the turn, as well as multi-party cases such as (59c) where the turn is multiply distributed:

(59)
a. Vicki: When is, when is Easter? March, April? (BNC, KC2)
b. Brian: you could encourage, what's his name? Neil. (BNC, KSR)
c. A: Who should we invite? B: Perhaps Noam. C: Martinu. D: Bedrich.

Explicating the possibility of self-answering is one of the requirements for dealing with our initial example (1).
Page: 26 job: rc-jg-ttrsem-final macro: handbook.cls date/time: 3-Apr-2015/11:40
TTR for Natural Language Semantics 27
6 Unifying metacommunicative and illocutionary interaction
Establishing that the most recent move has been understood to the satisfaction of the conversationalists has come to be known as grounding, following extensive empirical work by Herb Clark and his collaborators (Clark & Schaefer (1989); Clark & Wilkes-Gibbs (1986); Clark (1996)). One concrete task for a theory of dialogue is to account for the potential for and meaning of acknowledgement phrases, as in (60), either once the utterance is completed, as in (60a), or concurrently with the utterance, as in (60b):
(60)
a. Tommy: So Dalmally I should safely say was my first schooling. Even though I was about eight and a half. Anon 1: Mm. Now your father was the the stocker at Tormore is that right? (British National Corpus (BNC), K7D)
b. A: Move the train ...
B: Aha
A: ... from Avon ...
B: Right
A: ... to Danville. (Adapted from the Trains corpus, Allen et al. (1995))
An additional task is to characterize the range of (potential) presuppositions emerging in the aftermath of an utterance, whose subject matter concerns both content and form. This is exemplified in the constructed examples in (61):

(61)
a. A: Did Mark send you a love letter?
b. B: No, though it's interesting that you refer to Mark / my brother / our friend
c. B: No, though it's interesting that you mention sending
d. B: No, though it's interesting that you ask a question containing seven words.
e. B: No, though it's interesting that the final two words you just uttered start with 'l'
Developing a semantic theory that can fully accommodate the challenges of grounding is far from straightforward. A more radical challenge, nonetheless, is to explicate what goes on when an addressee cannot ground her interlocutor's utterance. We suggest that this is more radical because it ultimately leads to the seemingly radical conclusion of an intrinsic semantic indeterminacy: in such a situation the public context is no longer identical for the interlocutors—the original speaker can carry on, blissfully unaware that a problem exists, utilizing a 'grounded context', whereas if the original addressee takes over, the context is shifted to one which underwrites a clarification request. This potential context–splitting is illustrated in (62), originally discussed in Ginzburg (1997). The data in (62) illustrate that the contextual possibilities for resolving the fragment 'Bo?' are distinct for the original speaker A and the original addressee B. Whereas there is one common possibility, the short answer reading, only B has the two clarification request readings, whereas only A has a self-correction reading, albeit one that probably requires a further elaboration:
(62)
a. A: Who does Bo admire? B: Bo?
Reading 1 (short answer): Does Bo admire Bo?
Reading 2 (clausal confirmation): Are you asking who BO (of all people) admires?
Reading 3 (intended content): Who do you mean 'Bo'?
b. A: Who does Bo admire? Bo?
Reading 1 (short answer): Does Bo admire Bo?
Reading 2 (self correction): Did I say 'Bo'?
Clarification requests can take many forms, as illustrated in (63):

(63)
a. A: Did Bo leave?
b. Wot: B: Eh? / What? / Pardon?
c. Explicit (exp): B: What did you say? / Did you say 'Bo'? / What do you mean 'leave'?
d. Literal reprise (lit): B: Did BO leave? / Did Bo LEAVE?
e. Wh-substituted reprise (sub): B: Did WHO leave? / Did Bo WHAT?
f. Reprise sluice (slu): B: Who? / What? / Where?
g. Reprise fragments (RF): B: Bo? / Leave?
h. Gap: B: Did Bo ...?
i. Filler (fil): A: Did Bo ... B: Win? (Table I from Purver (2006))
Now, as (64a) indicates, a priori ANY sub-utterance is clarifiable, including function words like 'the', as in (64c). While the potential for repetition-oriented clarification interaction clearly applies to all utterances and their parts, it is an open question whether this is true for semantically/pragmatically oriented CRification. For empirical studies on this see Healey et al. (2003); Purver et al. (2003, 2006).
(64)
a. Who rearranged the plug behind the table?
b. Who? / rearranged? / the plug? / behind? / the table?
c. A: Is that the shark? B: The? A: Well OK, a. (based on an example in the film Jaws)
Integrating metacommunicative interaction into the DGB involves two additions to the picture we have had so far, one minor and one major. The minor addition, drawing on an early insight of Conversation Analysis (see the notion of side sequence, Schegloff (2007)), is that repair can involve 'putting aside' an utterance for a while, during which the utterance is repaired. The 'pending'-field in the dialogue gameboard is used for this. Note that this field contains a list of locutionary propositions. Most work on (dialogue) context to date involves reasoning and representation solely on a semantic/logical level. But if we wish to explicate metacommunicative interaction, then we cannot limit ourselves in this way.
If p : LocProp, the relationship between p.sit and p.sit-type can be utilized in providing an analysis of grounding/CRification conditions:

(65)
a. Grounding: p is true: the utterance type fully classifies the utterance token.
b. CRification: p is false, either because p.sit-type is weak (e.g. incomplete word recognition) or because u is incompletely specified (e.g. incomplete contextual resolution—problems with reference resolution or sense disambiguation).
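The grounding/CRification contrast in (65) can be pictured with a small sketch, again under deliberately crude assumptions: an utterance token is a dict, an utterance type is a dict of constraints, and grounding amounts to the token being fully classified by the type. The labels used (phon, ref_bo) are hypothetical, not the authors' notation.

def classifies(utt_type, utt):
    # The type fully classifies the token iff every constrained field is
    # present and satisfies its constraint.
    return all(label in utt and constraint(utt[label])
               for label, constraint in utt_type.items())

# Hypothetical utterance type for "Bo left": the phonology is fixed and the
# referent of 'Bo' must be resolved in context.
utt_type = {
    "phon":   lambda v: v == "bo left",
    "ref_bo": lambda v: v is not None,
}
utt = {"phon": "bo left", "ref_bo": None}    # reference resolution failed

if classifies(utt_type, utt):
    print("grounded")                        # (65a): p is true
else:
    print("raise a clarification request")   # (65b): p is false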
In principle one could have a theory of CRification based on generating all available CRs an utterance could give rise to. But in practice, as the data in (64) showed us, there are simply too many to be associated in a 'precompiled' form with a given utterance type.
Instead, repetition and meaning–oriented CRs can be specified by means of a uniform class of conversational rules, dubbed Clarification Context Update Rules (CCURs) in Ginzburg (2012). Each CCUR specifies an accommodated MaxQUD built up from a sub-utterance u1 of the target utterance, the maximal element of Pending (MaxPending). Common to all CCURs is a license to follow up MaxPending with an utterance which is co-propositional with MaxQUD. (66) is a simplified formulation of one CCUR, Parameter identification, which allows B to raise the issue about A's sub-utterance u: what did A mean by u?:
(66) Parameter identification:

[ max-pending : LocProp
  rst-pending : [LocProp]
  u           : Sign
  cu          : member(u, max-pending.sit.synsem.constits)
  latest-move : LocProp
  rst-moves   : [LocProp]
  pre         : DGBType ∧. [ spkr    : Ind
                             pending = [max-pending | rst-pending] : [LocProp]
                             moves   = [latest-move | rst-moves] : [LocProp]
                             qud     : [Question] ]
  effects     : [ qud = [q | pre.qud] : [Question] ] ]

where q is λr:[cont : Cont]([ e : mean(⇑pre.spkr, ⇑pre.u, r.cont) ])
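A rough rendering of how this CCUR might operate, with pending utterances and QUD modelled as lists in a dict-based gameboard (our own illustrative encoding, not the authors' implementation):

def parameter_identification(dgb, sub_utt):
    # MaxPending is the head of the pending list.
    max_pending = dgb["pending"][0]
    assert sub_utt in max_pending["constits"]     # the cu constraint in (66)
    # Accommodate the issue "what did the speaker mean by sub_utt?".
    q = ("what-does-speaker-mean-by", max_pending["spkr"], sub_utt)
    return {**dgb, "qud": [q] + dgb["qud"]}       # the effects field in (66)

dgb = {"pending": [{"spkr": "A", "constits": ["is", "bo", "here"]}],
       "moves": [], "qud": []}
print(parameter_identification(dgb, "bo")["qud"])
# [('what-does-speaker-mean-by', 'A', 'bo')]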
Parameter identification (66) underpins CRs such as (67b–c) as follow-ups to (67a). We can also deal with corrections, as in (67d), since they address the issue of what A meant by u.

(67)
a. A: Is Bo here?
b. B: Who do you mean 'Bo'?
c. B: Bo? (= Who is 'Bo'?)
d. B: You mean Jo.
We have now explicated the basis for partial comprehension in dialogue, relating to the requirements from the initial example (1).
7 Traditional semantic concerns in a dialogue perspective

In this section we will discuss two traditional concerns in semantics, negation and quantification, and show that we get a rather different view of them when we consider dialogue phenomena relating to them.
7.1 Negation

The classical view of negation is that it is a truth functional connective that maps true to false and false to true. In intuitionistic approaches as standardly used in type theory, negative propositions, ¬p, are regarded as the type of refutations of p. This leads intuitionistic logic to abandon the principle of bivalence, that propositions are either true or false. On the intuitionistic view it is possible that a proposition p has neither a proof nor a refutation. Thus such a proposition is neither true nor false.

In this section, which contains revised material from Cooper & Ginzburg (2011a,b), we will suggest an alternative view: that negation is used to pick out a negative situation type. It is crucial for this proposal to work that we are able to distinguish between positive and negative types in a way that is not possible on the standard approaches to "truth-value flipping" negation. Consider the uses of no in the (made-up) dialogue in (68) and the glosses given after them in square brackets.
(68)
Child approaches socket with nail
Parent: No. ["Don't put the nail in the socket."] Do(#n't) you want to be electrocuted?
Child: No. ["I don't want to be electrocuted."]
Parent: No. ["You don't want to be electrocuted."]
The first use of no does not relate back to any previous linguistic utterance but rather to an event which is in progress. The parent has observed the first part of the event and predicted a likely conclusion (as in the example of the game of fetch discussed in Section 2). The parent wishes to prevent the completion of the event, that is, make sure that the predicted complete event type is not realized. We claim that the central part of the meaning of negation has to do with the non-realization of some positive situation type (represented by a negative situation type), rather than a switching of truth values as on the classical logical view of negation. We see this again in the second use of no in response to the parent's query whether the type child-wants-to-be-electrocuted is realized. The child's negative response asserts that the type is not realized. The third utterance of no agrees with the previous assertion, that is, it asserts agreement that the type is (or should be) empty. A naive application of the classical view of negation as a flipping of truth values might say that no always changes the truth-value of the previous assertion. This would make the wrong prediction here, making the parent disagree with the child. Our view that negation has to do with a negative situation type means that it will be used to disagree with a positive assertion and agree with a negative assertion, which seems to be how negation works in most, if not all, natural languages.
Another important fact about this dialogue is the choice of the parent's question. The positive question is appropriate whereas the negative question would be very strange, suggesting that the child should want to be electrocuted. The classical view of negation as truth value flip has led to a view that positive and negative questions are equivalent (Hamblin, 1973; Groenendijk & Stokhof, 1997). This derives from a view of the contents of questions as the sets of propositions corresponding to their answers. While positive and negative questions do seem to have the same possible answers, it appears that the content of the question should involve something more than the set of answers. The distinction between positive and negative questions was noted for embedded questions by Hoepelmann (1983), who gives the examples in (69).
(69)
a. The child wonders whether 2 is even.
b. The child wonders whether 2 isn't even. (There is evidence that 2 is even)

Hoepelmann's observation is that the same kind of inference arises as we noticed with the negative version of the parent's question about electrocution. That is, there is a suggestion that there is reason to believe the positive, that the type is realized. This kind of inference is not limited to negative questions but seems to be associated with negation in general. Fillmore (1985) notes the examples in (70).
(70)
a. Her father doesn't have any teeth
b. #Her husband doesn't have any walnut shells
c. Your drawing of the teacher has no nose/#noses
d. The statue's left foot has no #toe/toes

The examples marked with # sound strange because they are contrary to our expectations. We in general expect that people have teeth but not walnut shells, a nose but not several noses, and several toes but not just a single toe. Fillmore discusses this in terms of frames. We would discuss this in terms of resources we have available. We can, however, create the expectations by raising issues for discussion within the dialogue, thus creating the necessary resources locally, as in (71).

(71) A: My husband keeps walnut shells in the bedroom.
B: Millie's lucky in that respect. Her husband doesn't have any walnut shells.
This discussion points to a need to distinguish between positive and negative propositions based on positive and negative situation types. We have given the two reasons in (72) for this:

(72)
a. The content of no is different depending on whether it is used in a response to a negative or positive proposition
b. The raising of a contrary expectation occurs only with negative assertions or questions

A third reason, which has been discussed in the literature (recently by Farkas & Roelofsen ms), is that some languages have different words for yes depending on whether the proposition being responded to is positive or negative. This is illustrated in (73).
(73)
a. A: Marie est une bonne étudiante
   Marie is a good student
   B: Oui / #Si.
   Yes / Yes (she is)
b. A: Marie n'est pas une bonne étudiante
   Marie isn't a good student
   B: #Oui / Si.
   Yes / Yes (she is)
In French the word oui is used to agree with a positive proposition and the word si is used to disagree with a negative proposition. Similar words exist in other languages such as German (ja/doch) and Swedish (ja/jo).

How do we know that the distinction between positive and negative propositions is a semantic distinction rather than a syntactic distinction depending on how the propositions are introduced? There are lots of ways of making a negative sentence, by using various negative words such as not, no, none, nothing. In French there are discontinuous constructions ne...pas/point/rien corresponding to "not/not at all/nothing". However, in these constructions the ne can be omitted. Thus both of the following are possible: je n'en sais rien / j'en sais rien ("I know nothing about it"). In Swedish there are two words for not which are stylistic variants: inte and ej. The generalization that allows us to recognize all these morphemes or constructions as "negations" is the semantic property they share: namely that they introduce negative propositions.
On the traditional truth-value flipping view of negation it is hard to make this semantic distinction. For example, in a possible worlds semantics a proposition is regarded as a set of possible worlds – the set of worlds in which the proposition is true. On this view the negation of a proposition is the complement of that set of worlds belonging to the proposition. There is no way of distinguishing between "positive" and "negative" sets of possible worlds. However, on a type theoretic approach the distinction can be made in a straightforward manner.

The account of negation we give here is slightly different to that given in Cooper & Ginzburg (2011a,b) and as a consequence the definitions are slightly more elegant and intuitive. We introduce negative types by the clause (74).

(74) If T is a type then ¬T is a type

Because types are intensional we can say that ¬T is distinct not only from T but also from any other type, without worrying that there might be an equivalent type that has the same witnesses. Thus simply by introducing a negative operation on types (represented by ¬) we distinguish negative types from positive ones. We can also introduce types of negative types. For example, we can introduce a type RecType¬ such that T : RecType¬ iff T = ¬T′ and T′ : RecType. We can then define a type RecType  whose witnesses are the closure of the set of negated record types under negation (in a similar manner to our definition of RecordType† on p. 18).
We can characterize witnesses for negative types by: a : ¬T iff there is some T′ such that a : T′ and T′ precludes T. We say that T′ precludes T iff either (75a) or (75b) holds.

(75)
a. T = ¬T′
b. or, T, T′ are non-negative and there is no a such that a : T and a : T′, for any models assigning witnesses to basic types and ptypes

It follows from these two definitions that (1) a : ¬¬T iff a : T and that (2) a : T ∨ ¬T is not necessary (a may not be of type T and there may not be any type which precludes T either). Thus this negation is a hybrid of classical and intuitionistic negation in that (1) normally holds for classical negation but not intuitionistic, whereas (2), that is, failure of the law of excluded middle, normally holds for intuitionistic negation but not classical negation.
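The preclusion-based witness conditions can be pictured with a small sketch in which basic types are given stipulated witnesses and negative types are built with a constructor; this is only a toy model of the definitions above (it searches for the witnessing type T′ among basic types only), not a general implementation, and the type names and witnesses are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Neg:
    body: object                        # the type being negated

# Stipulated witnesses for some hypothetical basic types.
witnesses = {"Dog": {"fido"}, "Cat": {"felix"}}

def precludes(t1, t2):
    # t1 precludes t2 iff t2 = ¬t1, or both are non-negative and disjoint.
    if t2 == Neg(t1):
        return True
    if not isinstance(t1, Neg) and not isinstance(t2, Neg):
        return not (witnesses[t1] & witnesses[t2])
    return False

def of_type(a, t):
    if isinstance(t, Neg):
        # a : ¬T iff a : T' for some T' that precludes T.
        # (Only basic types are searched here, so this toy model covers just
        # single and double negation of basic types.)
        return any(a in witnesses[t1] and precludes(t1, t.body)
                   for t1 in witnesses)
    return a in witnesses[t]

print(of_type("felix", Neg("Dog")))       # True: Cat precludes Dog
print(of_type("fido", Neg(Neg("Dog"))))   # True: ¬¬Dog behaves like Dog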
Nothing in these definitions accounts for the fact that a : ¬T seems to require an expectation that a : T. One way to do this is to refine the clause that defines witnesses for negative types: a : ¬T iff there is some T′ such that a : T′ and T′ precludes T and there is some expectation that a : T. There is some question in our minds of whether this addition should be included here or in some theory of when agents are likely to make judgements. What does it mean for there to "be some expectation"? We would like to relate this to the kind of functions we used to predict completions of events and which we also used for grammar rules, that is, to dependent types. Breitholtz (2010); Breitholtz & Cooper (2011) use dependent types to implement Aristotelian enthymemes, that is, defeasible inference patterns. Such enthymemes could be either general or local context-specific resources that we have available to create expectations.
Finally, let us see how the techniques we have developed here could be combined with Austinian propositions. The type of negative Austinian propositions can be defined as (76).

(76)  [ sit      : Rec
        sit-type : RecType  ]

The type of positive Austinian propositions can be defined as (77).

(77)  [ sit      : Rec
        sit-type : RecType ]

Thus we have a clear way of distinguishing negative and positive propositions.
7.2 Generalized quantifiers

Purver & Ginzburg (2004); Ginzburg & Purver (2012); Ginzburg (2012) introduce the Reprise Content Hypothesis (RCH) given in (78).

(78)
a. RCH (weak) A fragment reprise question queries a part of the standard semantic content of the fragment being reprised.
b. RCH (strong) A fragment reprise question queries exactly the standard semantic content of the fragment being reprised.

They use this to motivate a particular view of the semantics of quantified noun-phrases which is based on witness sets rather than families of sets as in the classical treatment. Cooper (2010, 2013) argues for combining a more classical treatment with their approach. We summarize the argument here.
In terms of TTR, a type corresponding to a "quantified proposition" can be regarded as (79).

(79)  [ restr : Ppty
        scope : Ppty
        cq    : q(restr, scope) ]

The third field represents a quantificational ptype of the form q(restriction, scope), an example of which would be (80).

(80) every(λr:[x:Ind]([c:dog(r.x)]), λr:[x:Ind]([c:run(r.x)]))

That is, 'every' is a predicate which holds between two properties, the property of being a dog and the property of running. As an example, suppose we want to represent the record type which is the content of an utterance of A thief broke in here last night. For convenience we represent the property of being a thief as thief and the property corresponding to broke in here last night as bihln. Then the content of the sentence can be (81).
(81)  [ restr = 'thief' : Ppty
        scope = 'bihln' : Ppty
        c∃    : ∃(restr, scope) ]

We can relate this proposal back to classical generalized quantifier theory, as represented in Barwise & Cooper (1981). Let the extension of a type T, [ˇT], be the set {a | a : T}, the set of witnesses for the type. Let the P-extension of a property P, [↓P], be the set in (82).

(82) {a | ∃r[r : [x : Ind] ∧ r.x = a ∧ [ˇP(r)] ≠ ∅]}

That is, the set of objects that have the property. We say that there is a constraint on models such that the type q(P1, P2) is non-empty iff the relation q∗ holds between [↓P1] and [↓P2], where q∗ is the relation between sets corresponding to the quantifier in classical generalized quantifier theory. Examples are given in (83).

(83)
a. some(P1, P2) is non-empty (that is, "true") just in case [↓P1] ∩ [↓P2] ≠ ∅
b. every(P1, P2) is non-empty just in case [↓P1] ⊆ [↓P2].
c. many(P1, P2) is non-empty just in case |[↓P1] ∩ [↓P2]| > n, where n counts as many.
The alternative analysis of generalized quantifiers that Purver & Ginzburg (2004); Ginzburg & Purver (2012); Ginzburg (2012) propose is based on the notion of witness set from Barwise & Cooper (1981). Here we will relate this notion to the notion of a witness for a type, that is, something which is of that type. We have not yet said exactly what it is that is of a quantifier ptype q(P1, P2). One solution to this is to say that it is a witness set for the quantifier, that is (84).13

(84) a : q(P1, P2) iff q∗ holds between [↓P1] and [↓P2] and a = [↓P1] ∩ [↓P2]

This definition relies on the fact that all natural language quantifier relations are conservative (Peters & Westerståhl, 2006), a notion which we can define as in (85).

(85) a quantifier q is conservative means q∗ holds between [↓P1] and [↓P2] just in case q∗ holds between [↓P1] and [↓P1] ∩ [↓P2] (every person runs iff every person is a person who runs)

Armed with this we can define the type of (potential) witness sets for a quantifier relation q and a property P, q†(P), that is, witness sets in the sense of Barwise and Cooper, as in (86).

(86) a : q†(P) iff a ⊆ [↓P] and there is some set X such that q∗ holds between [↓P] and X
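A small sketch of how (83)–(86) fit together, with properties modelled directly by their P-extensions and quantifier relations q∗ by functions on sets; the particular threshold used for most, and the toy extensions, are illustrative assumptions, not part of the theory.

QSTAR = {
    "some":  lambda a, b: len(a & b) > 0,
    "every": lambda a, b: a <= b,
    "most":  lambda a, b: len(a & b) > len(a) / 2,   # illustrative threshold
}

def quantifier_witness(q, p1_ext, p2_ext):
    # Following (84): the witness of q(P1, P2) is [↓P1] ∩ [↓P2], provided q*
    # holds between the two extensions; conservativity (85) is what makes the
    # intersection enough.
    if QSTAR[q](p1_ext, p2_ext):
        return p1_ext & p2_ext
    return None

students = {"ana", "bo", "cy"}
left = {"ana", "bo", "dee"}
print(quantifier_witness("most", students, left))    # {'ana', 'bo'}
print(quantifier_witness("every", students, left))   # None: not every student left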
Using these tools we present a modified version of Ginzburg and Purver's proposed analysis of most students left in (87), where the 'q-params'-field specifies quantifier parameters and the 'cont'-field specifies the content of the utterance.

(87)  [ q-params : [ w : most†(student) ]
        cont     : [ cq = q-params.w : most(student, left) ] ]
13 This appears to go against the intuition that we have introduced before that
ptypes are types of situations. Ultimately we might wish to say that a witness
for a quantifier type is a situation containing such a witness set, but we will not
pursue this here.
In Cooper (2010) we presented the two analyses as in competition with each other, but we now think that there is advantage to be gained by putting the two together. Our way of combining the two analyses predicts two readings for the noun-phrase most students: a referential reading which makes the witness set be a q-parameter in Purver and Ginzburg's analysis, and a non-referential reading in which the witness set is incorporated in the content of the NP. These are given in (88).
(88)
a. referential

[ q-params : [ restri = student : Ppty
               wi     : most†(q-params.restri) ]
  cont = λP:Ppty ( [ scope = P : Ppty
                     cmost = ⇑q-params.wi : most(⇑q-params.restri, scope) ] ) : Quant ]

b. non-referential

[ q-params : Rec
  cont = λP:Ppty ( [ restri = student : Ppty
                     wi     : most†(restri)
                     scope  = P : Ppty
                     cmost  = wi : most(restri, scope) ] ) : Quant ]
Given these types, what can a clarification address? Our claim is that the clarification must address something for which there is a path in the record type. In addition there appears to be a syntactic constraint that clarifications tend to be a "major constituent", that is, a noun-phrase or a sentence, rather than a determiner or a noun. In a referential reading there are three paths available: 'q-params.restri', 'q-params.wi' and 'cont'. The first of these, the restriction, is dispreferred for syntactic reasons since it is normally expressed by a noun. This leaves the witness and the whole NP content as possible clarifications. However, from the data it appears that the whole content can be expressed focussing either on the restriction or the quantifier relation. For non-referential readings only the whole content path is available.

In (89) we give one example of each kind of clarification from the data that Purver and Ginzburg adduce.
(89)
a. Witness clarification

Unknown: And er they X-rayed me, and took a urine sample, took a blood sample. Er, the doctor
Unknown: Chorlton?
Unknown: Chorlton, mhm, he examined me, erm, he, he said now they were on about a slide ⟨unclear⟩ on my heart. Mhm, he couldn't find it.

BNC file KPY, sentences 1005–1008
b. Content clarification with restriction focus

Terry: Richard hit the ball on the car.
Nick: What car?
Terry: The car that was going past.

BNC file KR2, sentences 862–864

c. Content clarification with quantifier relation focus

Anon 2: Was it nice there?
Anon 1: Oh yes, lovely.
Anon 2: Mm.
Anon 1: It had twenty rooms in it.
Anon 2: Twenty rooms?
Anon 1: Yes.
Anon 2: How many people worked there?

BNC file K6U, sentences 1493–1499
Our conclusion is that a combination of the classical approach to generalized quantifiers with a modification of the approach suggested by Purver and Ginzburg, adding a field for the witness, provides correct predictions about clarifications. This means that the strong version of the Reprise Content Hypothesis is consistent with our analysis, albeit now with a more complex interpretation of the clarification request than Purver and Ginzburg proposed. The interpretation proposed here involves a combination of the classical approach to generalized quantifiers and the witness approach suggested by Purver and Ginzburg. The clarification itself, however, can address different parts of the content of the clarification request.
8 Grammar in dialogue

8.1 Non Sentential Utterances

The basic strategy adopted in KoS to analyze non sentential utterances (NSUs) is to specify construction types where the combinatorial operations integrate the (surface) denotata of the fragments with elements of the DGB. We have provided one example of this earlier in our lexical entry for 'hi', (54). Another simple example is given in (90), a lexical entry for the word 'yes'.
(90)  Sign ∧.
      [ s-event : [ phon : yes ]
        qmax    : PolQuestion
        synsem  : [ cat  = adv_ic : Cat
                    cont = qmax(rds) : Prop ] ]
Here qmax is a maximal element of dgb.qud which is of the type PolQuestion, exemplified in (43). Since qmax is of the type PolQuestion, it is a constant function whose domain is the class of all records and whose range is a proposition p. Hence the content of this function applied to any record is p. Thus, 'yes' gets as its content the proposition p, intuitively affirming the issue 'whether p' currently under discussion. See Fernández (2006); Ginzburg (2012) for a detailed account of this and a wide range of other more complex NSU types.
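The way 'yes' picks up its content from a QUD-maximal polar question can be pictured with a small sketch in which a polar question is literally a constant function from records to a proposition; the encoding of propositions as tuples is an illustrative assumption.

def polar_question(p):
    # A PolQuestion modelled as a constant function from records to p.
    return lambda record: p

qud = [polar_question(("leave", "bo"))]   # hypothetical QUD: whether Bo left
qmax = qud[0]

def content_of_yes(q):
    # 'yes' denotes the proposition obtained by applying the maximal polar
    # question to any record at all.
    return q({})

print(content_of_yes(qmax))   # ('leave', 'bo')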
8.2 Disfluencies

Disfluencies are ubiquitous and observable in all but the briefest conversational interaction. Disfluencies have been studied by researchers in Conversation Analysis (e.g., Schegloff et al. (1977)), in great detail by psycholinguists (e.g., Levelt (1983); Brennan & Schober (2001); Clark & Tree (2002)), and by computational linguists working on speech applications (e.g., Shriberg (1994)). To date, they have mostly been excluded from semantic analysis, primarily because they have been assumed to constitute low level 'noise', without semantic import. In fact, disfluencies participate in semantic and pragmatic processes such as anaphora, conversational implicature, and discourse particles, as illustrated in (91).

(91)
a. [ Peter was + { well } he was ] fired. (Example from Heeman & Allen (1999))
b. A: Because I, [ [ [ any, + anyone, ] + any friend, ] + anyone ] I give my number to is welcome to call me (Example from the Switchboard corpus, Godfrey et al. (1992)) (implicature: 'It's not just her friends that are welcome to call her when A gives them her number')
c. From yellow down to brown–NO–that's red. (Example from Levelt (1983))
In all three cases, the semantic process is dependent on the reparandum (the phrase to be repaired) as the antecedent.

Hesitations, another manifestation of disfluency, provide a particularly natural example of self-addressed queries, queries where the intended responder is the original querier:

(92)
a. Carol: Well it's (pause) it's (pause) er (pause) what's his name? Bernard Matthews' turkey roast. (BNC, KBJ)
b. Steve: They're pretty ... um, how can I describe the Finns? They're quite an unusual crowd actually.
Since they can occur at just about any location in a given utterance and their effect is local, disfluencies provide strong motivation for an incremental semantics, that is, a semantics calculated in a word-by-word, left to right fashion (see e.g. Steedman (1999); Kempson et al. (2000); and Kempson et al. (this volume)). Moreover, they require the content construction process to be non-monotonic, since initial decisions can be overridden as a result of self-repair.

Ginzburg et al. (2014b) sketch how, given an incremental dialogue semantics, accommodating disfluencies is a straightforward extension of the account discussed in section 6 for clarification interaction: the monitoring and update/clarification cycle is modified to happen at the end of each word utterance event, and in case of the need for repair, a repair question gets accommodated into QUD. Self-corrections are handled by a slight generalisation of the rule (66), which, just as with the rule QSPEC, underspecifies turn ownership. Hesitations are handled by a CCUR that triggers the accommodation of a question about the identity of the next utterance. Overt examples of such accommodation are given in (92).
9 Conclusions and future directions

In this paper we have presented a theory which encompasses both the analysis of dialogue structure and the traditional concerns of formal semantics. Our main claim is that the two should not be separated. We have used a rich type theory (TTR – type theory with records) in order to achieve this. The main advantage of TTR is that it presents a theory of types which are structured in a similar way to feature structures as employed in feature-based approaches to grammar, while at the same time being a type theory including a theory of functions corresponding to the λ-calculus which can be used for a highly intensional theory of semantic interpretation. This type theory can be used to formulate both compositional semantics and the theory of dialogue structure embodied by KoS (Ginzburg, 2012). Among other things we have shown how these tools can be used to create a theory of events (both non-linguistic and linguistic) and thereby create a theory of grammar grounded in the perception of speech events. We have shown how these tools enable us to give an account of the kind of abstract entities needed for semantic analysis, such as propositions and questions. We have also shown how the same tools can be used to give an account of dialogue gameboards and dialogic interaction.
We have exemplified this with respect to the variety of phenomena one needs to tackle in order to provide even a rudimentary analysis of an extract from an actual British National Corpus dialogue, example (1), which we presented at the beginning of the paper. While we cannot claim to have handled all the details of this example, we have nevertheless presented a theory which begins to provide some of the pieces of the puzzle. In particular: non sentential utterances are analyzed using a dialogue gameboard driven context, exemplified in sections 5 and 8.1. Disfluencies are handled using conversational rules of a form similar to Clarification Requests and, more generally, to general conversational rules. The possibility of answering one's own question is a consequence of factoring turn taking away from illocutionary specification, as in the conversational rule QSPEC. Misunderstanding is accommodated by (i) associating different dialogue gameboards with the conversational participants, and (ii) characterizing the grounding and clarification conditions of utterances using locutionary propositions (propositions constructed from utterance types/tokens). Multilogue involves scaling up two-person conversational rules to include communal grounding and acceptance, and multi-agent turn taking (see Ginzburg & Fernández (2005); Ginzburg (2012)).
Beyond the treatment of real conversational interaction, we have looked at a couple of traditional concerns of formal semantics from a dialogical perspective: negation and generalized quantification.

Some other areas which are currently being examined using these tools, but which we have not discussed in this article, are: quotation (Ginzburg & Cooper, 2014), where we argue for the use of utterance types and locutionary propositions as denotations for quotative constructions; the semantics of spatial descriptions and its relationship to robot perception and learning (Dobnik et al., 2011, 2012; Dobnik & Cooper, 2013); grounding semantics in terms of classifiers used for perception (Larsson, 2013); probabilistic semantics (Cooper et al., 2014); and language acquisition (Larsson & Cooper, 2009; Ginzburg & Moradlou, 2013).
References

Kempson, Ruth, et al. (this volume), Ellipsis.
Allen, James F., Lenhart K. Schubert, George Ferguson, Peter Heeman, Chung Hee Hwang, Tsuneaki Kato, Marc Light, Nathaniel G. Martin, Bradford W. Miller, Massimo Poesio, & David R. Traum (1995), The trains project: A case study in building a conversational planning agent, Journal of Experimental and Theoretical AI 7:7–48.
Artstein, Ron, Mark Core, David DeVault, Kallirroi Georgila, Elsi Kaiser, & Amanda Stent (eds.) (2011), SemDial 2011 (Los Angelogue): Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue.
Austin, John L. (1961), Truth, in James Urmson & Geoffrey J. Warnock (eds.), Philosophical Papers, Oxford University Press, paper originally published in 1950.
Barwise, Jon & Robin Cooper (1981), Generalized quantifiers and natural language, Linguistics and Philosophy 4(2):159–219.
Barwise, Jon & John Etchemendy (1987), The Liar, Oxford University Press, New York.
Barwise, Jon & John Perry (1983), Situations and Attitudes, Bradford Books, MIT Press, Cambridge, Mass.
Breitholtz, Ellen (2010), Clarification requests as enthymeme elicitors, in Aspects of Semantics and Pragmatics of Dialogue. SemDial 2010, 14th Workshop on the Semantics and Pragmatics of Dialogue.
Breitholtz, Ellen & Robin Cooper (2011), Enthymemes as rhetorical resources, in Artstein et al. (2011).
Brennan, Susan E. & Michael F. Schober (2001), How listeners compensate for disfluencies in spontaneous speech, Journal of Memory and Language 44:274–296.
Brown-Schmidt, S., C. Gunlogson, & M. K. Tanenhaus (2008), Addressees distinguish shared from private information when interpreting questions during interactive conversation, Cognition 107(3):1122–1134.
Carlson, Lauri (1983), Dialogue Games, Synthese Language Library, D. Reidel, Dordrecht.
Clark, Herb & Jean Fox Tree (2002), Using uh and um in spontaneous speech, Cognition 84:73–111.
Clark, Herbert (1996), Using Language, Cambridge University Press, Cambridge.
Clark, Herbert H. & Deanna Wilkes-Gibbs (1986), Referring as a collaborative process, Cognition 22(1):1–39.
Clark, H. H. & E. F. Schaefer (1989), Contributing to discourse, Cognitive Science 13(2):259–294.
Cooper, Robin (2005a), Austinian truth, attitudes and type theory, Research on Language and Computation 3:333–362.
Cooper, Robin (2005b), Austinian truth, attitudes and type theory, Research on Language and Computation 3(4):333–362.
Cooper, Robin (2005c), Records and record types in semantic theory, Journal of Logic and Computation 15(2):99–112.
Cooper, Robin (2010), Generalized quantifiers and clarification content, in Łupkowski & Purver (2010).
Cooper, Robin (2012), Type theory and semantics in flux, in Ruth Kempson, Nicholas Asher, & Tim Fernando (eds.), Handbook of the Philosophy of Science, Elsevier BV, volume 14: Philosophy of Linguistics, (271–323), general editors: Dov M. Gabbay, Paul Thagard and John Woods.
Cooper, Robin (2013), Clarification and Generalized Quantifiers, Dialogue and Discourse 4(1):1–25.
Cooper, Robin (in prep), Type theory and language: from perception to linguistic communication, draft of book chapters available from https://sites.google.com/site/typetheorywithrecords/drafts.
Cooper, Robin, Simon Dobnik, Shalom Lappin, & Staffan Larsson (2014), A probabilistic rich type theory for semantic interpretation, in Proceedings of the first EACL workshop on Natural Language Semantics and Type Theory, Gothenburg, (72–79).
Cooper, Robin & Jonathan Ginzburg (2011a), Negation in dialogue, in Artstein et al. (2011).
Cooper, Robin & Jonathan Ginzburg (2011b), Negative inquisitiveness and alternatives-based negation, in Proceedings of the Amsterdam Colloquium, 2011.
Coquand, Thierry, Randy Pollack, & Makoto Takeyama (2004), A logical framework with dependently typed records, Fundamenta Informaticae XX:1–22.
Dobnik, Simon & Robin Cooper (2013), Spatial descriptions in type theory with records, in Proceedings of IWCS 2013 Workshop on Computational Models of Spatial Language Interpretation and Generation (CoSLI-3), Association for Computational Linguistics, Potsdam, Germany, (1–6).
Dobnik, Simon, Robin Cooper, & Staffan Larsson (2012), Modelling language, action and perception in type theory with records, in Denys Duchier & Yannick Parmentier (eds.), Proceedings of the 7th International Workshop on Constraint Solving and Language Processing (CSLP'12), Laboratory for Fundamental Computer Science (LIFO), University of Orléans, Orléans, France, (51–62).
Dobnik, Simon, Staffan Larsson, & Robin Cooper (2011), Toward perceptually grounded formal semantics, in Workshop on Integrating Language and Vision on 16 December 2011 at NIPS 2011 (Neural Information Processing Systems).
Farkas, Donka & Floris Roelofsen (ms), Polarity particles in an inquisitive discourse model, Manuscript, University of California at Santa Cruz and ILLC, University of Amsterdam.
Fernández, Raquel (2006), Non-Sentential Utterances in Dialogue: Classification, Resolution and Use, Ph.D. thesis, King's College, London.
Fernando, Tim (2004), A finite-state approach to events in natural language semantics, Journal of Logic and Computation 14(1):79–92.
Fernando, Tim (2006), Situations as strings, Electronic Notes in Theoretical Computer Science 165:23–36.
Fernando, Tim (2008), Finite-state descriptions for temporal semantics, in Harry Bunt & Reinhart Muskens (eds.), Computing Meaning, Volume 3, Springer, volume 83 of Studies in Linguistics and Philosophy, (347–368).
Fernando, Tim (2009), Situations in LTL as strings, Information and Computation 207(10):980–999, ISSN 0890-5401, doi:10.1016/j.ic.2008.11.003.
Fillmore, Charles J. (1985), Frames and the semantics of understanding, Quaderni di Semantica 6(2):222–254.
Gibson, James J. (1986), The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates.
Ginzburg, Jonathan (1995), Resolving questions, I, Linguistics and Philosophy 18:459–527.
Ginzburg, Jonathan (1997), On some semantic consequences of turn taking, in P. Dekker, M. Stokhof, & Y. Venema (eds.), Proceedings of the 11th Amsterdam Colloquium on Formal Semantics and Logic, ILLC, Amsterdam, (145–150).
Ginzburg, Jonathan (2011), Situation semantics and the ontology of natural language, in Klaus von Heusinger, Claudia Maienborn, & Paul Portner (eds.), The Handbook of Semantics, Walter de Gruyter.
Ginzburg, Jonathan (2012), The Interactive Stance: Meaning for Conversation, Oxford University Press, Oxford.
Ginzburg, Jonathan & Robin Cooper (2014), Quotation via dialogical interaction, Journal of Logic, Language, and Information 23(3):287–311.
Ginzburg, Jonathan, Robin Cooper, & Tim Fernando (2014a), Propositions, questions, and adjectives: a rich type theoretic approach, in Proceedings of the first EACL workshop on Natural Language Semantics and Type Theory, Gothenburg.
Ginzburg, Jonathan & Raquel Fernández (2005), Scaling up to multilogue: some benchmarks and principles, in Proceedings of the 43rd Meeting of the Association for Computational Linguistics, Michigan, (231–238).
Ginzburg, Jonathan, Raquel Fernández, & David Schlangen (2014b), Disfluencies as intra-utterance dialogue moves, Semantics and Pragmatics 7(9):1–64.
Ginzburg, Jonathan & Sara Moradlou (2013), The earliest utterances in dialogue: towards a formal theory of parent/child talk in interaction, in Raquel Fernández & Amy Isard (eds.), Proceedings of SemDial 2013 (DialDam), University of Amsterdam.
Ginzburg, Jonathan & Matt Purver (2012), Quantification, the reprise content hypothesis, and type theory, in Staffan Larsson & Lars Borin (eds.), From Quantification to Conversation: Festschrift for Robin Cooper on the occasion of his 65th birthday, College Publications, volume 19 of Tributes.
Ginzburg, Jonathan & Ivan A. Sag (2000), Interrogative Investigations: the form, meaning and use of English Interrogatives, number 123 in CSLI Lecture Notes, CSLI Publications, Stanford, California.
Godfrey, John J., E. C. Holliman, & J. McDaniel (1992), Switchboard: Telephone speech corpus for research and development, in Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, San Francisco, USA, (517–520).
Groenendijk, Jeroen & Martin Stokhof (1997), Questions, in Johan van Benthem & Alice ter Meulen (eds.), Handbook of Logic and Linguistics, North Holland, Amsterdam.
Hamblin, C. L. (1973), Questions in Montague English, in Barbara Partee (ed.), Montague Grammar, Academic Press, New York.
Healey, P. G. T., M. Purver, J. King, J. Ginzburg, & G. Mills (2003), Experimenting with clarification in dialogue, in R. Alterman & D. Kirsh (eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society, Mahwah, N.J.: LEA, (539–544).
Heeman, Peter A. & James F. Allen (1999), Speech repairs, intonational phrases and discourse markers: Modeling speakers' utterances in spoken dialogue, Computational Linguistics 25(4):527–571.
Hoepelmann, Jacob (1983), On questions, in Ferenc Kiefer (ed.), Questions and Answers, Reidel.
Hopcroft, John E. & Jeffrey D. Ullman (1979), Introduction to Automata Theory, Languages and Computation, Addison-Wesley Publishing, Reading, Massachusetts.
Kempson, Ruth, Wilfried Meyer-Viol, & Dov Gabbay (2000), Dynamic Syntax: The Flow of Language Understanding, Blackwell, Oxford.
Larsson, Staffan (2013), Formal semantics for perceptual classification, Journal of Logic and Computation, doi:10.1093/logcom/ext059.
Larsson, Staffan & Robin Cooper (2009), Towards a formal view of corrective feedback, in Proceedings of the EACL 2009 Workshop on Cognitive Aspects of Computational Language Acquisition, Athens.
Levelt, Willem J. (1983), Monitoring and self-repair in speech, Cognition 14(4):41–104.
Luo, Zhaohui (2011), Contextual Analysis of Word Meanings in Type-Theoretical Semantics, in Sylvain Pogodalla & Jean-Philippe Prost (eds.), Logical Aspects of Computational Linguistics: 6th International Conference, LACL 2011, Springer, number 6736 in Lecture Notes in Artificial Intelligence, (159–174).
Łupkowski, Paweł & Jonathan Ginzburg (2014), Question answers, Computational Linguistics (under review).
Łupkowski, Paweł & Matthew Purver (eds.) (2010), Aspects of Semantics and Pragmatics of Dialogue. SemDial 2010, 14th Workshop on the Semantics and Pragmatics of Dialogue, Polish Society for Cognitive Science, Poznań.
Martin-Löf, Per (1984), Intuitionistic Type Theory, Bibliopolis, Naples.
Michaelis, Laura A. (2009), Sign-based construction grammar, in The Oxford Handbook of Linguistic Analysis, Oxford University Press.
Montague, Richard (1973), The Proper Treatment of Quantification in Ordinary English, in Jaakko Hintikka, Julius Moravcsik, & Patrick Suppes (eds.), Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, D. Reidel Publishing Company, Dordrecht, (247–270).
Montague, Richard (1974), Formal Philosophy: Selected Papers of Richard Montague, Yale University Press, New Haven, ed. and with an introduction by Richmond H. Thomason.
Partee, B. H., A. G. B. ter Meulen, & R. E. Wall (1990), Mathematical Methods in Linguistics, Springer.
Peters, Stanley & Dag Westerståhl (2006), Quantifiers in Language and Logics, Oxford University Press.
Purver, M. (2006), Clarie: Handling clarification requests in a dialogue system, Research on Language & Computation 4(2):259–288.
Purver, Matt & Jonathan Ginzburg (2004), Clarifying noun phrase semantics, Journal of Semantics 21(3):283–339.
Purver, Matthew, Jonathan Ginzburg, & Patrick Healey (2006), Lexical categories and clarificational potential, revised version under review.
Purver, Matthew, Eleni Gregoromichelaki, Wilfried Meyer-Viol, & Ronnie Cann (2010), Splitting the Is and Crossing the Yous: Context, Speech Acts and Grammar, in Łupkowski & Purver (2010), (43–50).
Purver, Matthew, Patrick G. T. Healey, James King, Jonathan Ginzburg, & Greg J. Mills (2003), Answering clarification questions, in Proceedings of the 4th SIGdial Workshop on Discourse and Dialogue, ACL, Sapporo.
Ranta, Aarne (this volume), Intuitionistic type theory and dependent types.
Sag, Ivan A., Thomas Wasow, & Emily M. Bender (2003), Syntactic Theory: A Formal Introduction, CSLI Publications, Stanford, 2nd edition.
Schegloff, Emanuel (2007), Sequence Organization in Interaction, Cambridge University Press, Cambridge.
Schegloff, Emanuel, Gail Jefferson, & Harvey Sacks (1977), The preference for self-correction in the organization of repair in conversation, Language 53:361–382.
Searle, John R. (1969), Speech Acts: an Essay in the Philosophy of Language, Cambridge University Press.
Shieber, Stuart (1986), An Introduction to Unification-Based Approaches to Grammar, CSLI Publications, Stanford.
Shriberg, Elizabeth E. (1994), Preliminaries to a theory of speech disfluencies, Ph.D. thesis, University of California at Berkeley, Berkeley, USA.
Steedman, Mark (1999), The Syntactic Process, Linguistic Inquiry Monographs, MIT Press, Cambridge.
Wiśniewski, Andrzej (2001), Questions and inferences, Logique et Analyse 173:5–43.
Wiśniewski, Andrzej (2003), Erotetic search scenarios, Synthese 134:389–427.
Wiśniewski, Andrzej (this volume), The semantics of questions.