Article · Publisher preview available

Message Exchange Games in Strategic Contexts

Authors: Nicholas Asher, Soumya Paul, Antoine Venant

Abstract

When two people engage in a conversation, knowingly or unknowingly, they are playing a game. Players of such games have diverse objectives, or winning conditions: an applicant trying to convince her potential employer of her eligibility over that of a competitor, a prosecutor trying to convict a defendant, a politician trying to convince an electorate in a political debate, and so on. We argue that infinitary games offer a natural model for many structural characteristics of such conversations. We call such games message exchange games, and we compare them to existing game theoretic frameworks used in linguistics—for example, signaling games—and show that message exchange games are needed to handle non-cooperative conversation. In this paper, we concentrate on conversational games where players’ interests are opposed. We provide a taxonomy of conversations based on their winning conditions, and we investigate some essential features of winning conditions like consistency and what we call rhetorical cooperativity. We show that these features make our games decomposition sensitive, a property we define formally in the paper. We show that this property has far-reaching implications for the existence of winning strategies and their complexity. There is a class of winning conditions (decomposition invariant winning conditions) for which message exchange games are equivalent to Banach-Mazur games, which have been extensively studied and enjoy nice topological results. But decomposition sensitive goals are much more the norm and much more interesting linguistically and philosophically.
J Philos Logic (2017) 46:355–404
DOI 10.1007/s10992-016-9402-1
Message Exchange Games in Strategic Contexts
Nicholas Asher¹ · Soumya Paul² · Antoine Venant²
Received: 26 August 2014 / Accepted: 26 May 2016 / Published online: 20 June 2016
© Springer Science+Business Media Dordrecht 2016
Keywords: Philosophy of language · Dialogue · Game theory · Strategic reasoning
Nicholas Asher: nicholas.asher@irit.fr
¹ CNRS, IRIT, Toulouse, France
² Université de Toulouse 3, IRIT, Toulouse, France
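The abstract models a conversation as an infinitary game in which two players jointly build an unbounded sequence of discourse moves and a winning condition is a set of such sequences. The following is a minimal Python sketch of that picture only; the move names, strategies and the bounded "winning condition" check are hypothetical simplifications, not the authors' formalism.

# Minimal sketch (hypothetical names; finite approximation of an infinitary game).
def play(strategy_1, strategy_2, rounds=6):
    """Alternate the two players' contributions to build a conversation prefix."""
    history = []
    for i in range(rounds):
        contribute = strategy_1 if i % 2 == 0 else strategy_2
        history.extend(contribute(tuple(history)))
    return tuple(history)

def wins_1(conversation):
    """Toy winning condition for player 1: every 'attack' is eventually met by a 'rebut'."""
    pending = 0
    for move in conversation:
        if move == "attack":
            pending += 1
        elif move == "rebut" and pending:
            pending -= 1
    return pending == 0

def player_1(history):
    """Assert a claim, and rebut whenever the last move was an attack."""
    return ["rebut"] if history and history[-1] == "attack" else ["assert"]

def player_2(history):
    """Always attack the opponent's position."""
    return ["attack"]

if __name__ == "__main__":
    conversation = play(player_1, player_2)
    print(conversation)          # ('assert', 'attack', 'rebut', 'attack', 'rebut', 'attack')
    print(wins_1(conversation))  # False: the final attack is still unanswered on this finite prefix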
... Now awareness of the need to give up the uniformity assumption goes back at least to major figures such as Wittgenstein, who introduced the notion of a language game (Wittgenstein, 1953), and Bakhtin, who independently argued for relativizing linguistic activity to speech genres (Bakhtin, 1986). Neither Wittgenstein nor Bakhtin provided an explication of their notions, though subsequently various explications and technical terms have been introduced by a variety of scholars in Sociolinguistics (Hymes, 1972), in Pragmatics (Levinson, 1979; Allwood, 2000), in AI (Schank & Abelson, 1977), in dialogical semantics (Larsson, 2002; Ginzburg, 2012), and in game-theoretic pragmatics (Asher et al., 2017; Burnett, 2017). No one technical term has become established; we will mostly use the term conversational type, though this term, like the others, has its drawbacks. ...
... However, this work has not applied game theory to take a global perspective on conversations. One notable exception is work by Asher et al. (2017). The starting point of Asher et al. (2017) is the framework of Banach-Mazur games (Revalski, 2003). They modify this framework to make it more amenable to analyzing certain kinds of natural language dialogue, the emergent framework being Message Exchange games. ...
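Since several of the excerpts here refer to Banach-Mazur games, a standard textbook formulation in the string setting may help (a sketch; the notation is illustrative and not taken from the paper):

\begin{aligned}
& u_1, v_1, u_2, v_2, \ldots \in V^{+} && \text{(players 1 and 2 alternate, each appending a nonempty finite block over the vocabulary } V\text{)}\\
& x = u_1 v_1 u_2 v_2 \cdots \in V^{\omega} && \text{(the resulting infinite word, here: the whole conversation)}\\
& \text{Player 2 wins the play iff } x \in \mathrm{Win}, && \text{for a fixed winning set } \mathrm{Win} \subseteq V^{\omega};\\
& \text{Player 2 has a winning strategy iff } \mathrm{Win} \text{ is comeager} && \text{(Banach--Mazur--Oxtoby theorem).}
\end{aligned}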
Article
Full-text available
One of the success stories of formal semantics is explicating responsive moves like answers to questions. There is, however, a significant lacuna concerning the characterization of initiating utterances, which are strongly tied to the conversational activity [language game (Wittgenstein), speech genre (Bakhtin)], or—our terminology—conversational type, one is engaged in. To date there has been no systematic proposal trying to account for the range of possible language games/speech genres/conversational types and their global structure, in particular concerning the range of subject matter that can and needs to be discussed and by whom—ultimately a semantic analogue of Laplace’s demon. We suggest that the subject matter problem for conversational types is a central task for any semantic theory for conversation. This paper develops a theory of conversational types, which, embedded in the theory of conversational interaction KoS, enables this problem to be tackled for a wide range of conversational types drawn from the British National Corpus classification of conversational domains. The theory we develop treats conversational types as first-class, not metatheoretical, entities, in contrast to explications of corresponding notions in game-theoretic approaches. We demonstrate that this allows us to explicate the possibilities interlocutors have to refer to and seek clarification about the types of conversations they are engaged in.
... This body of work does not address, however, the issue of strategic choice in conversation, which is the core issue underlying work in Game Theory. Asher et al. (2017) used game-theoretic tools to develop a theory of strategic choice for dialogue. Although there are a variety of interesting insights captured in this approach, it is based on two assumptions that apply only to a restricted class of language games: games continue indefinitely and there exists a jury that assigns winning conditions to participants. ...
... The starting point of Asher et al. (2017) is the framework of Banach-Mazur games. They modify this framework to make it more amenable to analyzing certain kinds of NL dialogue, the emergent framework being BM messaging games. Asher et al. (2017) argued that each dialogue potentially continues indefinitely and has a winner adjudged by a third-party jury. This is useful for modelling political discourse between rival groups or individual contenders in the public domain. ...
Preprint
Full-text available
In this paper, we show that investigating the interaction of conversational type (often known as language game or speech genre) with the character types of the interlocutors is worthwhile. We present a method of calculating the decision-making process for selecting dialogue moves that combines character type and conversational type. We also present a mathematical model that illustrates these factors' interactions in a quantitative way.
... Inspired by this earlier literature, Asher, Paul, and Venant (2017) provide a model of language in terms of a space of finite and infinite strings. Many of these strings are just non-meaningful sequences of words, but the space also includes coherent and consistent strings that form meaningful texts and conversations. Building on formal theories of textual and conversational meaning (Kamp and Reyle 1993; Asher 1993; De Groote 2006; Asher and Pogodalla 2010), Asher, Paul, and Venant (2017) use this subset of coherent and consistent texts and conversations to define the semantics and strategic consequences of conversational moves in terms of possible continuations. ...
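The continuation idea in this excerpt can be made concrete with a tiny Python sketch, assuming a hypothetical toy corpus of coherent texts; the names and the "strategic entailment" check are illustrative, not the cited papers' definitions.

# Toy continuation semantics over strings (hypothetical names and corpus).
COHERENT_TEXTS = {
    "greet ask answer thank",
    "greet ask answer challenge concede",
    "greet ask deflect ask answer thank",
}

def continuations(prefix):
    """The 'meaning' of a finite discourse prefix: the coherent texts extending it."""
    return {t for t in COHERENT_TEXTS if t.startswith(prefix)}

def entails_strategically(prefix_a, prefix_b):
    """prefix_a commits to no more than prefix_b if every coherent continuation
    of prefix_a is also a continuation of prefix_b."""
    return continuations(prefix_a) <= continuations(prefix_b)

if __name__ == "__main__":
    print(continuations("greet ask answer"))                             # two coherent completions
    print(entails_strategically("greet ask answer thank", "greet ask"))  # True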
Article
Full-text available
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this article, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model’s inferences in question answering. We then test this notion by observing a model’s behavior on answering questions about a story after performing two novel semantic interventions—deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions for a significant number of cases (∼ 50% for deletion intervention, and ∼ 20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that can mitigate the undesirable effects for deletion intervention by a significant margin (from ∼ 50% to ∼ 6%). We analyze the inner-workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. But we show that this training does not attenuate other aspects of semantic unfaithfulness such as the models’ inability to deal with negation intervention or to capture the predicate–argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate–argument structure. While InstructGPT models do achieve very high performance on predicate–argument structure task, they fail to respond adequately to our deletion and negation interventions.
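The abstract above describes two semantic interventions operationally. As a rough illustration only, here is a minimal Python sketch with hypothetical helper functions and a deliberately naive negation rule; it is not the authors' code or data.

# Hypothetical sketch of the two interventions described above.
def deletion_intervention(story, answer_sentence):
    """Remove the sentence that supports the gold answer; a semantically faithful QA
    model should no longer answer the question from the remaining text."""
    return [s for i, s in enumerate(story) if i != answer_sentence]

def negation_intervention(story, target_sentence):
    """Negate the supporting sentence with a naive rule: insert 'not' after a copular
    verb. The gold answer should then flip or become unsupported."""
    verbs = {"is", "was", "are", "were"}
    out = list(story)
    words = out[target_sentence].split()
    for i, w in enumerate(words):
        if w in verbs:
            words.insert(i + 1, "not")
            break
    out[target_sentence] = " ".join(words)
    return out

if __name__ == "__main__":
    story = ["The milk is in the fridge.", "The ball is in the garden."]
    print(deletion_intervention(story, 0))   # drops the sentence answering "Where is the milk?"
    print(negation_intervention(story, 1))   # "The ball is not in the garden."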
... Our idea is to apply truth conditional semantics to LLMs by representing models themselves as strings. Semanticists have used strings and continuation semantics (Reynolds, 1974), in which the meaning of a string s is defined in terms of its possible continuations (the set of longer strings S that contain s), to investigate the meaning and strategic consequences of conversational moves (Asher et al., 2017), temporal expressions (Fernando, 2004), generalized quantifiers (Graf, 2019), and the "dynamic" formal semantics of Kamp and Reyle (1993), Asher (1993), De Groote (2006), and Asher and Pogodalla (2011). In our case, we will use strings to define models A_s. ...
... So B is not effectively learnable. ✷ Asher et al. (2017) and Asher and Paul (2018) examine concepts of discourse consistency and textual and conversational coherence, which true, human-like conversational capacity requires. Using continuations in a game-theoretic setting, they show that those concepts determine more complex Π⁰₂ sets in the Borel Hierarchy, and that intuitive measures of conversational success, such as the fact that one player has more successful unrefuted attacks on an opponent's position than vice versa, determine Π⁰₃ sets. ...
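For orientation on the classes named in this excerpt, these are the standard quantifier shapes over the space V^ω of infinite conversations (a textbook-style schema, not the cited papers' exact statements; P and Q are assumed decidable prefix properties):

\begin{aligned}
\Sigma^0_1 &:\; W = \{\, x \in V^{\omega} : \exists n\; P(x \restriction n) \,\} && \text{(open: membership witnessed by a finite prefix)}\\
\Pi^0_2 &:\; W = \{\, x \in V^{\omega} : \forall m\, \exists n \ge m\; P(x \restriction n) \,\} && \text{(canonical example: ``}P\text{ holds infinitely often'')}\\
\Pi^0_3 &:\; W = \{\, x \in V^{\omega} : \forall k\, \exists m\, \forall n \ge m\; Q(k, x \restriction n) \,\} && \text{(one further quantifier alternation, schematically)}
\end{aligned}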
Preprint
Full-text available
With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks. The list of LLM successes is long and varied. Nevertheless, several recent papers provide empirical evidence that LLMs fail to capture important aspects of linguistic meaning. Focusing on universal quantification, we provide a theoretical foundation for these empirical findings by proving that LLMs cannot learn certain fundamental semantic properties including semantic entailment and consistency as they are defined in formal semantics. More generally, we show that LLMs are unable to learn concepts beyond the first level of the Borel Hierarchy, which imposes severe limits on the ability of LMs, both large and small, to capture many aspects of linguistic meaning. This means that LLMs will continue to operate without formal guarantees on tasks that require entailments and deep linguistic understanding.
... The Ginzburg et al. (2022) paper, mentioned above, also uses QUDs as a basis for analyzing the relevance of responses; building on Ginzburg (2012), the authors provide a formal analysis of both cooperative and uncooperative discourse or question answering. The QUD framework, and not other pragmatic models that may entail deception such as Asher et al. (2017) and Asher and Paul (2018), was chosen as a basis for computational bullshit detection for two main reasons: Firstly, the definition by Stokke and Fallis is a good starting point and is already based on QUDs. Secondly, in dealing with QUDs similarly to overt questions, we may leverage the large amount of NLP research in question-related fields. ...
... The advantage of relying on QUDs as an established pragmatic model of communication is the model's relationship with (overt) questions. Linguistically, bullshit detection can also be based on other frameworks, which may be more apt to model uncooperative dialog (such as Asher et al., 2017; Asher and Paul, 2018). However, if it is possible to identify underlying questions and their answers in bullshitting texts, we may benefit from such NLP tasks as question answering or answer quality estimation. ...
Article
Fact checking and fake news detection has garnered increasing interest within the natural language processing (NLP) community in recent years, yet other aspects of misinformation remain unexplored. One such phenomenon is `bullshit', which different disciplines have tried to define since it first entered academic discussion nearly four decades ago. Fact checking bullshitters is useless, because factual reality typically plays no part in their assertions: Where liars deceive about content, bullshitters deceive about their goals. Bullshitting is misleading about language itself, which necessitates identifying the points at which pragmatic conventions are broken with deceptive intent. This paper aims to introduce bullshitology into the field of NLP by tying it to questions in a QUD-based definition, providing two approaches to bullshit annotation, and finally outlining which combinations of NLP methods will be helpful to classify which kinds of linguistic bullshit.
... Though we disagree about other details of conversational dynamics, examples of this general approach include Asher et al. (2017), Asher and Paul (2018), and D'Ambrosio (n.d.). ...
... Because a full understanding of discourse goals usually requires modelling extended discourses, and goals can be ranked not only by their final outcomes but by the different paths that the conversation can take to achieve these outcomes, recent work in this area models discourse goals as sets of full discourse structures: the structures in which the conversation "goes well" for a particular conversationalist. Asher et al. (2017) model a conversational goal as a subset of all possible conversations or discourse structures in the sequential game space of all possible discourse moves. The goal of making a conversation coherent, for example, is modelled as the set of all coherent discourse structures or, alternatively, as the set of all conversations or strings of discourse moves that generate such structures. ...
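A minimal sketch of that "goal as a set of conversations" picture, where the move names, the toy coherence and goal predicates, and the bounded-horizon check are hypothetical simplifications rather than the papers' definitions:

# Toy goal checking over strings of discourse moves (hypothetical names throughout).
from itertools import product

MOVES = ["assert", "attack", "defend", "concede"]

def coherent(play):
    """Toy coherence: an opening move must be an assertion; later moves are free."""
    return all(m == "assert" or i > 0 for i, m in enumerate(play))

def in_goal(play):
    """Toy goal for player 1: coherent, and every 'attack' is later matched by a 'defend'."""
    if not coherent(play):
        return False
    open_attacks = 0
    for m in play:
        if m == "attack":
            open_attacks += 1
        elif m == "defend" and open_attacks > 0:
            open_attacks -= 1
    return open_attacks == 0

def extendable_into_goal(prefix, horizon=4):
    """Can this prefix still be continued into the goal set within `horizon` more moves?"""
    return any(in_goal(prefix + ext)
               for k in range(horizon + 1)
               for ext in product(MOVES, repeat=k))

if __name__ == "__main__":
    print(in_goal(("assert", "attack", "defend")))      # True
    print(extendable_into_goal(("assert", "attack")))   # True: a later 'defend' repairs it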
Chapter
Linguistics and philosophy, while being two closely-related fields, are often approached with very different methodologies and frameworks. Bringing together a team of interdisciplinary scholars, this pioneering book provides examples of how conversations between the two disciplines can lead to exciting developments in both fields, from both a historical and a current perspective. It identifies a number of key phenomena at the cutting edge of research within both fields, such as reporting and ascribing, describing and referring, narrating and structuring, locating in time and space, typologizing and ontologizing, determining and questioning, arguing and rejecting, and implying and (pre-)supposing. Each chapter takes on a phenomenon and explores it through a set of questions which are posed and answered at the outset of each chapter. An accessible and engaging resource, it is essential reading for researchers and students in both disciplines, and will empower exciting and illuminating conversations for years to come.
Article
Full-text available
While a semantics without differing “points of view” of different agents is a good first hypothesis for the analysis of the content of monologue, dialogues typically involve differing points of view from different agents. In particular, one agent may not agree with what another agent asserts, or may have a different interpretation of an utterance from that of its author. An adequate semantics for dialogue should proceed by attributing to different dialogue agents separate views of the contents of their conversation. We model this, following others, by assigning each agent her own commitment slate. In this paper we bring out a complication with this approach that has so far gone unnoticed in formal semantics and the prior work we just mentioned, though it is well known from epistemic game theory: commitment slates interact; agents typically commit to the fact that other agents make certain commitments. We thus formulate the semantics of dialogue moves and conversational goals in terms of nested, public commitments. We develop two semantics for nested commitments, one for a simple propositional language, the other for a full description language for the discourse structure of dialogues; and we show how one is an approximation of the other. We apply this formal setting to provide a unified account of different linguistic problems: the problem of ambiguity and the problem of acknowledgments and grounding. We also briefly discuss the problem of corrections and how to integrate them in our framework.
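As a rough data-structure sketch only (hypothetical names and update rule, not the paper's formal semantics), per-agent commitment slates with nested public commitments might look like this:

# Toy commitment slates with nested commitments (illustrative, hypothetical names).
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    content: str                      # e.g. "the meeting is at noon"

@dataclass(frozen=True)
class Commits:
    agent: str                        # the agent said to be committed
    formula: "Atom | Commits"         # possibly itself a commitment formula (nesting)

def assert_move(slates, speaker, hearer, p):
    """Toy update for an assertion: the speaker commits to p, and both agents
    publicly commit to the fact that the speaker is committed to p."""
    slates[speaker].add(p)
    slates[speaker].add(Commits(speaker, p))
    slates[hearer].add(Commits(speaker, p))   # the hearer records the speaker's commitment
                                              # without committing to p itself

if __name__ == "__main__":
    slates = {"A": set(), "B": set()}
    assert_move(slates, "A", "B", Atom("the meeting is at noon"))
    print(Commits("A", Atom("the meeting is at noon")) in slates["B"])   # True
    print(Atom("the meeting is at noon") in slates["B"])                 # False: B has not grounded it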
Article
This book is an introduction to finite model theory which stresses the computer science origins of the area. In addition to presenting the main techniques for analyzing logics over finite models, the book deals extensively with applications in databases, complexity theory, and formal languages, as well as other branches of computer science. It covers Ehrenfeucht-Fraïssé games, locality-based techniques, complexity analysis of logics, including the basics of descriptive complexity, second-order logic and its fragments, connections with finite automata, fixed point logics, finite variable logics, zero-one laws, and embedded finite models, and gives a brief tour of recently discovered applications of finite model theory. This book can be used both as an introduction to the subject, suitable for a one- or two-semester graduate course, or as reference for researchers who apply techniques from logic in computer science.
Chapter
We view a debate as a mechanism by which an uninformed decision maker (the listener) extracts information from two informed debaters, who hold contradicting positions regarding the right decision. During the debate, the debaters raise arguments and, based on these arguments, the listener reaches a conclusion. Using a simple example, we investigate the mechanism design problem of constructing rules of debate that maximize the probability that the listener reaches the right conclusion, subject to constraints on the form and length of the debate. It is shown that optimal debate rules have the property that the conclusion drawn by the listener is not necessarily the same as the conclusion he would have drawn, had he interpreted the information revealed to him or her during the debate literally. The optimal design of debate rules requires that the information elicited from a counterargument depends on the argument it counterargues. We also link our discussion with the pragmatics literature.
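To make the mechanism design problem concrete, here is a deliberately tiny brute-force toy in Python; it is not the chapter's actual example or rules. Assumptions: three verifiable aspects each independently favor debater A or B, the listener wants to follow the majority, hears one argument from A and one counterargument from B, and commits to a decision rule in advance; both debaters then argue optimally against that rule.

# Toy brute-force search over debate rules (illustrative; enumerates 2**16 rules,
# which takes a few seconds).
from itertools import product

ASPECTS = (0, 1, 2)
ARGS = ASPECTS + (None,)                  # name an aspect, or pass
STATES = list(product("AB", repeat=3))    # which debater each aspect truly favors

def majority(state):
    return "A" if state.count("A") >= 2 else "B"

def outcome(rule, state):
    """Both debaters argue optimally against the committed rule; only aspects that
    truly favor a debater can be cited (arguments are verifiable)."""
    a_options = [i for i in ASPECTS if state[i] == "A"] + [None]
    b_options = [i for i in ASPECTS if state[i] == "B"] + [None]
    for a in a_options:                   # A keeps any argument that survives B's best reply
        if all(rule[(a, b)] == "A" for b in b_options):
            return "A"
    return "B"

def accuracy(rule):
    return sum(outcome(rule, s) == majority(s) for s in STATES) / len(STATES)

pairs = list(product(ARGS, repeat=2))     # 16 possible (argument, counterargument) pairs
best_rule = max((dict(zip(pairs, verdicts)) for verdicts in product("AB", repeat=len(pairs))),
                key=accuracy)
print("best achievable accuracy:", accuracy(best_rule))   # 1.0 in this tiny setting

In this toy, the rules that reach full accuracy interpret B's counterargument relative to which aspect A raised, whereas a listener who merely tallies one literally revealed aspect per side cannot always follow the majority; this echoes the chapter's point that optimal debate rules need not interpret arguments literally.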