The Logical Handling of Threats, Rewards, Tips, and Warnings.
ABSTRACT: Previous logic-based handling of arguments has mainly focused on explanation or justification in the presence of inconsistency. As a consequence, only one type of argument has been considered, namely the explanatory type, and several argumentation frameworks have been proposed for generating and evaluating explanatory arguments. However, recent investigations of argument-based negotiation have emphasized other types of arguments, such as threats, rewards, tips, and warnings. In parallel, cognitive psychologists have recently begun studying the characteristics of these different types of arguments and the conditions under which they have their desired effect. Bringing together these two lines of research, this article presents logical definitions of each type of argument, together with criteria for evaluating them. Empirical findings from cognitive psychology validate these formal results.
ABSTRACT: People often use conditional statements to describe configurations of agents, actions, and valued consequences. In this paper we propose the existence of utility templates, a special subset of these configurations that exert strong constraints on how people interpret conditionals. We conducted an initial completion survey which identified four potential utility templates. Four experiments then examined characteristic effects of these templates: when a described novel situation is close enough to a pre-existing template, people interpret ambiguous information associated with that situation, or reinterpret current information, in such a way that their understanding of the novel situation fits the template. A process explanation of these effects is considered which allows for the principled generation of other templates, and offers a possible reformulation of the findings within the framework of relevance theory.
Journal of Memory and Language, 05/2013; 68(4):350–361. DOI: 10.1016/j.jml.2013.01.002
ABSTRACT: People can reason about the preferences of other agents, and predict their behavior based on these preferences. Surprisingly, the psychology of reasoning has long neglected this fact, focusing instead on disinterested inferences, of which preferences are neither an input nor an output. This exclusive focus is untenable, though, as there is mounting evidence that reasoners take into account the preferences of others, at the expense of logic, when logic and preferences point to different conclusions. This article summarizes the most recent account of how reasoners predict the behavior and attitudes of other agents based on conditional rules describing actions and their consequences, and reports new experimental data about which assumptions reasoners retract when their predictions based on preferences turn out to be false.
Synthese, 01/2012; 185(S1). DOI: 10.1007/s11229-011-9957-x
ABSTRACT: Research on reasoning about consequential arguments has been an active but piecemeal enterprise. Previous research considered in depth some subclasses of consequential arguments, but further understanding of consequential arguments requires that we address their greater variety, avoiding the risk of over-generalisation from specific examples. Ideally we ought to be able to systematically generate the set of consequential arguments, and then engage in random sampling of stimuli within that set. The current article aims at making steps in that direction, using the theory of utility conditionals as a way to generate a large set of consequential arguments, and offering one study illustrating how the theory can be used for the random sampling of stimuli. It is expected that further use of this method will bring more diversity to experimental research on consequential arguments, and more robustness to models of argumentation from consequences.
Thinking and Reasoning, 08/2012; 18(3):379–393. DOI: 10.1080/13546783.2012.670751