Conference Paper

The Logical Handling of Threats, Rewards, Tips, and Warnings.

DOI: 10.1007/978-3-540-75256-1_23
Conference: Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 9th European Conference, ECSQARU 2007, Hammamet, Tunisia, October 31 - November 2, 2007, Proceedings
Source: DBLP

ABSTRACT: Previous logic-based handling of arguments has mainly focused on explanation or justification in the presence of inconsistency. As a consequence, only one type of argument has been considered, namely the explanatory type, and several argumentation frameworks have been proposed for generating and evaluating explanatory arguments. However, recent investigations of argument-based negotiation have emphasized other types of arguments, such as threats, rewards, tips, and warnings. In parallel, cognitive psychologists have recently started studying the characteristics of these different types of arguments and the conditions under which they have their desired effect. Bringing together these two lines of research, we present in this article logical definitions of each type of argument, as well as criteria for evaluating them. Empirical findings from cognitive psychology validate these formal results.
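As an informal aid, the four argument types can be rendered schematically; this is a sketch of the distinctions usually drawn in the argument-based negotiation literature, not the exact formalization given in the paper. Here s is the speaker, h the hearer, a the action being requested or discouraged, b its announced consequence, and g_h a goal of h:

\[
\begin{array}{ll}
\textbf{Threat:}  & \lnot a \rightarrow \mathit{does}_s(b), \quad b \rightarrow \lnot g_h \\
\textbf{Reward:}  & a \rightarrow \mathit{does}_s(b), \quad b \rightarrow g_h \\
\textbf{Warning:} & a \rightarrow b, \quad b \rightarrow \lnot g_h \quad (b \text{ not under } s\text{'s control}) \\
\textbf{Tip:}     & a \rightarrow b, \quad b \rightarrow g_h \quad (b \text{ not under } s\text{'s control})
\end{array}
\]

On this schematic reading, the distinctions are whether the consequence b is under the speaker's control (threats, rewards) or independent of the speaker (warnings, tips), and whether it harms or serves the hearer's goals.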

  • ABSTRACT: In multi-agent systems (MAS), negotiation provides a powerful metaphor for automating the allocation and reallocation of resources. Methods for automated negotiation in MAS include auction-based protocols and alternating offer bargaining protocols. Recently, argumentation-based negotiation has been accepted as a promising alternative to such approaches. Interest-based negotiation (IBN) is a form of argumentation-based negotiation in which agents exchange (1) information about their underlying goals; and (2) alternative ways to achieve these goals. However, the usefulness of IBN has been mostly established in the literature by appeal to intuition or by use of specific examples. In this paper, we propose a new formal model for reasoning about interest-based negotiation protocols. We demonstrate the usefulness of this framework by defining and analysing two different IBN protocols. In particular, we characterise conditions that guarantee their advantage (in the sense of expanding the set of individually rational deals) over the more classic proposal-based approaches to negotiation.
    Annals of Mathematics and Artificial Intelligence 04/2009; 55(3):253-276. · 0.20 Impact Factor
  • ABSTRACT: Research on reasoning about consequential arguments has been an active but piecemeal enterprise. Previous research considered in depth some subclasses of consequential arguments, but further understanding of consequential arguments requires that we address their greater variety, avoiding the risk of over-generalisation from specific examples. Ideally we ought to be able to systematically generate the set of consequential arguments, and then engage in random sampling of stimuli within that set. The current article aims at making steps in that direction, using the theory of utility conditionals as a way to generate a large set of consequential arguments, and offering one study illustrating how the theory can be used for the random sampling of stimuli. It is expected that further use of this method will bring more diversity to experimental research on consequential arguments, and more robustness to models of argumentation from consequences.
    Thinking and Reasoning 01/2012; 18(3):379-393. · 1.12 Impact Factor
  • ABSTRACT: People can reason about the preferences of other agents, and predict their behavior based on these preferences. Surprisingly, the psychology of reasoning has long neglected this fact, and focused instead on disinterested inferences, of which preferences are neither an input nor an output. This exclusive focus is untenable, though, as there is mounting evidence that reasoners take into account the preferences of others, at the expense of logic when logic and preferences point to different conclusions. This article summarizes the most recent account of how reasoners predict the behavior and attitude of other agents based on conditional rules describing actions and their consequences, and reports new experimental data about which assumptions reasoners retract when their predictions based on preferences turn out to be false.
    Synthese 01/2012. · 0.70 Impact Factor