Article

‘Why Didn’t You Allocate This Task to Them?’ Negotiation-Aware Task Allocation and Contrastive Explanation Generation

Abstract

In this work, we design an Artificially Intelligent Task Allocator (AITA) that proposes a task allocation for a team of humans. A key property of this allocation is that when an agent with imperfect knowledge (about their teammates' costs and/or the team's performance metric) contests the allocation with a counterfactual, a contrastive explanation can always be provided to showcase why the proposed allocation is better than the proposed counterfactual. For this, we consider a negotiation process that produces a negotiation-aware task allocation and, when contested, leverages a negotiation tree to provide a contrastive explanation. With human subject studies, we show that the proposed allocation indeed appears fair to a majority of participants and, when it does not, the explanations generated are judged as convincing and easy to comprehend.
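To make the setting concrete, here is a minimal sketch (mine, not the authors' AITA implementation) of contrasting a proposed allocation with a user's counterfactual: the paper's negotiation-tree machinery is abstracted into a plain cost comparison, and all agent names, tasks, and per-task costs below are invented for illustration.

```python
"""Illustrative sketch only, not the authors' AITA implementation.

Assumptions (all hypothetical): each agent reports a per-task cost, the team
metric is total cost, and an allocation assigns each task to one agent.
"""
from itertools import permutations

# Hypothetical per-task costs: costs[agent][task]
costs = {
    "alice": {"t1": 2, "t2": 5, "t3": 4},
    "bob":   {"t1": 3, "t2": 1, "t3": 6},
    "carol": {"t1": 4, "t2": 4, "t3": 2},
}
tasks = ["t1", "t2", "t3"]
agents = list(costs)

def team_cost(allocation):
    """Total cost of an allocation {task: agent} under the assumed metric."""
    return sum(costs[agent][task] for task, agent in allocation.items())

def best_allocation():
    """Exhaustively pick the allocation minimising team cost (one task each)."""
    best = None
    for perm in permutations(agents, len(tasks)):
        alloc = dict(zip(tasks, perm))
        if best is None or team_cost(alloc) < team_cost(best):
            best = alloc
    return best

def contrastive_explanation(proposed, counterfactual):
    """Explain why the proposed allocation beats the user's counterfactual by
    exposing only the cost terms that differ (a 'minimal' refutation)."""
    lines = []
    for t in (t for t in tasks if proposed[t] != counterfactual[t]):
        lines.append(
            f"task {t}: {proposed[t]} costs {costs[proposed[t]][t]}, "
            f"whereas {counterfactual[t]} would cost {costs[counterfactual[t]][t]}"
        )
    lines.append(
        f"total: proposed={team_cost(proposed)} vs. your alternative={team_cost(counterfactual)}"
    )
    return "\n".join(lines)

proposed = best_allocation()
foil = {"t1": "alice", "t2": "carol", "t3": "bob"}   # user's counterfactual
print(contrastive_explanation(proposed, foil))
```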

... We are interested in providing explanations that aid operators in interpreting solutions generated by complex multi-robot systems that incorporate task allocation, scheduling, and motion planning into their decision making. Prior XAI work has addressed this challenge by introducing techniques for generating explanations for task allocation [36], [35], scheduling [24], [9], and motion planning [15] independently. However, recent work in the multi-robot community has shown that the close interdependency between these three subproblems (i.e., determining which robots should perform which tasks affects the timing/schedule of those tasks and, in turn, the motion plans required for their execution) is most effectively addressed by holistic solutions that consider all three challenges together [31], [27], [30]. ...
... A recent work in xMASE introduces CMAoE [36], a domain-independent approach for providing tabular contrastive explanations by utilizing features of an objective function for multi-agent systems. Another recent work introduces AITA [35], a negotiation-aware explicable task allocation framework which provides graphical contrastive explanations by revealing minimal information about the human agents' preferences to refute the end-user's foil. Notably, unlike prior work [36], [35], we do not assume the system's plan represents the optimal solution to the problem. ...
... Another recent work introduces AITA [35], a negotiation-aware explicable task allocation framework which provides graphical contrastive explanations by revealing minimal information about the human agents' preferences to refute the end-user's foil. Notably, unlike prior work [36], [35], we do not assume the system's plan represents the optimal solution to the problem. Instead, we consider the solution as optimal given the user-provided problem specification, and allow for the possibility that the problem specification may be impacted by human error. ...
Preprint
As the complexity of multi-robot systems grows to incorporate a greater number of robots, more complex tasks, and longer time horizons, the solutions to such problems often become too complex to be fully intelligible to human users. In this work, we introduce an approach for generating natural language explanations that justify the validity of the system's solution to the user, or else aid the user in correcting any errors that led to a suboptimal system solution. Toward this goal, we first contribute a generalizable formalism of contrastive explanations for multi-robot systems, and then introduce a holistic approach to generating contrastive explanations for multi-robot scenarios that selectively incorporates data from multi-robot task allocation, scheduling, and motion-planning to explain system behavior. Through user studies with human operators we demonstrate that our integrated contrastive explanation approach leads to significant improvements in user ability to identify and solve system errors, leading to significant improvements in overall multi-robot team performance.
... [Table rows comparing surveyed works by setting and explanation properties: Nardi et al. [46] (Voting), Nizri et al. [48] (Payoff Allocation), Peters et al. [54] (Voting), Peters et al. [55] (Voting), Pozanco et al. [56] (Scheduling), Suryanarayana et al. [65] (Voting), Zahedi et al. [68] (Task Allocation).] ... an agent's individual stakes can be effective in helping her appreciate the impact of the solution from a selfish perspective and thus convince her. For example, convincing a housemate in a rent division setting that the decision is envy-free amounts to a normatively characterized explanation [35], while comparing the maximin solution (the one that maximizes the minimum utility across players, thus resulting in the least disparity) against an arbitrary envy-free solution to demonstrate the lower disparity achieved by the former is an attributive explanation [19]. ...
... Zahedi et al. [68] compare the cost of a proposed allocation to the cost of the counterfactual allocation proposed by the participant. Ahani et al. [2] depict the change in employment score if the refugee allocation proposed by the algorithm needs to be changed. ...
... This is demonstrated by Cailloux and Endriss [13], who developed a formal framework for presenting arguments favoring a particular outcome. Zahedi et al. [68] present the case for a suggested task allocation by demonstrating how a negotiation based on the counterfactual task allocation proposed by the participant can lead to a higher cost. Mosca and Such [43] base their explanation on the argumentation scheme used to obtain the optimal solution. ...
Chapter
Full-text available
Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining choices made by black-box algorithms such as machine learning and deep learning has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents and often having no single choice that maximizes all individual utility functions. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. This discussion is followed by a thorough review of the challenges one may face when working on Explainable Mechanism Design, and we propose a few solution concepts to address those challenges. Keywords: Explainability, Mechanism design, Justification
... XRL approaches often seek to reconcile the inference capacity or the mental model (Klein and Hoffman, 2008) of a user. Inference reconciliation involves answering investigatory questions from users such as "Why not action a instead of a′?" (Madumal et al., 2020; Miller, 2021; Zahedi et al., 2024), or "Why is this plan optimal?" (Khan et al., 2009; Hayes and Shah, 2017). Other instance-based methods seek to provide the user with an explanation that elucidates the important features, or a reward decomposition, to enable a user to better understand or predict individual actions of a sequential decision-making agent (Topin and Veloso, 2019; Anderson et al., 2020; Das et al., 2023a). Model reconciliation approaches format explanations to adjust the human's mental model of the optimal plan to more accurately align it with the actual conceptual model of the agent (Chakraborti et al., 2017b; Sreedharan et al., 2019). ...
Article
Full-text available
Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy to dictate the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the “first impression” of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff wherein participants perceived language-based policy explanations to be significantly more usable; however, participants were better able to objectively predict the agent’s behavior when provided an explanation in the form of a decision tree. Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for future work by XAI developers on policy explanations.
... The need for such an explanation is especially acute whenever these systems employ complex algorithms that are generally incomprehensible to non-expert users, or whenever it is difficult for users to distinguish bad outcomes from good decisions (given the information available to the system). For example, when a system provides incorrect GPS directions [1], a task division that is not preferred by the user [2], or actions picked based on incorrect modelling assumptions leading to a catastrophe (as in the case of the mortgage crisis [3]). In all of these examples, convincing the stakeholders that the decision is justified needs to account for two factors, the complexity of the algorithm and the dissatisfaction of the stakeholders, which makes the process difficult. ...
Preprint
Full-text available
Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining choices made by black-box algorithms such as machine learning and deep learning has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents and often having no single choice that maximizes all individual utility functions. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. This discussion is followed by a thorough review of the challenges one may face when working on Explainable Mechanism Design, and we propose a few solution concepts to address those challenges.
... The need for such an explanation is especially acute whenever these systems employ complex algorithms that are generally incomprehensible to non-expert users, or whenever it is difficult for users to distinguish bad outcomes from good decisions (given the information available to the system). For example, when a system provides incorrect GPS directions [1], a task division that is not preferred by the user [2], or actions picked based on incorrect modelling assumptions leading to a catastrophe (as in the case of the mortgage crisis [3]). In all of these examples, convincing the stakeholders that the decision is justified needs to account for two factors, the complexity of the algorithm and the dissatisfaction of the stakeholders, which makes the process difficult. ...
Preprint
In many social-choice mechanisms the resulting choice is not the most preferred one for some of the participants, thus the need for methods to justify the choice made in a way that improves the acceptance and satisfaction of said participants. One natural method for providing such explanations is to ask people to provide them, e.g., through crowdsourcing, and choosing the most convincing arguments among those received. In this paper we propose the use of an alternative approach, one that automatically generates explanations based on desirable mechanism features found in theoretical mechanism design literature. We test the effectiveness of both of the methods through a series of extensive experiments conducted with over 600 participants in ranked voting, a classic social choice mechanism. The analysis of the results reveals that explanations indeed affect both average satisfaction from and acceptance of the outcome in such settings. In particular, explanations are shown to have a positive effect on satisfaction and acceptance when the outcome (the winning candidate in our case) is the least desirable choice for the participant. A comparative analysis reveals that the automatically generated explanations result in similar levels of satisfaction from and acceptance of an outcome as with the more costly alternative of crowdsourced explanations, hence eliminating the need to keep humans in the loop. Furthermore, the automatically generated explanations significantly reduce participants' belief that a different winner should have been elected compared to crowdsourced explanations.
Article
As the complexity of multi-robot systems grows to incorporate a greater number of robots, more complex tasks, and longer time horizons, the solutions to such problems often become too complex to be fully intelligible to human users. In this work, we introduce an approach for generating natural language explanations that justify the validity of the system's solution to the user, or else aid the user in correcting any errors that led to a suboptimal system solution. Toward this goal, we first contribute a generalizable formalism of contrastive explanations for multi-robot systems, and then introduce a holistic approach to generating contrastive explanations for multi-robot scenarios that selectively incorporates data from multi-robot task allocation, scheduling, and motion-planning to explain system behavior. Through user studies with human operators we demonstrate that our integrated contrastive explanation approach leads to significant improvements in user ability to identify and solve system errors, leading to significant improvements in overall multi-robot team performance.
Article
Accountability encompasses multiple aspects such as responsibility, justification, reporting, traceability, audit, and redress, so as to satisfy the diverse requirements of different stakeholders (consumers, regulators, developers, etc.). In order to take the needs of different stakeholders into account and thus put accountability in Artificial Intelligence into practice, the notion of empathy can be quite effective. Empathy is the ability to be sensitive to the needs of someone based on understanding their affective states and intentions, caring for their feelings, and socialization, which can help in addressing the socio-technical challenges associated with accountability. The goal of this paper is twofold. First, we elucidate the connections between empathy and accountability, drawing findings from various disciplines like psychology, social science, and organizational science. Second, we suggest potential pathways to incorporate empathy.
Article
Full-text available
This paper presents a taxonomy of explainability in human–agent systems. We consider fundamental questions about the Why, Who, What, When and How of explainability. First, we define explainability, and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in the system, whom it is geared to and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing as it will need to evaluate all other issues regarding explainability.
Conference Paper
Full-text available
Humans are increasingly relying on complex systems that heavily adopt Artificial Intelligence (AI) techniques. Such systems are employed in a growing number of domains, and making them explainable is an impelling priority. Recently, the domain of eXplainable Artificial Intelligence (XAI) emerged with the aim of fostering transparency and trustworthiness. Several reviews have been conducted. Nevertheless, most of them deal with data-driven XAI to overcome the opaqueness of black-box algorithms. Contributions addressing goal-driven XAI (e.g., explainable agency for robots and agents) are still missing. This paper aims at filling this gap by proposing a Systematic Literature Review. The main findings are (i) a considerable portion of the papers propose conceptual studies, lack evaluations, or tackle relatively simple scenarios; (ii) almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and very few works address inter-robot (inter-agent) explainability. Finally, (iii) while providing explanations to non-expert users has been outlined as a necessity, only a few works address the issues of personalization and context-awareness.
Article
Full-text available
In this paper, the authors consider some of the main ideas underpinning attempts to build automated negotiators: computer programs that can effectively negotiate on our behalf. If we want to build programs that will negotiate on our behalf in some domain, then we must first define the negotiation domain and the negotiation protocol. Defining the negotiation domain simply means identifying the space of possible agreements that could be acceptable in practice. The negotiation protocol then defines the rules under which negotiation will proceed, including a rule that determines when agreement has been reached, and what will happen if the participants fail to reach agreement. One important insight is that we can view negotiation as a game, in the sense of game theory: for any given negotiation domain and protocol, negotiating agents have available to them a range of different negotiation strategies, which will result in different outcomes, and hence different benefits to them. An agent will desire to choose a negotiation strategy that will yield the best outcome for itself, but must take into account that other agents will be trying to do the same.
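To make the 'negotiation as a game' view concrete, the toy sketch below fixes a domain (splits of a unit surplus), a protocol (alternating offers with a deadline and discounting), and two invented strategies; everything here is an illustrative assumption rather than anything prescribed by the article.

```python
"""A toy alternating-offers protocol over splitting a unit surplus. The
strategies, acceptance rule, and deadline below are illustrative assumptions."""

def negotiate(strategy_a, strategy_b, rounds=10, discount=0.9):
    """Alternate proposals; the responder accepts if the offered share meets
    its discounted aspiration. Returns (agreed shares, round) or None."""
    proposer, responder = ("A", strategy_a), ("B", strategy_b)
    for r in range(rounds):
        name, strat = proposer
        offer = strat["propose"](r)              # share offered to the responder
        keep = 1.0 - offer
        resp_name, resp_strat = responder
        if offer >= resp_strat["accept_threshold"] * (discount ** r):
            return {name: keep, resp_name: offer}, r
        proposer, responder = responder, proposer   # roles swap each round
    return None  # disagreement: the protocol's failure outcome

# Two hypothetical strategies: concede linearly vs. hold firm, then concede.
a = {"propose": lambda r: 0.3 + 0.05 * r, "accept_threshold": 0.45}
b = {"propose": lambda r: 0.25 if r < 3 else 0.4, "accept_threshold": 0.5}
print(negotiate(a, b))   # different strategy pairs yield different outcomes
```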
Conference Paper
Full-text available
Many negotiations in the real world are characterized by incomplete information, and participants' success depends on their ability to reveal information in a way that facilitates agreement without compromising the individual gains of agents. This paper presents a novel agent design for repeated negotiation in incomplete information settings that learns to reveal information strategically during the negotiation process. The agent uses classical machine learning techniques to predict how people make and respond to offers during the negotiation, how they reveal information, and how they respond to potential revelation actions by the agent. The agent was evaluated in an extensive empirical study spanning hundreds of human subjects. Results show that the agent was able (1) to make offers that were beneficial to people while not compromising its own benefit, and (2) to incrementally reveal information to people in a way that increased its expected performance. The agent also had a positive effect on people's strategy, in that people playing against the agent performed significantly better than people playing against other people. This work demonstrates the efficacy of combining machine learning with opponent-modeling techniques in the design of computer agents for negotiating with people in settings of incomplete information.
Article
Full-text available
In an R&D department, several projects may have to be implemented simultaneously within a certain period of time by a limited number of human resources with diverse skills. This paper proposes an optimization model for the allocation of multi-skilled human resources to R&D projects, considering individual workers as entities having different knowledge, experience and ability. The model focuses on three fundamental aspects of human resources: the different skill levels, the learning process, and the social relationships existing in working teams. The resolution approach for the multi-objective problem consists of two steps: first, a set of non-dominated solutions is obtained by exploring the Pareto-optimal frontier; subsequently, based on further information, the ELECTRE III method is used to select the best compromise with regard to the considered objectives. The uncertainty associated with each solution is modelled by fuzzy numbers and used in establishing the threshold values of ELECTRE III, while the weights of the objectives are determined taking into account the influence that each objective has on the others.
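As a small illustration of the first step described above, the sketch below filters a candidate set of staffing plans down to the non-dominated (Pareto-optimal) ones; the objective names and scores are invented, and this is not the paper's model.

```python
"""Generic Pareto filtering over made-up allocation candidates.
Both objectives are 'higher is better' in this sketch."""

def dominates(a, b):
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (skill match, team cohesion) scores for staff-to-project plans.
plans = [(0.9, 0.2), (0.6, 0.7), (0.5, 0.5), (0.4, 0.9), (0.3, 0.3)]
print(pareto_front(plans))   # keeps (0.9, 0.2), (0.6, 0.7), (0.4, 0.9)
```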
Article
Full-text available
A multiagent system may be thought of as an artificial society of autonomous software agents, and we can apply concepts borrowed from welfare economics and social choice theory to assess the social welfare of such an agent society. In this paper, we study an abstract negotiation framework where agents can agree on multilateral deals to exchange bundles of discrete resources. We then analyse how these deals affect social welfare for different instances of the basic framework and different interpretations of the concept of social welfare itself. In particular, we show how certain classes of deals are both sufficient and necessary to guarantee that a socially optimal allocation of resources will be reached eventually.
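The following sketch illustrates how different interpretations of social welfare can rank the same multilateral deal differently; the utility numbers are invented, and the welfare functions shown are the standard textbook ones rather than necessarily the exact instances studied in the paper.

```python
"""Compare a deal under three common notions of social welfare (invented data)."""
from math import prod

def utilitarian(utilities):      # sum of individual utilities
    return sum(utilities)

def egalitarian(utilities):      # welfare of the worst-off agent
    return min(utilities)

def nash_product(utilities):     # product; balances efficiency and equality
    return prod(utilities)

alloc_before = [6, 1, 5]   # hypothetical utilities under the current allocation
alloc_after  = [4, 4, 3]   # hypothetical utilities after a proposed deal

for name, w in [("utilitarian", utilitarian),
                ("egalitarian", egalitarian),
                ("Nash product", nash_product)]:
    verdict = "improves" if w(alloc_after) > w(alloc_before) else "does not improve"
    print(f"{name}: {w(alloc_before)} -> {w(alloc_after)} ({verdict})")
```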
Article
Full-text available
This study investigated the impact of four types of explanations following a tourism service failure. Written scenarios were used to orthogonally manipulate explanation type and failure magnitude. Both independent variables had significant effects on customer satisfaction and justice perceptions. Apologies yielded more favorable outcomes than did referential accounts. Specific forms of justice mediated the effects of three explanation types. This research links different explanation types to different forms of justice, thereby shedding light not only on what types of explanations assist most with service recovery, but also on how they have their effects.
Article
Full-text available
We investigate the properties of an abstract negotiation framework where agents autonomously negotiate over allocations of indivisible resources. In this framework, reaching an allocation that is optimal may require very complex multilateral deals. Therefore, we are interested in identifying classes of valuation functions such that any negotiation conducted by means of deals involving only a single resource at a time is bound to converge to an optimal allocation whenever all agents model their preferences using these functions. In the case of negotiation with monetary side payments amongst self-interested but myopic agents, the class of modular valuation functions turns out to be such a class. That is, modularity is a sufficient condition for convergence in this framework. We also show that modularity is not a necessary condition. Indeed, there can be no condition on individual valuation functions that would be both necessary and sufficient in this sense. Evaluating conditions formulated with respect to the whole profile of valuation functions used by the agents in the system would be possible in theory, but turns out to be computationally intractable in practice. Our main result shows that the class of modular functions is maximal in the sense that no strictly larger class of valuation functions would still guarantee an optimal outcome of negotiation, even when we permit more general bilateral deals. We also establish similar results in the context of negotiation without side payments.
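To illustrate the modular (additive) valuation case, the sketch below repeatedly applies single-resource deals with side payments until utilitarian welfare can no longer improve, ending with every item held by the agent who values it most; the item values, agent names, and payment rule are invented for illustration.

```python
"""With modular valuations, any welfare-improving single-item transfer can be
made individually rational by a side payment that splits the surplus."""

# values[agent][item]: modular valuation = sum of values of the items held
values = {
    "a1": {"r1": 5, "r2": 1, "r3": 2},
    "a2": {"r1": 3, "r2": 4, "r3": 1},
    "a3": {"r1": 2, "r2": 2, "r3": 6},
}
holder = {"r1": "a2", "r2": "a1", "r3": "a2"}   # initial allocation

def improving_deal():
    """Return a single-item transfer with a positive welfare gain, if any."""
    for item, cur in holder.items():
        for agent in values:
            gain = values[agent][item] - values[cur][item]
            if gain > 0:
                return item, cur, agent, gain
    return None

while (deal := improving_deal()) is not None:
    item, frm, to, gain = deal
    pay = values[frm][item] + gain / 2   # compensates the giver, splits the surplus
    print(f"move {item}: {frm} -> {to}, side payment {pay} (welfare gain {gain})")
    holder[item] = to

print("final allocation:", holder)   # each item ends with its highest-value agent
```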
Article
Full-text available
The allocation of resources within a system of autonomous agents, that not only have preferences over alternative allocations of resources but also actively participate in computing an allocation, is an exciting area of research at the interface of Computer Science and Economics. This paper is a survey of some of the most salient issues in Multiagent Resource Allocation. In particular, we review various languages to represent the preferences of agents over alternative allocations of resources as well as different measures of social welfare to assess the overall quality of an allocation. We also discuss pertinent issues regarding allocation procedures and present important complexity results. Our presentation of theoretical issues is complemented by a discussion of software packages for the simulation of agent-based market places. We also introduce four major application areas for Multiagent Resource Allocation, namely industrial procurement, sharing of satellite resources, manufacturing control, and grid computing.
Article
Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions. Self-explaining models, where interpretability plays a key role already during learning, have received much less attention. We propose three desiderata for explanations in general (explicitness, faithfulness, and stability) and show that existing methods do not satisfy them. In response, we design self-explaining models in stages, progressively generalizing linear classifiers to complex yet architecturally explicit models. Faithfulness and stability are enforced via regularization specifically tailored to such models. Experimental results across various benchmark datasets show that our framework offers a promising direction for reconciling model complexity and interpretability.
Article
With the widespread use of AI, understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the latter's acceptance of the system. However, providing overwhelming or unnecessary information may also confuse the users and cause failure. For these reasons, parsimony has been outlined as one of the key features allowing successful human-agent interaction, with a parsimonious explanation defined as the simplest explanation (i.e. least complex) that describes the situation adequately (i.e. descriptive adequacy). While parsimony is receiving growing attention in the literature, most of the works are carried out on the conceptual front. This paper proposes a mechanism for parsimonious eXplainable AI (XAI). In particular, it introduces the process of explanation formulation and proposes HAExA, a human-agent architecture that makes this process operational for remote robots. To provide parsimonious explanations, HAExA relies on both contrastive explanations and explanation filtering. To evaluate the proposed architecture, several research hypotheses are investigated in an empirical human-user study that relies on well-established XAI metrics to estimate how trustworthy and satisfactory the explanations provided by HAExA are. The results are analyzed using parametric and non-parametric statistical testing.
Book
Cutting a cake, dividing up the property in an estate, determining the borders in an international dispute - such problems of fair division are ubiquitous. Fair Division treats all these problems and many more through a rigorous analysis of a variety of procedures for allocating goods (or 'bads' like chores), or deciding who wins on what issues, when there are disputes. Starting with an analysis of the well-known cake-cutting procedure, 'I cut, you choose', the authors show how it has been adapted in a number of fields and then analyze fair-division procedures applicable to situations in which there are more than two parties, or there is more than one good to be divided. In particular they focus on procedures which provide 'envy-free' allocations, in which everybody thinks he or she has received the largest portion and hence does not envy anybody else. They also discuss the fairness of different auction and election procedures.
Article
Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction: Explainable decisions in Multi-Agent Environments (xMASE). We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction from AI systems' decisions in multi-agent environments.
Conference Paper
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
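For intuition, here is a hand-rolled, LIME-style local surrogate (illustrative only, not the released LIME library): perturb the instance, weight perturbations by proximity, and read a local explanation off the coefficients of a weighted linear fit. The black-box function, the zero baseline, and the kernel width are all assumptions made for this sketch.

```python
"""Minimal local-surrogate explanation in the spirit of LIME (illustrative)."""
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for any opaque classifier: returns P(class = 1)."""
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 3.0 * X[:, 2] + 0.5)))

def explain_locally(x, n_samples=500, kernel_width=0.75):
    """Fit a weighted linear surrogate around x; its coefficients approximate
    each feature's local influence on the black box."""
    d = x.shape[0]
    mask = rng.integers(0, 2, size=(n_samples, d))   # which features stay "on"
    Z = mask * x                                     # perturbed inputs (0 baseline)
    preds = black_box(Z)
    # Proximity kernel computed on the interpretable (mask) representation.
    dist = np.sqrt(((mask - 1) ** 2).sum(axis=1))
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares via row scaling, with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), mask]) * np.sqrt(w)[:, None]
    b = preds * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]                                  # per-feature local weights

x = np.array([1.0, 0.3, 1.0, 0.0])
print(explain_locally(x))   # feature 0 pushes the prediction up, feature 2 down
```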
Article
This paper deals with a basic issue: How does one approach the problem of designing the “right” objective for a given resource allocation problem? The notion of what is right can be fairly nebulous; we consider two issues that we see as key: efficiency and fairness. We approach the problem of designing objectives that account for the natural tension between efficiency and fairness in the context of a framework that captures a number of resource allocation problems of interest to managers. More precisely, we consider a rich family of objectives that have been well studied in the literature for their fairness properties. We deal with the problem of selecting the appropriate objective from this family. We characterize the trade-off achieved between efficiency and fairness as one selects different objectives and develop several concrete managerial prescriptions for the selection problem based on this trade-off. Finally, we demonstrate the value of our framework in a case study that considers air traffic management.
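One standard way to parameterize such a family of fairness-aware objectives is alpha-fair welfare, sketched below with invented utility vectors to show how the preferred allocation shifts as alpha grows; this is a common family from the fairness literature, not necessarily the exact one analyzed in the paper.

```python
"""Alpha-fair welfare: a parametric efficiency-fairness trade-off (invented data)."""
import numpy as np

def alpha_fair_welfare(utilities, alpha):
    """alpha = 0: utilitarian (pure efficiency); alpha -> 1: proportional
    fairness (log utilities); large alpha approaches max-min fairness."""
    u = np.asarray(utilities, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.log(u).sum()
    return ((u ** (1.0 - alpha)) / (1.0 - alpha)).sum()

candidates = {
    "efficient-but-lopsided": [10.0, 1.0, 1.0],
    "balanced":               [4.0, 4.0, 3.0],
}
for alpha in (0.0, 1.0, 4.0):
    best = max(candidates, key=lambda k: alpha_fair_welfare(candidates[k], alpha))
    print(f"alpha={alpha}: prefers {best}")   # shifts from lopsided to balanced
```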
Article
The Nursing Personnel Scheduling Problem is defined as the identification of that staffing pattern which (1) specifies the number of nursing personnel of each skill class to be scheduled among the wards and nursing shifts of a scheduling period, (2) satisfies total nursing personnel capacity, integral assignment, and other relevant constraints, and (3) minimizes a "shortage cost" of nursing care services provided for the scheduling period. The problem is posed as a mixed-integer quadratic programming problem, which is decomposed by a primal resource-directive approach into a multiple-choice programming master problem, with quadratic programming sub-problems. Initial results suggest that a linear programming formulation, with a post-optimal feasibility search scheme, may be substituted for the multiple-choice master problem. The model is tested on six wards of a 600-bed general hospital, and results are presented.
Conference Paper
A number of recent papers have focused on probabilistic robustness analysis and design of control systems subject to bounded uncertainty. In this work, we continue this line of research and show how to generate samples uniformly distributed in lp balls in real and complex vector spaces.
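As a concrete special case, the sketch below uses the classical recipe for the l2 ball (a Gaussian direction combined with a volume-matched radius); the general lp construction treated in the paper is more involved and is not reproduced here.

```python
"""Uniform sampling in an l2 ball (the classical special case, for intuition)."""
import numpy as np

def sample_l2_ball(n_points, dim, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_points, dim))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform on the sphere
    radii = radius * rng.random(n_points) ** (1.0 / dim)        # radius ~ volume-uniform
    return directions * radii[:, None]

pts = sample_l2_ball(5, 3)
print(np.linalg.norm(pts, axis=1))   # all norms are <= 1
```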
Article
We study a multilateral procedure in which responders are told only their own shares. The proposal becomes common knowledge after the response stage and responders have optimistic beliefs after off-equilibrium offers. When discounting is high, the set of equilibrium agreements is a singleton; when it is low, there is a large multiplicity of equilibrium payoffs. In contrast to earlier work, our multiple equilibria are constructed by using strategy profiles in which a responder rejects any offer that reduces his or her own share. Journal of Economic Literature Classification Numbers: C72 and C78.
Article
especially techniques from combinatorial optimization and mathematical programming. Finally, computer science is concerned with the expressiveness of various bidding languages, and the algorithmic aspects of the combinatorial problem. The study of combinatorial auctions thus lies at the intersection of economics, operations research, and computer science. In this book, we look at combinatorial auctions from all three perspectives. Indeed, our contribution is to do so in an integrated and comprehensive way. The initial challenge in interdisciplinary research is defining a common language. We have made an effort to use terms consistently throughout the book, with the most common terms defined in the glossary.
Agrawal, J.; Yelamanchili, A.; and Chien, S. 2020. Using explainable scheduling for the Mars 2020 rover mission. arXiv preprint arXiv:2011.08733.
Aler Tubella, A.; Theodorou, A.; Dignum, V.; and Michael, L. 2020. Contestable black boxes. In International Joint Conference on Rules and Reasoning, 159-167. Springer.
Bertolucci, R.; Dodaro, C.; Galatà, G.; Maratea, M.; Porro, I.; and Ricca, F. 2021. Explaining ASP-based Operating Room Schedules. In IPS-RCRA@AI*IA.
Chakraborti, T.; Sreedharan, S.; and Kambhampati, S. 2020. The Emerging Landscape of Explainable AI Planning and Decision Making. arXiv preprint arXiv:2002.11697.
Cyras, K.; Letsios, D.; Misener, R.; and Toni, F. 2019. Argumentation for explainable scheduling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 2752-2759.
Hunsberger, L.; and Grosz, B. J. 2000. A combinatorial auction for collaborative planning. In Proceedings of the Fourth International Conference on MultiAgent Systems, 151-158. IEEE.
Kluttz, D. N.; Kohli, N.; and Mulligan, D. K. 2022. Shaping our tools: Contestability as a means to promote responsible algorithmic decision making in the professions. In Ethics of Data and Analytics, 420-428. Auerbach Publications.
Kulkarni, A.; Zha, Y.; Chakraborti, T.; Vadlamudi, S. G.; Zhang, Y.; and Kambhampati, S. 2019. Explicable planning as minimizing distance from expected behavior. In AAMAS.
Lyons, H.; Velloso, E.; and Miller, T. 2021. Conceptualising contestability: Perspectives on contesting algorithmic decisions. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1): 1-25.
Madumal, P.; Miller, T.; Sonenberg, L.; and Vetere, F. 2020. Explainable reinforcement learning through a causal lens. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 2493-2500.
Olmo, A.; Sengupta, S.; and Kambhampati, S. 2020. Not all failure modes are created equal: Training deep neural networks for explicable (mis)classification. arXiv preprint arXiv:2006.14841.
Saha, S.; and Sen, S. 2007. An Efficient Protocol for Negotiation over Multiple Indivisible Resources. In IJCAI, volume 7, 1494-1499.
Sengupta, S.; Chakraborti, T.; and Kambhampati, S. 2018. MA-RADAR: A mixed-reality interface for collaborative decision making. ICAPS UISP.
Sreedharan, S.; Chakraborti, T.; Muise, C.; and Kambhampati, S. 2020. Expectation-aware planning: A unifying framework for synthesizing and executing self-explaining plans for human-aware planning. AAAI.
Sreedharan, S.; Srivastava, S.; and Kambhampati, S. 2018. Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations. In IJCAI, 4829-4836.
van der Waa, J.; Nieuwburg, E.; Cremers, A.; and Neerincx, M. 2021. Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291: 103404.