Chapter

Allocation of Moral Decision-Making in Human-Agent Teams: A Pattern Approach


Abstract

Artificially intelligent agents will deal with more morally sensitive situations as the field of AI progresses. Research efforts are being made to regulate, design, and build Artificial Moral Agents (AMAs) capable of making moral decisions. This research is highly multidisciplinary, with each discipline bringing its own jargon and vision, and it is so far unclear whether a fully autonomous AMA can be achieved. To specify currently available solutions and to structure an accessible discussion around them, we propose to apply Team Design Patterns (TDPs). The TDP language describes (visually, textually, and formally) a dynamic allocation of tasks for moral decision-making in a human-agent team context. A task decomposition of moral decision-making and the required AMA capabilities is proposed to help define such TDPs. Four TDPs are given as examples to illustrate the versatility of the approach, and two problem scenarios (surgical robots and drone surveillance) are used to illustrate these patterns. Finally, we discuss in detail the advantages and disadvantages of a TDP approach to moral decision-making.
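
To make the idea of a task decomposition with an explicit human/agent allocation concrete, here is a minimal sketch that encodes four illustrative patterns as plain data. The subtask names, pattern names, and allocations below are invented for illustration and are not the decomposition or the four TDPs defined in the chapter.

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative subtasks of moral decision-making; the chapter's own
# task decomposition may differ from this list.
SUBTASKS = ("recognize_moral_situation", "gather_information",
            "weigh_values", "decide", "act", "explain")

@dataclass
class TeamDesignPattern:
    """Minimal encoding of a TDP: each subtask is allocated to the
    'human', the 'agent', or 'shared' between them."""
    name: str
    allocation: Dict[str, str]

def make_pattern(name, default, overrides=None):
    alloc = {task: default for task in SUBTASKS}
    alloc.update(overrides or {})
    return TeamDesignPattern(name, alloc)

# Four example patterns, loosely following the human / supported /
# co-active / autonomous distinction used in the surrounding literature.
PATTERNS = [
    make_pattern("human_decision_making", "human"),
    make_pattern("supported_decision_making", "human",
                 {"gather_information": "agent", "explain": "agent"}),
    make_pattern("co_active_decision_making", "shared"),
    make_pattern("autonomous_decision_making", "agent"),
]

if __name__ == "__main__":
    for p in PATTERNS:
        print(f"{p.name}: " + ", ".join(f"{t}={who}" for t, who in p.allocation.items()))
```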

... Madni and Madni (2018) provide a framework that distinguishes the roles of humans and machines; frequent roles are the human as supervisor and the machine in either an active or a passive monitoring role. Van der Waa et al. (2020) focus on moral decisions and distinguish between human moral decision-making, supported moral decision-making, co-active moral decision-making, and autonomous moral decision-making, in which the artificial moral agent makes moral decisions on its own. These conceptual papers give examples of the different configurations and discuss the advantages and disadvantages of the different patterns, but they do not examine which factors determine the preference for lower or higher algorithmic input. ...
... Work on human-machine teaming has, to our knowledge, not systematically compared scenarios varying in morality, but it assumes that it will take some time until artificial moral agents reach human or even super-human levels of moral decision-making; consequentially, human-machine teaming is needed (van der Waa et al., 2020). This work, thus, also implicitly assumes that people prefer less algorithmic involvement in moral decisions. ...
... Our results contribute to work on human-algorithm teaming by providing a measure that assesses the preference for certain combinations of human-algorithm teaming. Different patterns have been described before (Madni & Madni, 2018; van der Waa et al., 2020), and it has been shown that people favor hybrid decision-making over purely algorithmic decision-making (Starke & Lünich, 2020), but less was known about preferences for the different forms of human-algorithm teaming. We showed that, overall, participants preferred receiving algorithmic advice in all decision situations over receiving it only in difficult decision cases. ...
Article
Full-text available
In times of the COVID-19 pandemic, difficult decisions such as the distribution of ventilators must be made. For many of these decisions, humans could team up with algorithms; however, people often prefer human decision-makers. We examined the role of situational (morality of the scenario; perspective) and individual factors (need for leadership; conventionalism) for algorithm preference in a preregistered online experiment with German adults (n = 1,127). As expected, algorithm preference was lowest in the most moral-laden scenario. The effect of perspective (i.e., decision-makers vs. decision targets) was only significant in the most moral scenario. Need for leadership predicted a stronger algorithm preference, whereas conventionalism was related to weaker algorithm preference. Exploratory analyses revealed that attitudes and knowledge also mattered, stressing the importance of individual factors.
... We have extracted a list of interaction patterns from observed human-robot team behavior. The idea of describing human-robot or human-agent team behavior with patterns has been explored before, for example in van Diggelen and Johnson (2019) and van der Waa et al. (2020), under the name 'Team Design Patterns.' In existing research, it is described how these patterns can be useful for designers of human-robot teams, as well as for the actual team members to recognize what activities they are engaged in. ...
... These existing pattern languages are generally created in a top-down approach. While it is noted that Team Design Patterns can emerge from interactions between the human(s) and agent(s) in the team, the pattern languages described in van Diggelen and Johnson (2019) and van der Waa et al. (2020) are still designed by the authors of the respective papers, although the design process is not described in detail. We deliberately use a different name to describe the patterns in our pattern language (interaction patterns instead of team design patterns), because our interaction patterns have not been designed. ...
Article
Full-text available
As robots become more ubiquitous, they will increasingly need to behave as our team partners and smoothly adapt to the (adaptive) human team behaviors to establish successful patterns of collaboration over time. A substantial number of adaptations present themselves through subtle and unconscious interactions, which are difficult to observe. Our research aims to bring about awareness of co-adaptation that enables team learning. This paper presents an experimental paradigm that uses a physical human-robot collaborative task environment to explore emergent human-robot co-adaptations and derive the interaction patterns (i.e., the targeted awareness of co-adaptation). The paradigm provides a tangible human-robot interaction (i.e., a leash) that facilitates the expression of unconscious adaptations, such as "leading" (e.g., pulling the leash) and "following" (e.g., letting go of the leash) in a search-and-navigation task. The task was executed by 18 participants, after which we systematically annotated videos of their behavior. We discovered that their interactions could be described by four types of adaptive interactions: stable situations, sudden adaptations, gradual adaptations and active negotiations. From these types of interactions we have created a language of interaction patterns that can be used to describe tacit co-adaptation in human-robot collaborative contexts. This language can be used to enable communication between collaborating humans and robots in future studies, to let them share what they learned and support them in becoming aware of their implicit adaptations.
... Approaches such as the nature-of-activities [78,79], under the umbrella of Value Sensitive Design [40], can support the understanding of which set of tasks should be (partially or totally) delegated or shared with AI agents, and which should be left exclusively to humans. Given the collaborative nature of many human-AI systems, team design patterns can be used as an intuitive graphical language for describing and communicating to the team the design choices that influence how humans and AI agents collaborate [80,81]. ...
Article
Full-text available
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits - but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss making use of two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals to take concrete steps toward designing and engineering for AI systems that facilitate meaningful human control.
... Approaches such as the nature-of-activities [78,79], under the umbrella of Value Sensitive Design [40], can support the understanding of which set of tasks should be (partially or totally) delegated or shared with AI agents, and which should be left exclusively to humans. Given the collaborative nature of many human-AI systems, team design patterns can be used as an intuitive graphical language for describing and communicating to the team the design choices that influence how humans and AI agents collaborate [80,81]. ...
Preprint
Full-text available
The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans (e.g., users, designers and developers, manufacturers, legislators). However, the relevant discussions around meaningful human control have so far not resulted in clear requirements for researchers, designers, and engineers. As a result, there is no consensus on how to assess whether a designed AI system is under meaningful human control, making the practical development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying four actionable properties which AI-based systems must have to be under meaningful human control. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and actions of humans who are aware of their moral responsibility. We argue these four properties are necessary for AI systems under meaningful human control, and provide possible directions to incorporate them into practice. We illustrate these properties with two use cases, automated vehicle and AI-based hiring. We believe these four properties will support practically-minded professionals to take concrete steps toward designing and engineering for AI systems that facilitate meaningful human control and responsibility.
Article
Full-text available
Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks based on that. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e., situations in which a group is responsible, but individuals’ responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible in prospect (e.g., for completing a task in the future) and who can be seen as responsible retrospectively (e.g., for a failure that has already occurred). To that end, in this work, we show that across all stages of the design, development, and deployment of Trustworthy Autonomous Systems (TAS), responsibility reasoning should play a key role. This position paper is the first step towards establishing a road-map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
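
One way to make "quantifying degrees of responsibility" concrete is the structural-model notion (due to Chockler and Halpern) in which an agent's degree of responsibility for an outcome is 1/(k+1), where k is the minimal number of other changes needed to make that agent pivotal. The toy voting scenario below is our own illustration of that notion, not an algorithm taken from the position paper.

```python
def degree_of_responsibility(votes_for: int, threshold: int) -> float:
    """Degree of responsibility of a single 'for' voter for the motion passing,
    in the structural-model sense: 1/(k+1), where k is the minimal number of
    other 'for' votes that must flip before this voter becomes pivotal."""
    if votes_for < threshold:
        return 0.0             # the motion did not pass; nothing to answer for
    k = votes_for - threshold  # surplus votes that shield any single voter
    return 1.0 / (k + 1)

if __name__ == "__main__":
    # 11 voters, motion needs 6 votes to pass (scenario invented for illustration).
    for votes_for in (6, 8, 11):
        r = degree_of_responsibility(votes_for, threshold=6)
        print(f"{votes_for} votes in favour -> each supporter's responsibility = {r:.2f}")
```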
Article
Full-text available
With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents do not have a legal position, humans should be held accountable if actions do not comply, implying humans need to exercise control. This is often labeled as Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem, defining the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent’s part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. The first goal was to assess the ecological relevance of the simulation; the second was to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; the third was to evaluate the role of agent explanations in the human’s understanding of the agent’s reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team’s moral compliance when consequences were quickly noticeable. When instead the consequences emerged much later, the experts experienced less control and felt less responsible. Possibly due to the experienced time pressure implemented in the task or overtrust in the agent, the experts did not use explanations much during the task; when asked afterwards, however, they considered these to be useful. It is concluded that a team design should emphasize and support the human to develop a sense of responsibility for the agent’s behavior and for the team’s decisions. The design should include explanations that fit with the assigned team roles as well as the human cognitive state.
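
A minimal sketch of how varying agent autonomy and explanations could be wired into a triage loop, assuming three hypothetical designs (human-led, supported, autonomous). The decision rules, severity threshold, and override logic are invented; the actual team design patterns, task model, and interface used in the study are not reproduced here.

```python
import random

def agent_propose(patient):
    """Toy agent policy plus an explanation of its reasoning."""
    action = "treat_now" if patient["severity"] > 0.7 else "stabilize_and_wait"
    explanation = f"severity={patient['severity']:.2f} with scarce resources"
    return action, explanation

def human_decide(patient):
    """Stand-in for the domain expert's own judgement."""
    return "treat_now" if patient["severity"] > 0.5 else "stabilize_and_wait"

def human_accepts(proposal, explanation, patient):
    # Toy rule: the expert overrides when the agent defers a fairly severe case.
    return not (proposal == "stabilize_and_wait" and patient["severity"] > 0.6)

def run_triage(design, patients):
    for p in patients:
        proposal, why = agent_propose(p)
        if design == "autonomous":          # agent decides; explanation on request
            decision = proposal
        elif design == "supported":         # agent proposes, human approves or overrides
            decision = proposal if human_accepts(proposal, why, p) else human_decide(p)
        else:                               # "human_led": agent only flags the case
            decision = human_decide(p)
        print(f"{design:>10} | patient {p['id']} -> {decision} ({why})")

if __name__ == "__main__":
    random.seed(3)
    patients = [{"id": i, "severity": random.random()} for i in range(4)]
    for design in ("human_led", "supported", "autonomous"):
        run_triage(design, patients)
```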
Article
Full-text available
Social and humanoid robots hardly ever show up “in the wild” to provide pervasive and enduring human benefits such as child health. This paper presents a socio-cognitive engineering (SCE) methodology that guides the ongoing research & development for an evolving, longer-lasting human-robot partnership in practice. The SCE methodology has been applied in a large European project to develop a robotic partner that supports the daily diabetes management processes of children, aged between 7 and 14 years (i.e., Personal Assistant for a healthy Lifestyle, PAL). Four partnership functions were identified and worked out (joint objectives, agreements, experience sharing, and feedback & explanation) together with a common knowledge base and interaction design for the child's prolonged disease self-management. In an iterative refinement process of three cycles, these functions, knowledge base and interactions were built, integrated, tested, refined, and extended so that the PAL robot could increasingly act as an effective partner for diabetes management. The SCE methodology helped to integrate into the human-agent/robot system: (a) theories, models, and methods from different scientific disciplines, (b) technologies from different fields, (c) varying diabetes management practices, and (d) last but not least, the diverse individual and context-dependent needs of the patients and caregivers. The resulting robotic partner proved to support the children on the three basic needs of Self-Determination Theory: autonomy, competence, and relatedness. This paper presents the R&D methodology and the human-robot partnership framework for prolonged “blended” care of children with a chronic disease (children could use it for up to 6 months: the robot in hospitals and diabetes camps, and its avatar at home). It represents a new type of human-agent/robot system with an evolving collective intelligence. The underlying ontology and design rationale can be used as a foundation for further developments of long-duration human-robot partnerships “in the wild.”
Conference Paper
Full-text available
This paper introduces the concept of team design patterns and proposes an intuitive graphical language for describing the design choices that influence how intelligent systems (e.g. artificial intelligence, robotics, etc.) collaborate with humans. We build on the notion of design patterns and characterize important dimensions within human-agent teamwork. These dimensions are represented using a simple, intuitive graphical iconic language. The simplicity of the language allows easier expression, sharing and comparison of human-agent teaming concepts. Having such a language has the potential to improve the collaborative interaction among a variety of stakeholders such as end users, project managers, policy makers and programmers that may not be human-agent teamwork experts themselves. We also introduce an ontology and specification formalization that will allow translation of the simple iconic language into more precise definitions. By expressing the essential elements of teaming patterns in precisely defined abstract team design patterns, we provide a foundation that will enable working towards a library of reusable, proven solutions for human-agent teamwork.
Article
Full-text available
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour.
Article
Full-text available
This article focuses on ethical issues raised by increasing levels of autonomy for surgical robots. These ethical issues are explored mainly by reference to state-of-the-art case studies and imminent advances in Minimally Invasive Surgery (MIS) and Microsurgery. In both areas, the surgical workspace is limited and the required precision is high. For this reason, increasing levels of robotic autonomy can make a significant difference, and ethically justified control sharing between humans and robots must be introduced. In particular, from a responsibility and accountability perspective, suitable policies for the Meaningful Human Control (MHC) of increasingly autonomous surgical robots are proposed. It is highlighted how MHC should be modulated in accordance with various levels of autonomy for MIS and Microsurgery robots. Moreover, finer MHC distinctions are introduced to deal with contextual conditions concerning, e.g., soft or rigid anatomical environments.
Article
Full-text available
Human-agent teams exhibit emergent behavior at the team level, as a result of interactions between individuals within the team. This raises the question of how to design artificial team members (agents) as adequate team players that contribute to the team processes advancing team performance, resilience and learning. This paper proposes the development of a library of Team Design Patterns as a way to make dynamic team behavior at the team and individual level more explicit. Team Design Patterns serve a dual purpose: (1) In the system development phase, designers can identify desirable team patterns for the creation of artificial team members. (2) During the operational phase, team design patterns can be used by artificial team members to drive and stimulate team development, and to adaptively mitigate problems that may arise. We describe a pattern language for specifying team design patterns, discuss their use, and illustrate the concept using representative human-agent teamwork applications.
Article
Full-text available
We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which intelligent autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of this behavior. To provide assistance in discovering ethical principles, we have developed GenEth, a general ethical dilemma analyzer that, through a dialog with ethicists, uses inductive logic programming to codify ethical principles in any given domain. GenEth has been used to codify principles in a number of domains pertinent to the behavior of autonomous systems and these principles have been verified using an Ethical Turing Test, a test devised to compare the judgments of codified principles with that of ethicists.
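
GenEth codifies principles from ethicists' case judgments using inductive logic programming over ethically relevant features. The sketch below is a heavily simplified stand-in for that idea, not GenEth's algorithm: each case records the difference in duty satisfaction between the ethicist-preferred action and the alternative, and a perceptron-style rule is fitted so the preferred action always scores higher. The duty names and numbers are invented.

```python
# Each training case compares two actions by their effect on a set of duties;
# 'diff' is duty_satisfaction(preferred action) - duty_satisfaction(other action),
# using small integer degrees. All values below are invented for illustration.
DUTIES = ["prevent_harm", "respect_autonomy"]
CASES = [
    {"diff": {"prevent_harm": 2, "respect_autonomy": -1}},
    {"diff": {"prevent_harm": 1, "respect_autonomy": 0}},
    {"diff": {"prevent_harm": -1, "respect_autonomy": 2}},
]

def fit_principle(cases, epochs=100, lr=0.1):
    """Find duty weights such that the preferred action scores higher in every
    training case (a crude stand-in for inductive logic programming)."""
    w = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        updated = False
        for case in cases:
            score = sum(w[d] * case["diff"].get(d, 0) for d in DUTIES)
            if score <= 0:  # principle disagrees with the ethicists' judgment
                for d in DUTIES:
                    w[d] += lr * case["diff"].get(d, 0)
                updated = True
        if not updated:
            break
    return w

def prefer(w, diff):
    """Apply the learned principle to a new dilemma."""
    return sum(w[d] * diff.get(d, 0) for d in DUTIES) > 0

if __name__ == "__main__":
    w = fit_principle(CASES)
    print("learned duty weights:", w)
    print("new case preferred?", prefer(w, {"prevent_harm": 2, "respect_autonomy": -1}))
```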
Article
Full-text available
Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in ethics of robotics and AI, in the last part of the paper, we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.
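
The "tracing" condition lends itself to a simple mechanical check: every outcome of the system should be traceable to at least one human in the design-and-operation chain who is aware of their moral responsibility. The provenance records, field names, and outcomes below are hypothetical and only sketch such a check; they do not implement the paper's philosophical account.

```python
from typing import Dict, List

# Hypothetical provenance log for an autonomous system's outcomes: each outcome
# lists the contributors (design-time and operation-time) standing behind it.
provenance: Dict[str, List[dict]] = {
    "emergency_brake_applied": [
        {"role": "safety_engineer", "human": True, "aware_of_responsibility": True},
        {"role": "planner_module", "human": False},
    ],
    "lane_change_executed": [
        {"role": "planner_module", "human": False},
    ],
}

def satisfies_tracing(chain: List[dict]) -> bool:
    """Simplified tracing check: at least one human who is aware of their
    moral responsibility appears in the chain behind the outcome."""
    return any(c.get("human") and c.get("aware_of_responsibility") for c in chain)

if __name__ == "__main__":
    for outcome, chain in provenance.items():
        status = "traceable" if satisfies_tracing(chain) else "RESPONSIBILITY GAP"
        print(f"{outcome}: {status}")
```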
Article
Full-text available
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMA). Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk) coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would require answers to a host of pending questions about what counts as an AMA and whether such machines can be morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
Article
Full-text available
Shared control is an increasingly popular approach to facilitate control and communication between humans and intelligent machines. However, there is little consensus in guidelines for design and evaluation of shared control, or even in a definition of what constitutes shared control. This lack of consensus complicates cross fertilization of shared control research between different application domains. This paper provides a definition for shared control in context with previous definitions, and a set of general axioms for design and evaluation of shared control solutions. The utility of the definition and axioms are demonstrated by applying them to four application domains: automotive, robot-assisted surgery, brain–machine interfaces, and learning. Literature is discussed for each of these four domains in light of the proposed definition and axioms. Finally, to facilitate design choices for other applications, we propose a hierarchical framework for shared control that links the shared control literature with traded control, co-operative control, and other human–automation interaction methods. Future work should reveal the generalizability and utility of the proposed shared control framework in designing useful, safe, and comfortable interaction between humans and intelligent machines.
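
Shared control is often instantiated as input blending, i.e., mixing human and automation commands with an authority weight. The snippet below shows that textbook scheme only; it is not the definition or the axioms proposed in the article, and the steering values are invented.

```python
def blend(human_cmd: float, machine_cmd: float, authority: float) -> float:
    """Linear input blending, one of the simplest shared-control schemes:
    'authority' in [0, 1] is the share of control given to the machine."""
    if not 0.0 <= authority <= 1.0:
        raise ValueError("authority must be in [0, 1]")
    return (1.0 - authority) * human_cmd + authority * machine_cmd

if __name__ == "__main__":
    # Steering example: the human drifts toward the lane edge, assistance corrects.
    human_steer, machine_steer = 0.10, -0.30   # toy values (radians)
    for authority in (0.0, 0.5, 0.9):
        steer = blend(human_steer, machine_steer, authority)
        print(f"authority={authority:.1f} -> blended steering command {steer:+.2f}")
```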
Article
Full-text available
We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
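
A rough sketch of the two-stage idea: learn individual preference models offline, then aggregate them when a dilemma arises at runtime. For simplicity the aggregation here is a plain Borda count, whereas the article itself develops swap-dominance efficient voting rules; the feature names and the randomly generated "voters" are invented.

```python
import random

random.seed(1)

# Stage 1 (stand-in): each "voter" is summarized by a learned weight vector
# over outcome features.
FEATURES = ["passengers_saved", "pedestrians_saved"]
voters = [{f: random.uniform(0, 1) for f in FEATURES} for _ in range(1000)]

def utility(weights, outcome):
    return sum(weights[f] * outcome[f] for f in FEATURES)

def aggregate(voters, alternatives):
    """Stage 2: at decision time, aggregate individual rankings with a Borda
    count (illustrative rule only)."""
    scores = {name: 0 for name in alternatives}
    m = len(alternatives)
    for w in voters:
        ranked = sorted(alternatives, key=lambda a: utility(w, alternatives[a]), reverse=True)
        for rank, name in enumerate(ranked):
            scores[name] += (m - 1) - rank
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    dilemma = {
        "swerve":   {"passengers_saved": 0, "pedestrians_saved": 3},
        "straight": {"passengers_saved": 1, "pedestrians_saved": 0},
    }
    winner, scores = aggregate(voters, dilemma)
    print("Borda scores:", scores, "-> chosen:", winner)
```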
Conference Paper
Full-text available
Integrating cognitive agents and robots into teams that operate in high-demand situations involves mutual and context-dependent behaviors of the human and agent/robot team-members. We propose a cognitive engineering method that includes the development of Interaction Design patterns for such systems as re-usable, theoretically and empirically founded, design solutions. This paper presents an overview of the background, the method and three example patterns.
Conference Paper
Full-text available
We contend that all behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such principles ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior. To provide assistance in developing ethical principles, in particular those pertinent to the behavior of autonomous systems, we have developed GenEth, a general ethical dilemma analyzer that, through a dialog with ethicists, codifies ethical principles in any given domain. (Software freely downloadable at http://uhaweb.hartford.edu/anderson/Site/GenEth.html)
Article
Full-text available
Machine ethics has a broad range of possible implementations in computer technology--from maintaining detailed records in hospital databases to overseeing emergency team movements after a disaster. From a machine ethics perspective, you can look at machines as ethical-impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A current research challenge is to develop machines that are explicit ethical agents. This research is important, but accomplishing this goal will be extremely difficult without a better understanding of ethics and of machine learning and cognition. This article is part of a special issue on Machine Ethics.
Article
Full-text available
The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics—which has traditionally focused on ethical issues surrounding humans’ use of machines—machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
Article
Full-text available
The implementation of moral decision-making abilities in AI is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at specified goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
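
The contrast between the two architectures can be shown with two toy stand-ins: a hand-written constraint derived from an explicit theory (top-down) versus a policy induced from labeled example judgments (bottom-up, here a trivial nearest-neighbour rule). The action features, labels, and rules are invented for illustration and do not come from the article.

```python
# Top-down: an explicit, hand-written constraint derived from an ethical theory.
def top_down_permissible(action):
    # Toy deontological rule: never act in a way that knowingly harms a person.
    return not action["harms_person"]

# Bottom-up: a policy induced from examples of judged behavior. A trivial
# nearest-neighbour "learner" stands in for whatever method a real system uses.
LABELLED_EXAMPLES = [
    ({"harms_person": False, "breaks_promise": False}, "permissible"),
    ({"harms_person": False, "breaks_promise": True},  "impermissible"),
    ({"harms_person": True,  "breaks_promise": False}, "impermissible"),
]

def bottom_up_permissible(action):
    def distance(a, b):
        return sum(a[k] != b[k] for k in a)
    _, label = min(LABELLED_EXAMPLES, key=lambda ex: distance(action, ex[0]))
    return label == "permissible"

if __name__ == "__main__":
    candidate = {"harms_person": False, "breaks_promise": True}
    print("top-down :", top_down_permissible(candidate))
    print("bottom-up:", bottom_up_permissible(candidate))
```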
Article
Full-text available
Dynamic task allocation is an essential requirement for multi-robot systems operating in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of the collective behavior. Specifically, we analyze the effect that the number of observations and the choice of the decision function have on the performance of the system. The mathematical models are validated in a multi-robot multi-foraging scenario. The model's predictions agree very closely with experimental results from sensor-based simulations.
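
A toy simulation of the mechanism described: robots pick between two task types using only repeated local observations and a threshold decision function, with no communication. The parameters, observation model, and decision rule below are our own simplifications, so the dynamics only roughly approach the target split rather than reproducing the paper's analysis.

```python
import random

random.seed(42)

N_ROBOTS, TARGET_RED = 100, 0.3     # desired fraction of robots on the "red" task
OBS_WINDOW, STEPS, P_RECONSIDER = 10, 400, 0.1

robots = [{"task": random.choice(["red", "green"]), "obs": []} for _ in range(N_ROBOTS)]

def step(robots):
    for r in robots:
        # Local sensing only: observe the task of one randomly encountered robot.
        r["obs"].append(random.choice(robots)["task"])
        r["obs"] = r["obs"][-OBS_WINDOW:]            # keep a sliding window
        if random.random() > P_RECONSIDER:
            continue                                  # reconsider only occasionally
        est_red = r["obs"].count("red") / len(r["obs"])
        # Threshold decision function: move toward the under-served task type.
        if est_red > TARGET_RED:
            r["task"] = "green"
        elif est_red < TARGET_RED:
            r["task"] = "red"

for _ in range(STEPS):
    step(robots)

frac_red = sum(r["task"] == "red" for r in robots) / N_ROBOTS
print(f"fraction on red task after {STEPS} steps: {frac_red:.2f} (target {TARGET_RED})")
```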
Article
The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? In this paper we discuss possible ways to avoid this conclusion.
Book
In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout the book the author discusses related work, conscious of both classical, philosophical treatments of ethical issues and the implications in modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens.
Article
The purpose of this article is to draw attention to an aspect of intelligence that has not yet received significant attention from the AI community, but that plays a crucial role in a technology’s effectiveness in the world, namely teaming intelligence. We propose that AI will reach its full potential only if, as part of its intelligence, it also has enough teaming intelligence to work well with people. Although seemingly counterintuitive, the more intelligent the technological system, the greater the need for collaborative skills. This paper will argue why teaming intelligence is important to AI, provide a general structure for AI researchers to use in developing intelligent systems that team well, assess the current state of the art and, in doing so, suggest a path forward for future AI systems. This is not a call to develop a new capability, but rather, an approach to what AI capabilities should be built, and how, so as to imbue intelligent systems with teaming competence.
Article
Background This paper aims to move the debate forward regarding the potential of AI and autonomous robotic surgery with a particular focus on ethical and legal aspects. Methods We conducted a literature search on current and emerging surgical robot technologies, relevant standards and legal systems worldwide. We provide a discussion of unique challenges for robotic surgery faced by proposals made for AI more generally (e.g. Explainable AI) as well as recommendations for developing/improving relevant standards or legal and regulatory frameworks. Conclusion We distinguished three types of robot responsibility by classifying responsibility into: I. Accountability; II. Liability and III. Culpability. The component which produces the least clarity is Culpability, since it is unthinkable in the current state of technology. We envision in the nearer future that, as with autonomous driving, a robot can learn routine tasks which can then be supervised by the surgeon (a doctor-in-the-loop) being in the driving seat. This article is available open access, and you may download your open copy here: https://onlinelibrary.wiley.com/doi/full/10.1002/rcs.1968
Chapter
The prospect of drone use inside the United States raises far-reaching issues concerning the extent of government surveillance authority, the value of privacy in the digital age, and the role of Congress in reconciling these issues. Drones, or unmanned aerial vehicles (UAVs), are aircraft that can fly without an onboard human operator. An unmanned aircraft system (UAS) is the entire system, including the aircraft, digital network, and personnel on the ground. Drones can fly either by remote control or on a predetermined flight path; can be as small as an insect and as large as a traditional jet; can be produced more cheaply than traditional aircraft; and can keep operators out of harm's way. These unmanned aircraft are most commonly known for their operations overseas in tracking down and killing suspected members of Al Qaeda and related organizations. In addition to these missions abroad, drones are being considered for use in domestic surveillance operations to protect the homeland, assist in crime fighting, disaster relief, immigration control, and environmental monitoring. Although relatively few drones are currently flown over U.S. soil, the Federal Aviation Administration (FAA) predicts that 30,000 drones will fill the nation's skies in less than 20 years. Congress has played a large role in this expansion. In February 2012, Congress enacted the FAA Modernization and Reform Act (P.L. 112-95), which calls for the FAA to accelerate the integration of unmanned aircraft into the national airspace system by 2015. However, some Members of Congress and the public fear there are insufficient safeguards in place to ensure that drones are not used to spy on American citizens and unduly infringe upon their fundamental privacy. These observers caution that the FAA is primarily charged with ensuring air traffic safety, and is not adequately prepared to handle the issues of privacy and civil liberties raised by drone use. This report assesses the use of drones under the Fourth Amendment right to be free from unreasonable searches and seizures. The touchstone of the Fourth Amendment is reasonableness. A reviewing court's determination of the reasonableness of a drone search would likely be informed by location of the search, the sophistication of the technology used, and society's conception of privacy in an age of rapid technological advancement. While individuals can expect substantial protections against warrantless government intrusions into their homes, the Fourth Amendment offers less robust restrictions upon government surveillance occurring in public places including areas immediately outside the home, such as in driveways or backyards. Concomitantly, as technology advances, the contours of what is reasonable under the Fourth Amendment may adjust as people's expectations of privacy evolve. In the 113th Congress, several measures have been introduced that would restrict the use of drones at home. Several of the bills would require law enforcement to obtain a warrant before using drones for domestic surveillance, subject to several exceptions. Others would establish a regime under which the drone user must file a data collection statement stating when, where, how the drone will be used and how the user will minimize the collection of information protected by the legislation.
Article
Autonomous agents optimize the reward function we give them. What they don't know is how hard it is for us to design a reward function that actually captures what we want. When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios. Inevitably, agents encounter new scenarios (e.g., new types of terrain) where optimizing that same reward may lead to undesired behavior. Our insight is that reward functions are merely observations about what the designer actually wants, and that they should be interpreted in the context in which they were designed. We introduce inverse reward design (IRD) as the problem of inferring the true objective based on the designed reward and the training MDP. We introduce approximate methods for solving IRD problems, and use their solution to plan risk-averse behavior in test MDPs. Empirical results suggest that this approach can help alleviate negative side effects of misspecified reward functions and mitigate reward hacking.
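
Below is a heavily simplified numeric sketch of the inverse reward design idea: treat the designed ("proxy") reward as evidence about the true reward in the training environment, form a posterior over candidate true reward weights, and then handle a new environment risk-aversely by maximizing the worst-case value over rewards the posterior still supports. The lava/dirt features, candidate weights, and the posterior approximation are invented for illustration and are not the authors' algorithm.

```python
import math

# Feature counts (dirt_cleaned, lava_crossed) for each available trajectory.
# The training environment contains no lava, so the designer's proxy reward
# never had to say anything about it.
TRAIN_TRAJS = {"clean_a_lot": (3, 0), "clean_a_little": (1, 0)}
TEST_TRAJS  = {"shortcut_over_lava": (3, 2), "safe_detour": (2, 0)}

PROXY_W = (1.0, 0.0)                                   # designed reward: dirt only
CANDIDATE_TRUE_W = [(1.0, w_lava) for w_lava in (-10.0, -1.0, 0.0)]
BETA = 1.0

def ret(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

# Behaviour the proxy reward induces in the training environment.
proxy_best = max(TRAIN_TRAJS.values(), key=lambda f: ret(PROXY_W, f))

def posterior(candidates):
    """Crude IRD-style posterior: a candidate true reward is plausible to the
    extent that proxy-optimal training behaviour is also near-optimal for it."""
    weights = [math.exp(BETA * (ret(w, proxy_best)
                                - max(ret(w, f) for f in TRAIN_TRAJS.values())))
               for w in candidates]
    z = sum(weights)
    return [x / z for x in weights]

def risk_averse_choice(trajs, candidates, post, min_prob=1e-3):
    """Pick the test trajectory whose worst-case return over still-plausible
    true rewards is highest."""
    plausible = [w for w, p in zip(candidates, post) if p > min_prob]
    return max(trajs, key=lambda name: min(ret(w, trajs[name]) for w in plausible))

if __name__ == "__main__":
    post = posterior(CANDIDATE_TRUE_W)
    for w, p in zip(CANDIDATE_TRUE_W, post):
        print(f"true-reward candidate {w}: posterior {p:.2f}")
    print("risk-averse choice in the test MDP:",
          risk_averse_choice(TEST_TRAJS, CANDIDATE_TRUE_W, post))
```

Because training provides no evidence about the lava weight, the posterior stays spread over all candidates and the risk-averse planner avoids the lava shortcut, which is the motivating behaviour the paper describes.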
Conference Paper
The aim of this article is to provide a common, easy to use nomenclature to describe highly automated human-machine systems in the realm of vehicle guidance and foster the identification of established design patterns for human-autonomy teaming. With this effort, we intend to facilitate the discussion and exchange of approaches to the integration of humans with cognitive agents amongst researchers and system designers. By use of this nomenclature, we identify most important top-level design patterns, such as delegation and associate systems, as well as hybrid structures of humans working with cognitive agents.
Article
The prospect of drone use inside the United States raises far-reaching issues concerning the extent of government surveillance authority, the value of privacy in the digital age, and the role of Congress in reconciling these issues. Drones, or unmanned aerial vehicles (UAVs), are aircraft that can fly without an onboard human operator. An unmanned aircraft system (UAS) is the entire system, including the aircraft, digital network, and personnel on the ground. Drones can fly either by remote control or on a predetermined flight path; can be as small as an insect and as large as a traditional jet; can be produced more cheaply than traditional aircraft; and can keep operators out of harm's way. These unmanned aircraft are most commonly known for their operations overseas in tracking down and killing suspected members of Al Qaeda and related organizations. In addition to these missions abroad, drones are being considered for use in domestic surveillance operations, which might include in furtherance of homeland security, crime fighting, disaster relief, immigration control, and environmental monitoring. Although relatively few drones are currently flown over U.S. soil, the Federal Aviation Administration (FAA) predicts that 30,000 drones will fill the nation's skies in less than 20 years. Congress has played a large role in this expansion. In February 2012, Congress enacted the FAA Modernization and Reform Act (P.L. 112-95), which calls for the FAA to accelerate the integration of unmanned aircraft into the national airspace system by 2015. However, some Members of Congress and the public fear there are insufficient safeguards in place to ensure that drones are not used to spy on American citizens and unduly infringe upon their fundamental privacy. These observers caution that the FAA is primarily charged with ensuring air traffic safety, and is not adequately prepared to handle the issues of privacy and civil liberties raised by drone use. This chapter assesses the use of drones under the Fourth Amendment right to be free from unreasonable searches and seizures. The touchstone of the Fourth Amendment is reasonableness. A reviewing court's determination of the reasonableness of drone surveillance would likely be informed by location of the search, the sophistication of the technology used, and society's conception of privacy in an age of rapid technological advancement. While individuals can expect substantial protections against warrantless government intrusions into their homes, the Fourth Amendment offers less robust restrictions upon government surveillance occurring in public places and perhaps even less in areas immediately outside the home, such as in driveways or backyards. Concomitantly, as technology advances, the contours of what is reasonable under the Fourth Amendment may adjust as people's expectations of privacy evolve. In the 112th Congress, several measures have been introduced that would restrict the use of drones at home. Senator Rand Paul and Representative Austin Scott introduced the Preserving Freedom from Unwarranted Surveillance Act of 2012 (S. 3287, H.R. 5925), which would require law enforcement to obtain a warrant before using drones for domestic surveillance, subject to several exceptions. Similarly, Representative Ted Poe's Preserving American Privacy Act of 2012 (H.R. 6199) would permit law enforcement to conduct drone surveillance pursuant to a warrant, but only in investigation of a felony.
Article
This paper aims to explore the potential of the discrete choice analysis approach as a toolbox and research paradigm for the study of moral decision making. This aim is motivated by the observation that while the study of moral choice behaviour has received much attention in Economics and Psychology, the explicit consideration of the moral dimension of decisions is rare in the Choice modelling field. I first review a number of classical theories and results concerning the nature of moral decision making, and how it is shaped by social processes. Based on this review, I discuss in what ways the discrete choice modelling approach can be used to gain new insights into moral decision making, and how ideas from the moral decision making literature may be used to enhance the behavioural realism of choice models. I will argue that these research endeavours hold the potential to further increase the appeal and applicability of discrete choice models in the broader social sciences.
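
A minimal sketch of the basic building block of discrete choice analysis, a multinomial logit model, with a moral attribute included among the utility terms. The alternatives, attributes, and taste coefficients below are invented; in a real study the coefficients would be estimated from observed or stated choices.

```python
import math

# Attributes of two route alternatives for a surveillance drone (toy example):
# minutes of flight time and expected privacy intrusion on a 0-1 scale.
ALTERNATIVES = {
    "over_backyards": {"time_min": 12.0, "privacy_intrusion": 0.8},
    "around_town":    {"time_min": 18.0, "privacy_intrusion": 0.1},
}

# Invented taste coefficients (negative: both time and intrusion are disliked).
BETA = {"time_min": -0.15, "privacy_intrusion": -2.5}

def systematic_utility(attrs):
    return sum(BETA[k] * v for k, v in attrs.items())

def logit_probabilities(alternatives):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    expv = {name: math.exp(systematic_utility(a)) for name, a in alternatives.items()}
    z = sum(expv.values())
    return {name: v / z for name, v in expv.items()}

if __name__ == "__main__":
    for name, p in logit_probabilities(ALTERNATIVES).items():
        print(f"{name}: choice probability {p:.2f}")
```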
Article
Dynamic task allocation is an essential requirement for multi-robot systems operating in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of tasks, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of the collective behavior. Specifically, we analyze the effect that the number of observations and the choice of the decision function have on the performance of the system. The mathematical models are validated in a multi-robot multi-foraging scenario. The model's predictions agree very closely with results of embodied simulations.
Article
This article presents a model of ethical reasoning. The article reviews lapses in ethical reasoning and the great costs they have had for society. It presents an eight-step model of ethical reasoning that can be applied to ethical challenges and illustrates its application. It proposes that ethical reasoning can be taught across the curriculum. It further points to a source of frustration in the teaching and application of ethics: ethical drift. Finally it draws conclusions. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Surveillance technologies have burgeoned during the last several decades. To surveillance's promises and threats, drones add a new dimension, both figuratively and literally. An assessment of the impacts of drones on behavioural privacy identifies a set of specific threats that are created or exacerbated. Natural controls, organisational and industry self-regulation, co-regulation and formal laws are reviewed, both general and specific to various forms of surveillance. Serious shortfalls in the regulatory framework are identified. Remedies are suggested, together with means whereby they may come into being.
Article
Ethics codes and guidelines date back to the origins of medicine in virtually all civilizations. Developed by the medical practitioners of each era and culture, oaths, prayers, and codes bound new physicians to the profession through agreement with the principles of conduct toward patients, colleagues, and society. Although less famous than the Hippocratic oath, the medical fraternities of ancient India, seventh-century China, and early Hebrew society each had medical oaths or codes that medical apprentices swore to on professional initiation. The Hippocratic oath, which graduating medical students swear to at more than 60% of US medical schools, is perhaps the most enduring medical oath of Western civilization. Other oaths commonly sworn to by new physicians include the Declaration of Geneva (a secular, updated form of the Hippocratic oath formulated by the World Medical Association, Ferney-Voltaire, France) and the Prayer of Moses Maimonides, developed by the 18th-century Jewish physician Marcus Herz.
Ethically aligned design: a vision for prioritizing human well-being with autonomous and intelligent systems
IEEE Global Initiative. Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, 2018.
Value alignment or misalignment - what will keep systems accountable?
Thomas Arnold, Daniel Kasenberg, and Matthias Scheutz. Value alignment or misalignment - what will keep systems accountable? In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
Grounding value alignment with ethical principles
Tae Wan Kim, Thomas Donaldson, and John Hooker. Grounding value alignment with ethical principles. arXiv preprint arXiv:1907.05447, 2019.
Intelligent autonomous vehicles with an extendable knowledge base and meaningful human control
Guus Beckers, Joris Sijs, Jurriaan van Diggelen, Roelof JE van Dijk, Henri Bouma, Mathijs Lomme, Rutger Hommes, Fieke Hillerstrom, Jasper van der Waa, Anna van Velsen, et al. Intelligent autonomous vehicles with an extendable knowledge base and meaningful human control. In Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III, volume 11166, page 111660C. International Society for Optics and Photonics, 2019.
Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery
Shane O'Sullivan, Nathalie Nevejans, Colin Allen, Andrew Blyth, Simon Leonard, Ugo Pagallo, Katharina Holzinger, Andreas Holzinger, Mohammed Imran Sajid, and Hutan Ashrafian. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1):e1968, 2019.