Governing Lethal Behavior in Autonomous Robots
Abstract
Expounding on the results of the author's work with the US Army Research Office, DARPA, the Office of Naval Research, and various defense industry contractors, Governing Lethal Behavior in Autonomous Robots explores how to produce an "artificial conscience" in a new class of robots, humane-oids, which are robots that can potentially perform more ethically than humans on the battlefield. The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. The book presents robot architectural design recommendations for post facto suppression of unethical behavior; behavioral design that incorporates ethical constraints from the outset; the use of affective functions as an adaptive component in the event of unethical action; and a mechanism that identifies and advises operators regarding their ultimate responsibility for the deployment of autonomous systems. It also examines why soldiers fail ethically in battle; discusses the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems; provides examples that illustrate autonomous systems' ethical use of force; and includes relevant Laws of War. Helping ensure that warfare is conducted justly with the advent of autonomous robots, this book shows that the first steps toward creating robots that not only conform to international law but outperform human soldiers in their ethical capacity are within reach. It supplies the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system capable of ethically using lethal force. Ron Arkin was quoted in a November 2010 New York Times article about robots in the military.
... One key debate within this literature deals with the question of whether international humanitarian law (IHL) in its current form, as the most relevant body of law concerning warfare, sufficiently covers the challenges of LAWS [5,6]. To this end, this article first documents the approaches of relevant organisations and nations towards military AI and highlights how they interrelate with and refer to IHL. ...
... Many of the robustness measures in AI safety work well only when the adversary does not know they are in use, and they fare extremely poorly once analysed. Since current innovation in AI robustness mostly takes place in public research, and the turnover of attacks and defences is very short, it is difficult to create cyber defences for LAWS that could not be revealed by open source intelligence [69]. Even if LAWS were equipped with completely classified robustness measures, military espionage and congeniality (the likelihood that an adversary independently finds a similar solution) would remain a significant threat. ...
... Some research in xAI claims that this trade-off is a myth, but what is mythical is the universalised version of that claim, namely that all xAI is subject to the trade-off. The latter claim is not required for our argument. ...
We explore existing political commitments by states regarding the development and use of lethal autonomous weapon systems. We carry out two background reviews, the first addressing ethical and legal framings and proposals from recent academic literature, the second addressing recent formal policy principles as endorsed by states, with a focus on the principles adopted by the United States Department of Defense and the North Atlantic Treaty Organization. We then develop two conceptual case studies. The first addresses the interrelated principles of explainability and traceability, leading to proposals for acceptable scope limitations to these principles. The second considers the topic of deception in warfare and how it may be viewed in the context of ethical principles for lethal autonomous weapon systems.
... The article fills a critical gap in discussing complex socio-technical interactions between AI and warfare. In doing so, it provides a valuable counterpoint to the argument that AI 'rational' efficiency can simultaneously offer a viable solution to humans' psychological and biological fallibility in combat while retaining "meaningful human control" over the war machine (Arkin 2009; Hagerott 2014). The article also argues that framing the narrative in terms of "killer robots," and similar tropes, misconstrues both the nature of AI-enabled warfare and its ability to replicate and thus replace human moral judgment and decision-making. ...
... Ronald Arkin argues that AI systems must appreciate the ethical significance of competing courses of action and use ethical and moral principles in a context-appropriate manner to address some of these vexing questions. Arkin's hypothetical solution is an algorithm that can obviate messy human emotion and irrational impulses, identify situations where there is a significant risk of unethical behavior, and respond, either by restraining the system directly or by alerting human operators who would intervene to resolve ethical dilemmas, a so-called "ethical governor" (Arkin 2009). For Arkin's ethical governor concept to be workable, however, the ethics uploaded to an AI system must appreciate the nuances of competing courses of action and execute moral codes (not killing unnecessarily, avoiding collateral damage, not harming non-combatants, adhering to the Geneva and Hague Conventions, etc.) with high fidelity in a context-appropriate manner. ...
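To make the governor concept more concrete, the sketch below is a minimal illustration of the general pattern, not Arkin's actual implementation: the action and constraint types, field names, and the toy constraint are all hypothetical. It checks a proposed lethal action against encoded constraints and either lets it proceed or suppresses it and notifies a human operator.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical types for illustration only.
@dataclass
class Action:
    description: str
    lethal: bool
    estimated_collateral: float  # expected non-combatant harm, arbitrary units

@dataclass
class Constraint:
    name: str                              # e.g. a Law of War or ROE clause
    violated_by: Callable[[Action], bool]  # predicate over proposed actions

def ethical_governor(action: Action,
                     constraints: List[Constraint],
                     alert_operator: Callable[[str], None]) -> bool:
    """Return True if the proposed action may proceed, False if it is suppressed."""
    if not action.lethal:
        return True  # only lethal responses are gated in this sketch
    for c in constraints:
        if c.violated_by(action):
            alert_operator(f"Suppressed '{action.description}': violates {c.name}")
            return False
    return True

# Toy usage: a constraint forbidding any expected non-combatant harm.
no_collateral = Constraint("no expected non-combatant harm",
                           lambda a: a.estimated_collateral > 0.0)
permitted = ethical_governor(
    Action("engage armored vehicle", lethal=True, estimated_collateral=0.2),
    [no_collateral],
    alert_operator=print,
)
print(permitted)  # False: the action is withheld and the operator is alerted
```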
... Critics stress the improbability of programming such context-specific and value-laden consequentialist reasoning principles (e.g. not to target civilian populations, what constitutes a legitimate combatant target, and the level of civilian casualties deemed acceptable) into algorithms (Hagerott 2014;Arkin 2009). At the very least, it remains an open empirical question whether machines can be trusted to implement the moral judgments of humans safely and reliably in warfare. ...
Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI "rational" efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining "meaningful" human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights into human-machine interactions to elucidate how AI shapes our capacity to think about future warfare's political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become a de facto strategic actor in warfare: the "AI commander problem."
... A 36-paragraph science feature article, TAGMF was published in Scientific American, a 'popular science' magazine aimed at an educated readership. According to the publication, its articles are written by "journalists, scientists, scholars, [and] policy makers", and it "strive[s] to publish stories that use rigorous science and clear thinking to cut through hype, Pollyannaism, and doomsaying." In other words, one would expect this article to be a clearly written, accurate, ...
... In a "bottom-up" approach [43] to building an ethical agent, its actions are governed by encoded ethical principles, whereas in a "top-down" approach, ethical principles are derived by learning from previous or hypothetical cases. For example, Arkin [4] has implemented autonomous agents for military applications by encoding military principles such as the Laws of War and Rules of Engagement, which are based on Just War Theory [41], as constraints on the agent's proposed actions. In the domain of healthcare, Anderson et al. [2] have implemented explicit ethical agents such as EthEl, a medicationreminding robot. ...
We present an analysis of ethical argumentation and rhetorical elements in an article on the debate about growing genetically modified food (GMF), an issue of current interest in environmental ethics. Ethical argumentation is argumentation that a certain action is permissible, forbidden, or obligatory in terms of ethical intuitions, principles, or theories. Based on analysis of argumentation in the article, we propose several argumentation schemes for descriptive modeling of utilitarian arguments as an alternative to using more general schemes such as practical reasoning and argument from consequences. We also show how the article promoted its pro-GMF stance using rhetorical elements such as quotation, argument from expert opinion, and ad hominem attacks. Pedagogical and computational implications of the analysis of argumentation and rhetoric are discussed.
... For example, if the power management agent must cut power to a neighborhood, how should it decide which neighborhood to cut? Some of the first ethical decision-making proof-of-concept systems were focused on military applications where there was potential for lethal use of force [11][12][13]. This application provides a context of rules for the moral conduct of warfare, studied extensively by ethicists and moral philosophers, which influence the way many scholars view ethical decision making today. ...
... Typically, race, gender, and other sensitive attributes not related to the decision may be omitted. While this does not guarantee ethical behavior, omitting these attributes reduces the severity of some common problems. We should emphasize the tremendous volume of work on extending MDPs to other informatic settings. For example, state factors may not be directly observable [117], such as when a pedestrian becomes temporarily occluded from the view of an autonomous vehicle. ...
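As a minimal sketch of the attribute-omission idea (the feature names, values, and scoring rule below are hypothetical, not taken from the surveyed work), a decision policy can be restricted to a whitelist of decision-relevant state features so that sensitive attributes never reach it. As the excerpt notes, this alone does not guarantee ethical behavior, since omitted attributes can still be correlated with the remaining features.

```python
from typing import Dict

# Hypothetical state for a power-management decision about one zone.
state = {
    "load_kw": 560.0,
    "outage_risk": 0.7,
    "hospital_in_zone": 1.0,
    "median_income": 41000.0,   # sensitive / decision-irrelevant
    "majority_ethnicity": 2.0,  # sensitive / decision-irrelevant
}

DECISION_RELEVANT = {"load_kw", "outage_risk", "hospital_in_zone"}

def visible_state(full_state: Dict[str, float]) -> Dict[str, float]:
    """Project the state onto the whitelisted, decision-relevant features."""
    return {k: v for k, v in full_state.items() if k in DECISION_RELEVANT}

def keep_priority(s: Dict[str, float]) -> float:
    # Toy scoring rule: higher score means the zone is more important to keep
    # powered; the policy would cut the lowest-scoring zone.
    return s["load_kw"] * 0.001 + s["outage_risk"] + 10.0 * s["hospital_in_zone"]

print(keep_priority(visible_state(state)))
```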
As automated decision making and decision assistance systems become common in everyday life, research on the prevention or mitigation of potential harms that arise from decisions made by these systems has proliferated. However, various research communities have independently conceptualized these harms, envisioned potential applications, and proposed interventions. The result is a somewhat fractured landscape of literature focused generally on ensuring decision-making algorithms "do the right thing". In this paper, we compare and discuss work across two major subsets of this literature: algorithmic fairness, which focuses primarily on predictive systems, and ethical decision making, which focuses primarily on sequential decision making and planning. We explore how each of these settings has articulated its normative concerns, the viability of different techniques for these different settings, and how ideas from each setting may have utility for the other.
... For example, if we create self-driving cars that operate autonomously in traffic, then since driving involves making ethical choices, the self-driving cars need to be able to make ethical decisions (Lin 2015). Or, to use another example that AI researchers often discuss, if we create military robots that use artificial intelligence to make life-and-death decisions in warfare, then those military robots may also need to be able to make ethically informed decisions (Arkin 2009). Machine ethics is the interdisciplinary project of trying to design and build "artificial moral agents": technologies able to make moral decisions (Anderson & Anderson 2011; Wallach & Allen 2010). ...
... Some researchers think that this is not only possible, but that artificial moral agents might become able to make ethical decisions that are superior to those that human beings make. In warfare, for example, the researcher Ronald Arkin (2009) argues that AI-driven technologies might be able to make ethically better decisions than human beings, who may be overcome by emotional reactions brought about by the extreme nature of the situation they are in, a problem that an emotionless AI technology would not have. This same idea, i.e., that AI technologies lack human emotions, has also been used to argue that it is not possible to build artificial moral agents (Coeckelbergh 2010a). ...
This is a short encyclopedia entry on the ethics of artificial intelligence, written for the Encyclopedia of the Philosophy of Law and Social Philosophy, and published here: https://doi.org/10.1007/978-94-007-6730-0_1093-1
... Arkin [6] has done extensive research on the design and implementation of explicit ethical agents for autonomous machines capable of lethal force, e.g., autonomous tanks and robot soldiers. He points out that the military's motivations for the use of these weapons are to avoid harm to human soldiers, to avoid human weaknesses such as fatigue, and to take advantage of machines' superior capabilities to process large amounts of data and to make decisions quickly. ...
... The Data question is especially significant for explicit ethical agents that must rely in part on subsymbolic processing such as facial recognition. • Data: If A is based upon data obtained from sensors or databases, is the data reliable? ...
This paper proposes a novel approach to AI Ethics education using new argument schemes that summarize key ethical considerations for specialized domains such as military and healthcare AI applications. It then describes the use of the schemes in an argument diagramming tool and the results of a formative evaluation.
... This research raised ethical questions around deception, which the authors did acknowledge (Kite-Powell, 2012; Wagner and Arkin, 2010), and it triggered an extended discussion of the capability to insert ethics into autonomous systems. That same year Arkin published a book, strongly related to the subject of this article, titled 'Governing Lethal Behaviour in Autonomous Robots' (Arkin, 2009). Referring to James Canton at the Institute for Global Futures, Arkin states that autonomy for 'armed robots' will happen, leading to the machine hunting, identifying, and authenticating a target and possibly neutralizing or killing it without any human in the decision loop. ...
... Referring to James Canton at the Institute for Global Futures, Arkin states that autonomy for 'armed robots' will happen, leading to the machine hunting, identifying, and authenticating a target and possibly neutralizing or killing it without any human in the decision loop (Arkin, 2009). For the purpose of this article, it is important to note that on the subject of AWS programmable behavior, Arkin always refers to documents such as the LOAC and IHL. This indicates that we can infer from his writings that he does not think the decision on which rules of behavior are to be programmed into the AWS lies with either the military or the system designers, but with international law. ...
Interacting in the perceptual process by artificial means (augmented perception) involves (a) capturing and conveying environmental cues, and (b) designing relevant interfaces toward the human receiver. Expanding communication beyond vision and hearing in the visuocentric paradigm is valuable for a user living with deafblindness, but in the long run also for the many, opening up new informational pathways. Augmented perception, regarded as a process, is highly complex, spanning incommensurable domains: mental and material, natural and man-made, active agency and passive. We deconstruct augmented perception with special emphasis on obtaining a common model in spite of these disparate domains. We show that translation is a proper metaphor for the sequence necessary for achieving (a). We use this deconstruction to design an interface for haptic communication to the receiving human, addressing (b). We hope this work will support future development of new means of communication.
... In fact, this lack of emotion and sensitivity is precisely what some of LAWS' staunchest advocates maintain is their greatest strength. Ronald Arkin, a prominent roboticist working on military autonomous and unmanned weapons systems, argues that though there would likely still be errors and innocent deaths on a battlefield dominated by robots, on the whole these incidents would occur far less often than is currently the case, and moreover would only occur due to genuine mistakes, instead of being intentional acts of revenge or simple disrespect or negligence, as they too regularly seem to be (Arkin 2009, 2010). For example, a report from the United States Surgeon General (US Surgeon General 2006) assessing battlefield ethics of soldiers and marines deployed during Operation Iraqi Freedom found that only 47 percent of soldiers and 38 percent of marines thought that non-combatants should be treated with dignity and respect; roughly 10 percent of soldiers and marines admitted to having damaged or destroyed property when it was not necessary; 7 percent of marines and 4 percent of soldiers admitted to having harmed non-combatants when it was not necessary; and only 55 percent of soldiers and 40 percent of marines said they would report a fellow unit member for injuring or killing an innocent non-combatant, and even fewer said they would report theft from non-combatants, mistreatment of non-combatants, or unnecessary destruction of non-combatants' property. ...
... See, e.g., Arkin 2009; Guetlein 2005 ...
... If principles of just war require the possibility to attribute moral responsibility, yet the use of autonomous weapon systems can undermine this possibility, then, Sparrow concludes, the development and use of such systems must be prohibited (for discussion, see e.g. Wallach & Allen, 2008; Arkin, 2009; Lin et al., 2008; Sharkey, 2010, 2019; Bryson, 2010; Asaro, 2012; Roff, 2013; Sparrow, 2016; Simpson & Müller, 2016; Leveringhaus, 2016; Rosert & Sauer, 2019; Gunkel, 2020; Coeckelbergh, 2020; Taddeo & Blanchard, 2022b; Danaher, 2022). Others have traced questions of responsibility attribution in other domains such as autonomous cars (Hevelke & Nida-Rümelin, 2015; Lin, 2016; Lin et al., 2017; Nyholm & Smids, 2016; Santoni de Sio, 2017; Nyholm, 2018; Sparrow & Howard, 2017) or examined its scope beyond the confines of a particular area of application (for a recent review see Santoni de Sio & Mecacci, 2021; see also Danaher, 2022). ...
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, conducted with participants from the US, Japan, and Germany. We find that (i) people manifest a considerable willingness to hold autonomous systems morally responsible, (ii) partially exculpate human agents when interacting with such systems, and, more generally, (iii) that the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and other positions in the responsibility gap literature.
... There has been some work on the formalization of ethical principles in AI [10]. Previous studies that attempt to integrate norms into AI agents and design formal reasoning systems have focused on: ethical engineering design [12], [27], [28], norms of implementation [15], [24], moral agency [13], [7], mathematical proofs for ethical reasoning [6], logical frameworks for rule-based ethical reasoning [1], [4], [16], reasoning in conflict resolution [22], and inference to apply ethical judgments to scenarios [5]. ...
As automation in artificial intelligence is increasing, we will need to automate a growing amount of ethical decision making. However, ethical decision-making raises novel challenges for engineers, ethicists, and policymakers, who will have to explore new ways to realize this task. The presented work focuses on the development and formalization of models that aim at ensuring correct ethical behaviour of artificial intelligent agents, in a provable way, extending and implementing a logic-based proving calculus based on argumentation reasoning with support and attack arguments. This leads to a formal theoretical framework of ethical competence that could be implemented in artificial intelligent systems in order to best formalize certain parameters of ethical decision-making to ensure safety and justified trust.
... This means the idea of proportionality could be consistently enforced. Arkin further states that the technology will advance to the point where it will be far easier for such systems to comply with IHL than it is for humans (Arkin, 2009). ...
The article questions the compliance of autonomous weapons systems with international humanitarian law (IHL). It seeks to answer this question by analysing the application of the core principles of international humanitarian law to the use of autonomous weapons systems. As part of the discussion on compliance, the article also considers the implications of riskless warfare in which non-human agents are used. The article presupposes that it is actually possible for AWS to comply with IHL in very broad and general terms. However, there is a need for discussion, acceptance, and institutionalization of the interpretation used to classify AWS, as well as expansion of the legal framework to cater to the advanced technology. This interpretation will also include a system for allocating and attributing responsibility for their use. The article demonstrates the legal consequences of developing and employing weapon systems capable of autonomously performing important functions such as target selection and engagement, and the role of IHL and IHRL in regulating the use of these weapons, particularly with respect to human control over individual attacks.
... Many instances of disobedience are considered desirable not because a rejected command violates the issuer's true intent, but rather because the command contravenes larger legal or moral principles, regardless of the intent of the command-issuing agent. For example, ethical reasoning mechanisms have been proposed for hypothetical autonomous military systems to ensure that lethal force is deployed only when authorized by the rules of engagement and laws of war [2]. ...
Recent attention has been brought to robots that "disobey" or so-called "rebel" agents that might reject commands. However, any discussion of autonomous agents that "disobey" risks engaging in a potentially hazardous conflation of simply non-conforming behavior with true disobedience. The goal of this paper is to articulate a sense of what constitutes desirable and true disobedience from autonomous systems. To do this, we begin by discussing what it is not. First, we attempt to disentangle figurative uses of the term "disobedience" from those connotative of deeper senses of agency. We then situate true disobedience as being committed by an agent through an action that presupposes some understanding of the violated instruction or command. Keywords: Machine ethics; Robot obedience/disobedience; Autonomy
... They are able to make decisions in fractions of a second, in situations where a human can no longer decide consciously at all. This is put forward as an argument for leaving moral decisions to machines in particularly precarious situations, for example in war (Arkin 2009). Machine ethics is also significant for cognitive science, since the human being is, on the one hand, the model for the development of intelligent machines that have the capacity for moral action. ...
Abstract
Over the past 40 years, the interplay of AI and biotechnology has revealed far-reaching processes of change in the research design of science and technology, extending beyond biology. This is a development the author describes with the term "Technoresearch". This structural change rests on a new research paradigm, the "algorithmic turn", which, however, does not lead to the end of scientific theory, as is often claimed, but is grounded in a new style of thinking. This style emerges in the transition from classical systems theory (cybernetics) to synergetics (the theory of dynamic complex systems). The new orientation toward complexity, and the data-driven way of dealing with it, is changing the practice and theory of science and technology in the 21st century.
... Some people claim that such combat robots would reduce the loss of innocent lives and commit fewer war crimes, as they would not be subject to human passions such as fear, revenge, or hatred (cf. Arkin, 2009). ...
Recently, a military robot has autonomously taken the decision to attack enemy troops in the Libyan war without waiting for any human intervention. Russia is also suspected of using such kind of robots in the current war against Ukraine. This news has highlighted the possibility of radical changes in war scenarios. Using a Catholic perspective, we will analyze these new challenges, indicating the anthropological and ethical bases that must lead to the prohibition of autonomous “killer robots” and, more generally, to the overcoming of the just war theory. We will also point out the importance of Francis of Assisi, whom the encyclical Fratelli tutti has proposed again as a model for advancing towards a fraternal and pacified humanity.
... As machines can process data, analyze information, and make decisions in some situations in less time than humans, their use is particularly attractive in the context of defense. While Autonomous Weapons Systems (AWS) promise military and strategic advantages [110], [111], they also come with risks [112]. AWS can be defined as AI systems designed to select (i.e., search for or detect) and engage (i.e., use force against) targets without the need for human control or human action after their activation [113, pp.
The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI application that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against malicious use and abuse of AI.
... Of course, the concept of autonomy is a very complex philosophical problem, and a complete analysis of this concept cannot be given within a paper of such limited scope. Accordingly, I will follow Sullins's use of the term and restrict myself to the term "autonomous robot" as it is standard in robotics today: robots that make at least some of the significant decisions about their actions on the basis of their own program (Arkin 2009; Lin et al. 2008). If we understand the concept of a robot in this way, there are four possible positions on the question of whether robots can be moral agents. ...
My main goal in this paper is to conduct a detailed analysis of the moral status of artificial intelligence. I will start by clarifying the notion of moral status, as well as the dichotomy between moral agent and moral patient, which plays a significant role in a vast number of perplexing dilemmas in applied ethics. This clarification is necessary to get a clearer view of the key issues that I intend to address in the paper; more specifically, the questions of (a) whether we can cause harm, in a morally relevant sense, to an intelligent artificial system, and (b) whether an intelligent artificial system can itself act in a way that can be assessed in moral terms.
... It is interesting to note that even for relatively simple systems, perfect control could be unattainable. Any controlled system can be re-designed so that it has a separate external regulator (governor [181]) and a decision-making component. This means that control theory is directly applicable to the control of AGI or even superintelligent systems. ...
The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI cannot be fully controlled. The consequences of uncontrollability of AI are discussed with respect to the future of humanity and research on AI, and AI safety and security.
... Such examples show that the social sciences and the humanities each have very different ways of locating agency, and differ in which social entities (such as animals, natural phenomena, ghosts, or machines) they model as capable of agency in the first place (Lindemann, 2016). The question of locating agency is of particular relevance to HMC when dealing with questions of ethics and law (Arkin, 2009; Bryson, 2010; Wallach, 2008). Is it only humans who possess agency in processes of communication? ...
This chapter describes the study of human-machine communication (HMC) as inherently interdisciplinary. This interdisciplinarity is significant in several ways: When considering interdisciplinarity's scope, there exist narrow forms of correspondence with neighboring disciplines in media and communication studies as well as broader connections with more diverse disciplines such as computer science. In regard to the types of interdisciplinarity, it must be taken into account that HMC already represents an interdisciplinary phenomenon whose investigation requires the methodological and theoretical integration of approaches from different disciplines. When it comes to the goals of interdisciplinarity, HMC aims both at fundamental research (the so-called "epistemological orientation" of interdisciplinarity) and at the application of this research, such as the development of "socio-compatible" communicative AI and communicative robots (the so-called "instrumental orientation" of interdisciplinarity). HMC's requirement for cross-compatible approaches becomes most apparent when one keeps in mind that communicative AI and communicative robots challenge the three crucial foundational concepts of media and communication studies: communication, media, and agency. It is only through an interdisciplinary approach that these concepts can be rethought and purposeful foundations for empirical research can be built.
... Another question is whether using autonomous weapons in war would make wars worse or perhaps less bad. If robots reduce war crimes and crimes in war, the answer may well be positive, and this has been used not only as an argument in favor of these weapons (Arkin 2009; Müller 2016) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably, the main threat is not the use of such weapons in conventional warfare but in asymmetric conflicts or by non-state agents, including criminals. ...
The current state of the art in cognitive robotics, covering the challenges of building AI-powered intelligent robots inspired by natural cognitive systems. A novel approach to building AI-powered intelligent robots takes inspiration from the way natural cognitive systems—in humans, animals, and biological systems—develop intelligence by exploiting the full power of interactions between body and brain, the physical and social environment in which they live, and phylogenetic, developmental, and learning dynamics. This volume reports on the current state of the art in cognitive robotics, offering the first comprehensive coverage of building robots inspired by natural cognitive systems.
Contributors first provide a systematic definition of cognitive robotics and a history of developments in the field. They describe in detail five main approaches: developmental, neuro, evolutionary, swarm, and soft robotics. They go on to consider methodologies and concepts, treating topics that include commonly used cognitive robotics platforms and robot simulators, biomimetic skin as an example of a hardware-based approach, machine-learning methods, and cognitive architecture. Finally, they cover the behavioral and cognitive capabilities of a variety of models, experiments, and applications, looking at issues that range from intrinsic motivation and perception to robot consciousness.
Cognitive Robotics is aimed at an interdisciplinary audience, balancing technical details and examples for the computational reader with theoretical and experimental findings for the empirical scientist.
... i Personal conversation, 2011. ii Arkin, R. 2009. Governing Lethal Behavior in Autonomous Robots. ...
In this essay I will explore an understanding of the potential moral agency of robots, arguing that the key characteristics of physical embodiment, adaptive learning, empathy in action, and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context, other possible options will be rejected as necessary for moral agency, including simplistic notions of intelligence, computational power, and rule-following, complete freedom, a sense of God, and an immaterial soul. I argue that it is likely that such moral machines may be able to be built, and that this does not diminish humanity or human personhood.
... Humans often overlook relevant factors or get confused by complicated interactions of conflicting factors. They also sometimes get overcome by emotions, such as dislike of certain groups or fear during military battles [135]. Some researchers hope that sophisticated machines can avoid these problems and then make better moral judgments and decisions than humans. ...
Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
... "The application of lethal force as a response must be constrained by Laws of War and Rules of Engagements (ROE) before it can be employed by autonomous systems" (Arkin, 2009). ...
Artificial intelligence and technological advancements have led to the development of robots capable of performing various functions. One of the purposes of such robots is to replace human soldiers on battlefields. Killer robots, referred to as "autonomous weapon systems," pose a threat to the principles of human accountability that underpin the international criminal justice system and the current law of war that has arisen to support and enforce it. They pose a challenge to the Law of War's conceptual framework. In the worst-case scenario, they might encourage the development of weapons systems designed specifically to avoid liability for the conduct of war by both governments and individuals. Furthermore, killer robots cannot comply with fundamental law of war principles such as the principle of responsibility. The accountability of autonomous ...
... It would be a synthetic ethics in the double sense of the synthetic method in science and the idea of 'learning by doing', that is to say, a form of ethics that is both applied and practiced, involves judging, and is a form of inquiry, a learning process. Proponents of artificial moral machines recommend applying known but inevitably contested rules and recipes to various new situations, and inputting them as code to determine the robots' behavior [10,50]. Critics of social robotics, like [29,31,38,52], by contrast, seem convinced that they already possess the ethical knowledge that allows them to judge new situations and forms of interaction, sometimes even before they actually arise. ...
Focusing on social robots, this article argues that the form of embodiment or presence in the world of agents, whether natural or artificial, is fundamental to their vulnerability and ability to learn. My goal is to compare two different types of artificial social agents, not on the basis of whether they are more or less “social” or “intelligent”, but on that of the different ways in which they are embodied, or made present in the world. One type may be called ‘true robots’. That is, machines that are three dimensional physical objects, with three required characteristics: individuality, environmental manipulation and mobility in physical space. The other type may be defined as "analytic agents", for example ‘bots’ and ‘apps’, which in social contexts can act in the world only when embedded in complex systems that include heterogeneous technologies. These two ways of being in the world are quite different from each other, and also from the way human persons are present. This difference in ways of embodiment, which is closely related to the agents’ vulnerability and ability to learn, conditions in part the way artificial agents can interact with humans, and therefore it has major consequences for the ethics (and politics) of these technologies.
The focus of this chapter is on some of the ethical and philosophical issues at the intersection of robotics and artificial intelligence (AI) applications in the health care sector and medical assistance in dying (e.g. physician-assisted suicide and euthanasia), including: (1) Is there a role for robotic systems/AI to play in the orchestration or delivery of assisted dying?; (2) Can the use of robotic systems/AI make the orchestration of assisted dying more ethical?; and (3) What insights can be generated in the ethical debate on physician assisted suicide and euthanasia from considering the prospect of robotic systems/AI assisting with the provision of or providing assistance in dying? The prospect of including robotic systems/AI in the context of assisted dying provides opportunity to revisit longstanding philosophical and ethical issues under new light. Indeed, reflecting on these questions may invigorate debate, for example in reconsidering the de-medicalization of assisted dying, reconsidering whether assisted dying is within the proper scope of medicine, and reconsidering which normative approach to the ethics of assisted dying is the most appropriate.
Unmanned systems (UMS) in military applications will often play a role in determining the success or failure of combat missions and thus in determining who lives and dies in times of war. Designers of UMS must therefore consider ethical, as well as operational, requirements and limits when developing UMS. The ethical issues involved in UMS design may be grouped under two broad headings: Building Safe Systems and Designing for the Law of Armed Conflict. This chapter identifies and discusses a number of issues under each of these headings and offers some analysis of the implications of each issue and how it might be addressed.
The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to the ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of "excessiveness". Some possible approaches to Proportionality compliance are then presented, such as restricting AWS operations to environments lacking civilian presence, using AWS in targeted strikes in which proportionality judgments are pre-made by human commanders, and a 'price tag' approach of pre-assigning acceptable collateral damage values to military hardware in conventional attritional warfare. The article argues that applying these three compliance methods would result in AWS achieving acceptable Proportionality compliance levels in many combat environments and scenarios, allowing AWS to perform most key tasks in conventional warfare.
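As a purely illustrative sketch of the 'price tag' approach described above (the class names, numeric values, and decision rule are hypothetical, not taken from the article), an engagement could be gated by comparing the output of a collateral damage estimate against a pre-assigned acceptable value for the targeted class of hardware:

```python
from dataclasses import dataclass

# Hypothetical pre-assigned "price tags": maximum acceptable collateral damage
# (in arbitrary harm units) for destroying a given class of military hardware.
PRICE_TAGS = {
    "main_battle_tank": 2.0,
    "self_propelled_artillery": 3.0,
    "supply_truck": 0.5,
}

@dataclass
class Engagement:
    target_class: str
    estimated_collateral: float  # output of a collateral damage estimation model

def proportionality_permits(e: Engagement) -> bool:
    """Permit the attack only if estimated collateral harm does not exceed
    the pre-assigned acceptable value for this target class."""
    allowed = PRICE_TAGS.get(e.target_class)
    if allowed is None:
        return False  # unknown target class: defer to a human commander
    return e.estimated_collateral <= allowed

print(proportionality_permits(Engagement("main_battle_tank", 1.4)))  # True
print(proportionality_permits(Engagement("supply_truck", 1.4)))      # False
```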
Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine blame for any wrongful killing done by autonomous weapons. Analyzing moral responsibility for autonomous weapons starts by recognizing that although they are capable of causing moral consequences, they are neither praiseworthy nor blameworthy in the moral sense. As such, their military role is that of a tool, albeit a rather sophisticated one, and responsibility for their use is roughly analogous to that of existing “smart” weapons. There will likely be some difficulty in managing these systems as they become more intelligent and more prone to unpredicted behavior, but the moral notion of shared responsibility and the legal notion of command responsibility are sufficient to locate responsibility for their use.
This article reflects on securitization efforts with respect to 'killer robots', known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Although the Campaign is marking exactly a decade of activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda thus far, despite all the resources, means, and support at its disposal. To achieve this objective, we challenge the popular assumption that strong stigmatization is the universally best strategy towards humanitarian disarmament. We investigate the consequences of two specifics of AWS which set them apart from the processes and successes of the campaigns to ban anti-personnel landmines, cluster munitions, and laser-blinding weapons: the complexity of AWS as a distinct weapons category, and the subsequent circumvention of this complexity through the utilization of pop culture, namely science fiction imagery. We particularly focus on two mechanisms through which such distortion has occurred: hybridization and grafting. These provide the conceptual basis and heuristic tools to unpack the paradox of over-securitization: success in broadening the stakeholder base in relation to the first mechanism and deepening the sense of insecurity in relation to the second does not necessarily lead to the achievement of the desired prohibitory norm. In conclusion, we ask whether it is not time for a more epistemically oriented expert debate with a less ambitious, lowest-common-denominator strategy as the preferred model of arms control for such a complex weapons category.
The militarisation of Artificial Intelligence Diplomacy has resulted in the development of heavy weapons that are more powerful than traditional weaponry, fail to distinguish between civilians and combatants, and cause unnecessary suffering. Superpowers and middle powers have made significant investments in digital technologies, resulting in the production of digital weapons that violate international humanitarian law and human rights standards and complicate the achievement of global peace. Armed drones and militarised robots cause unnecessary pain and suffering to helpless civilians. These weapons have been used to combat terrorism but, surprisingly, have not addressed the issues of terrorism that affect post-Cold War international relations. As a result, the use of armed drones is causing more harm than is necessary to achieve the objective of war. There is a call for international artificial intelligence (AI) governance, and a need to understand the effects and serious threats that armed drones pose to international humanitarian law (IHL) as well as to peace processes in international relations and global cooperation. Scholars, policy-makers, human rights activists and peace practitioners should participate more actively in debates about the military application of AI diplomacy, in order to develop effective AI diplomacy rules and regulations. This serves to mitigate the risks and threats armed drones pose to IHL and international human rights standards, which are the foundations of the post-modern world.
Autonomous weapon systems, also known as killer robots, are advanced-technology weapons that draw on artificial intelligence and robotics to select targets and carry out attacks without the need for any human intervention. Autonomous weapon systems offer many advantages on the battlefield. Used preventively in dangerous missions, they avert civilian and military casualties. Acting entirely free of the weaknesses soldiers experience during armed conflict, such as frustration, revenge, anger, and fatigue, they radically change the course of war. Autonomous weapon systems that act without meaningful human control, however, create problems for determining legal and criminal responsibility, and the making of life-and-death decisions by a machine calls the inviolability of human dignity into question. This study examines the problems autonomous weapon systems create in international humanitarian law, with examples from today's armed conflicts. To this end, the international humanitarian law principles of distinction, proportionality, and precaution are discussed in detail. Whether autonomous weapon systems should be banned in line with the Martens Clause is examined in light of the debates in the international arena.
In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.
Written by an award-winning historian of science and technology, Planet in Peril describes the top four mega-dangers facing humankind – climate change, nukes, pandemics, and artificial intelligence. It outlines the solutions that have been tried, and analyzes why they have thus far fallen short. These four existential dangers present a special kind of challenge that urgently requires planet-level responses, yet today's international institutions have so far failed to meet this need. The book lays out a realistic pathway for gradually modifying the United Nations over the coming century so that it can become more effective at coordinating global solutions to humanity's problems. Neither optimistic nor pessimistic, but pragmatic and constructive, the book explores how to move past ideological polarization and global political fragmentation. Unafraid to take intellectual risks, Planet in Peril sketches a plausible roadmap toward a safer, more democratic future for us all.
The laws of war are facing new challenges from emerging technologies and changing methods of warfare, as well as the growth of human rights and international criminal law. International mechanisms of accountability have increased and international criminal law has greater relevance in the calculations of political and military leaders, yet perpetrators often remain at large and the laws of war raise numerous normative, structural and systemic issues and problems. This edited collection brings together leading academic, military and professional experts to examine the key issues for the continuing role and relevance of the laws of war in the twenty-first century. Marking Professor Peter Rowe's contribution to the subject, this book re-examines the purposes of the laws of war and asks whether existing laws found in treaties and customs work to achieve these purposes and, if not, whether they can be fixed by specific reforms or wholesale revision.
The development of remote-controlled or increasingly autonomous machines in military use since the beginning of the 20th century serves as the historical frame of reference for this study. Though the controversy regarding the role of autonomous systems in military contexts has been studied, little interest in its historical perspective has been shown. Until the end of the 20th century, the United States of America was the center of the technological development and manufacture of these weapons systems, and it was there that their evolution was expressed predominantly in popular culture as well as in the political and ethical debate surrounding their application.
The sources drawn upon originate with American Science Fiction in its formative phase (1926–1936), a time when remote-controlled weapons systems were arguably for the first time widely noticed by the general public. The literary texts that were chosen are marked by the aim to display scientific plausibility while still retaining the ability to express ideas, visions and utopian or dystopian concepts of possible futures. They have been analyzed according to the following three criteria:
a) How are humans perceived in contrast to machines, automatons, robots, etc.?
b) What are the characteristics of the relationship between man and machine as drawn in those works of literature?
c) How is this relationship portrayed in scenarios of violence and struggle for power between man and machine?
In the selected texts, machines depicted as the instrument of evil or of an antagonist, and machines displaying hostility to humans, sometimes as the enemy of mankind, are strongly represented. Other relevant motifs include the unrestrained working of the machine-principle, the shaping of the world according to the nature of the machine, and machines standing in clear opposition to the concept of the human being and its qualities. In the minority are narratives that show machines to be man's companion or successor.
Computational methods such as machine learning and especially artificial intelligence will lend weapon systems a new quality compared with existing ones with automated/autonomous functions. To regulate weapon systems with autonomous functions using the tools of arms control, a new approach is necessary, one focused on the human role in decision-making processes. Despite this focus, the enabling technologies involve some specific challenges regarding the scope and verification of regulation. While technology will not solve problems created by the use of technology, it may be able to offer some remedies.
This research explores how robots can be used to protect parties to a conflict and civilians from suicide bombings in conflict areas under international humanitarian law. This paper examines the issue from the perspectives of robotics and international law using a real robot. In this paper, the articles of international humanitarian law are interpreted, and a framework required for robotic systems is proposed. From the perspective of the principles of distinction, proportionality, and precaution, ideas are presented to calculate the legal indicators named "certainty," "effort," "nexus possibility," "necessity," and "collateral damage."
The post-Cold War era is witnessing the replacement of industrial power with information power. The Information Age, which is still evolving, is largely the product of information technology: computers.
This article discusses Indonesia’s role in securing cyberspace, or cyber resilience, at the domestic, bilateral and multilateral levels. Since the establishment of the National Cyber and Crypto Agency (BSSN) in 2017, Indonesia has reported many cyber-attacks in both the government and private spheres. The article aims to identify the roles Indonesia has played in shaping cyber security and resilience at the domestic, bilateral and multilateral levels. It uses a descriptive method based on a literature review of available secondary data. The result is that Indonesia acts as a Protectee, Mediator and Balancer, in accordance with the behavior it displays in each setting, whether domestic, bilateral or multilateral (regional), depending on the dynamic situation. However, this does not alter Indonesia’s position as a country that does not ally itself with other countries, namely its independent and active (Bebas-Aktif) stance.
Although there is no single internationally accepted definition, autonomous weapon systems can be described as systems that can act independently and, in general, can select targets and, when necessary, attack those targets without the need for a human element. Owing to technological progress, demand for these systems for use in armed conflicts is increasing. Their ability to operate where military manpower cannot be used, or where using military manpower would be disadvantageous to the conflict being waged, their potential for further development as technology advances, and their ease of use play an important role in states' preference for them. With this growing interest, the place of autonomous weapon systems in armed conflicts has opened a new field of debate and study in international law. Although autonomous weapon systems are, by their nature, suited to unmanned use, they have so far mostly been used under human control. Their use in armed conflicts has the potential to give rise to various problems and drawbacks under the international law rules governing armed conflicts. The main issues to be examined in this framework include the position of autonomous weapon systems with respect to certain fundamental principles of the law of armed conflict, such as distinction and proportionality, and the question of what level of human control over these weapons renders that control meaningful. This study addresses the debates on autonomous weapon systems and assesses, within international humanitarian law, the importance of meaningful human control for such systems and the levels of human control required for their use to comply with the law of armed conflict. It also discusses whether, considered together with the concept of meaningful human control, the involvement of autonomous weapon systems in armed conflicts is advantageous in general terms.
Autonomous robots are currently being developed for tasks that may require those robots to assume a position of authority over humans. Our work examines the ethical boundaries of human-robot interaction in the context of robot-initiated punishment of humans. We observe that positions of authority often require the ability to punish in order to maintain societal norms. If autonomous robots are to assume roles of authority, they, too, must be capable of punishing individuals who violate norms. This work constructs a discussion of permissible robot behavior, particularly from the perspective of robot-administered punishment, examining the current and future use cases of such technology and applying a consequence-based approach as a starting point for analysis.
It is a truism that, owing to human weaknesses, human soldiers have yet to achieve sufficiently ethical warfare. It is arguable that human soldiers are more likely to breach the Principle of Non-Combatant Immunity, for example, than emotionless smart soldiers. Hence, this paper examines the possibility that integrating ethics into smart soldiers will help address moral challenges in modern warfare. The approach is to develop and deploy smart soldiers enhanced with ethical capabilities. Advocates of this approach think it more realistic to make competent entities (i.e., smart soldiers) morally responsible than to enforce moral responsibility on human soldiers with inherent (moral) limitations. This view seeks a somewhat radical transition from the usual anthropocentric warfare to a robocentric warfare, in the belief that the transition has moral advantages. However, the paper defends the claim that, despite human limitations, the capacity of ethically enhanced smart soldiers for moral sensitivity is artificial and inauthentic. There are significant problems with the three models of programming ethics into smart soldiers, as well as further challenges arising from the absence of emotion as a moral gauge and from the difficulty of apportioning responsibility in the event of a mishap resulting from the actions or omissions of smart soldiers. Among other reasons, the paper regards the replacement of human soldiers as an extreme approach to ethical warfare, one that entails ethical complications outweighing the benefits of the exclusive use of smart soldiers.