Article

Autonomous Weapons and Distributed Responsibility

Authors:
Marcus Schulzke

Abstract

The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context of the military chain of command. The military hierarchy is a system of distributing responsibility between decision makers on different levels and constraining autonomy. If autonomous weapons are employed as agents operating within this system, then responsibility for their actions can be attributed to their creators and their civilian and military superiors.


... Position [58] To analyse US's decision-making and moral justifications on the use of AWS. Position [59] To analyse the meaning of responsibility of autonomous robots. Position [60] To reflect on the legal implications and consequences of AWS in the theatre of war. ...
... This speed advantage is essential in high-stake environments where quick reactions can determine the outcome of engagements [24]. Furthermore, AWS can significantly accelerate the Observe-Orient-Decide-Act decision-making loop, allowing for more efficient execution of missions [58], [59]. By operating at high speeds, AWS can outperform human commanders in coordinating complex military manoeuvres [64]. ...
... The compact size of these sensors allows for widespread deployment and concealment in various environments, providing a strategic edge in pre-engagement reconnaissance missions [53]. Furthermore, autonomous systems can detect enemies, conduct surveillance, and potentially carry weapons for situations requiring lethal force, significantly enhancing military operational capabilities [58], [59], [61]. Opportunity 2 -Training and Awareness: AWS are being driven by the involvement of technical and corporate experts in their design, development, deployment and governance, ensuring the incorporation of cutting-edge technologies and providing a competitive edge in military operations [48]. ...
Preprint
Full-text available
The integration of Autonomous Weapon Systems (AWS) into military operations presents both significant opportunities and challenges. This paper explores the multifaceted nature of trust in AWS, emphasising the necessity of establishing reliable and transparent systems to mitigate risks associated with bias, operational failures, and accountability. Despite advancements in Artificial Intelligence (AI), the trustworthiness of these systems, especially in high-stakes military applications, remains a critical issue. Through a systematic review of existing literature, this research identifies gaps in the understanding of trust dynamics during the development and deployment phases of AWS. It advocates for a collaborative approach that includes technologists, ethicists, and military strategists to address these ongoing challenges. The findings underscore the importance of Human-Machine teaming and enhancing system intelligibility to ensure accountability and adherence to International Humanitarian Law. Ultimately, this paper aims to contribute to the ongoing discourse on the ethical implications of AWS and the imperative for trustworthy AI in defense contexts.
... Multiple authors have argued against direct individual criminal liability for operators or military commanders in the context of LAWS, emphasizing that such liability requires intent or recklessness and a direct causal link between action and outcome ([18]; Dickinson, 2019; Chengeta, [19] pp. 16-27; Crootof, [7] p. 1376; Egeland, [3] p. 106; Saxon, [20] p. 28). Given that neither operators nor commanders are directly involved in the execution... For clarity and completeness, this type of solution theoretically allows advocating for the responsibility of the system itself. ...
... The application of the doctrine as a possible answer to the responsibility gap is not new. It has been defended in the AI ethics and legal literature by Himmelreich [22-24] among others in the context of LAWS. In this section, we argue that such an adaptation might not only make sense from an ethical or legal standpoint but also from an empirical perspective. ...
... In the remainder of this article, we will use the term 'command responsibility' because the article is set in a military context. Authors who believe that the problem of assigning responsibility can be (at least partly) solved by looking at the hierarchical structure in the military include Schulzke [22-24] and Schmitt [25]. For authors who do not believe in its feasibility, see [7, 8, 19, 26, 27]. ...
Article
Full-text available
Artificial intelligence (AI) has found extensive applications to varying degrees across diverse domains, including the possibility of using it within military contexts for making decisions that can have moral consequences. A recurring challenge in this area concerns the allocation of moral responsibility in the case of negative AI-induced outcomes. Some scholars posit the existence of an insurmountable “responsibility gap”, wherein neither the AI system nor the human agents involved can or should be held responsible. Conversely, other scholars dispute the presence of such gaps or propose potential solutions. One solution that frequently emerges in the literature on AI ethics is the concept of command responsibility, wherein human agents may be held responsible because they perform a supervisory role over the (subordinate) AI. In the article we examine the compatibility of command responsibility in light of recent empirical studies and psychological evidence, aiming to anchor discussions in empirical realities rather than relying exclusively on normative arguments. Our argument can be succinctly summarized as follows: (1) while the theoretical foundation of command responsibility appears robust (2) its practical implementation raises significant concerns, (3) yet these concerns alone should not entirely preclude its application (4) they underscore the importance of considering and integrating empirical evidence into ethical discussions.
... One of these concepts is 'distributed (moral) responsibility' (e.g., Floridi 2016; Schulzke 2013; Strasser 2021). According to this concept, responsibility for an AI's action can be attributed to different actors in a given situation. ...
... So, if a machine does not function as it should (e.g. mistakes green buses for tanks, or interprets irrelevant communication as hostile due to language constraints and missing cultural sensitivity), then the developers, or the company respectively, are morally responsible for this malfunction (see, for a comparison to product liability, Asaro 2007; quoted in Schulzke 2013). In this systemic context, they are morally responsible for the normative proper functioning of the machine (development and monitoring process). ...
... Since autonomous robots seem to lack that capability, it follows that they should not be entrusted to make life-and-death decisions and act on them. However, some researchers dispute these arguments and argue that the problem of determining responsibility for autonomous military robots can be solved by addressing it within the context of the military chain of command (e.g., Schulzke, 2013, and Champagne and Tonkens, 2015), or by developing responsibility practices that clearly establish lines of accountability (Noorman & Johnson, 2014, and Noorman, 2014). ...
... So, "the moral blame and accompanying punishment could be placed squarely on a human agent (or agents) who, through her own volition, has traded a part of her freedoms for the prestige of occupying a high-ranking position in a given social hierarchy (…). If no one in willing to accept this responsibility, then they should not deploy autonomous killer robots in the first place" (Champagne & Tonkens, 2015, 136; see also Schulzke 2013). ...
... A number of the authors believe that the problem of assigning responsibility can be (partly) solved by looking at the hierarchical structure in the military (Himmelreich, 2019; Nyholm, 2018; Schmitt, 2012; Schulzke, 2013). As such, Nyholm argues that "When we try to allocate responsibility for any harms or deaths caused by these technologies, we should not focus on theories of individual agency and responsibility for individual agency. ...
... For a clear explanation of the difference between the two views, see (Sander, 2010). The authors that discuss command responsibility in relation to responsibility gaps in LAWS: (Bo et al., 2022, 35-38; Schwarz, 2021; McFarland, 2020, 162-164; Laura A. Dickinson, 2019, 79-81; Margulies, 2019, 413-415; Nyholm, 2018; Saxon, 2016; Crootof, 2016, 1378-1381; Roff, 2014, 357-358; Schulzke, 2013). ...
Article
Full-text available
AI has numerous applications and in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature around the concept of responsibility gaps and different solutions have been devised to close or bridge these gaps. In order to move forward in the research around LAWS and the problem of responsibility, it is important to increase our understanding of the different perspectives and discussions in this debate. This paper attempts to do so by disentangling the various arguments and providing a critical overview. After giving a brief outline of the state of the technology of LAWS, I will review the debates over responsibility gaps using three differentiators: those who believe in the existence of responsibility gaps versus those who do not, those who hold that responsibility gaps constitute a new moral problem versus those who argue they do not, and those who claim that solutions can be successful as opposed to those who believe that it is an unsolvable problem.
... A classic objection in the literature on autonomous weapon systems (AWS) is that if AWS were to violate the law of armed conflict (LOAC), it would be difficult or perhaps even impossible to properly determine who is responsible or should be held to be responsible (Sparrow, 2007; Sharkey, 2007; Pagallo, 2011). AWS would therefore create so-called "responsibility gaps", undermining the ethics and laws of war. This objection, however, has seen a number of rebuttals (Lokhorst and van den Hoven, 2012; Schulzke, 2012; Simpson and Müller, 2015; Müller, 2016; Robillard, 2018), even from those generally sympathetic to its underlying concerns (Leveringhaus, 2016). Yet in virtually all statements of the objection and responses to it, the discussions center around "autonomous weapon systems" as a whole class of entities, without paying heed to the ways different autonomous systems may alter the moral and legal landscape due to variations in their specific design and use. This article seeks to remedy this by providing a taxonomical approach to evaluating responsibility gaps for AWS, showing how differences in the sophistication, deployment, or human ability to intervene on such systems alters the responsibility landscape. ...
... 3 The response to Sparrow provided in (Schulzke, 2012) is general as well, but his focus on the realities of responsibility distribution in military organizations captures an idea very close to that developed here below. Note also that many of the overly general statements about AWS are rooted in a lack of precision about what exactly is taken to be meant by "autonomous weapon systems" (e.g., Sparrow himself discusses a number of simpler autonomous weapons early on in his arguments, but in the core of his objection shifts the discussion to highly advanced agent-like systems). ...
Preprint
Full-text available
One classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as "autonomous weapon systems", and so the objection is too broad. In this article I present a taxonomic approach to the objection, examining a number of systems that would count as AWS under the prevalent definitions provided by the United States Department of Defense and the International Committee of the Red Cross, and I show that for virtually all such systems there is a clear locus of responsibility which presents itself as soon as one pays closer heed to the exact systems in question. In developing these points, I also present a method for dealing with near-future types of AWS which may be thought to create situations where responsibility gaps may still potentially arise.
... Armies anticipating AWS might do so because these machines would lack emotions and other irrelevant considerations during operation, as they would not need skills that are not directly relevant for them to carry out the order given by the commander. Indeed, an AWS with general AI would rather be a drawback than an advantage (Schulzke 2012). ...
... It would be unnecessary to deny that intentions have a crucial role in our everyday moral (and also legal) judgment, but in the military, commanders bear the responsibility, and the burden of punishment, instead of their soldiers due to the chain of command (Schulzke 2012). For this reason, it is questionable whether a soldier on the battlefield has to have the right intention in order to morally justify his or her actions. ...
Article
Full-text available
Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other – usually highly anticipated – AI-technologies, like autonomous vehicles (AVs); whereby even though these machines evoke very similar ethical issues, philosophers’ attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS because almost every argument against them is effective against AVs too. I start with the definition of “AWS.” Then, I arrange my arguments by the Just War Theory (JWT), covering jus ad bellum, jus in bello and jus post bellum problems. Meanwhile, I draw attention to similar problems against other AI-technologies outside the JWT framework. Finally, I address an exception, as addressed by Duncan Purves, Ryan Jenkins and Bradley Strawser, who realized the unjustified double standard, and deliberately tried to construct a special argument which rules out only AWS.
... While learning algorithms are increasingly implemented in organizations with "real world impact" (e.g., crime prediction, cancer diagnosis, court decisions), a much discussed danger of basing organizational decisions on correlations is that it hides biases and flaws commonly attributed to data, as put forward in literature on data work (Cunha & Carugati, 2018; Gitelman, 2013; Pine, 2019). Related to this, it becomes increasingly difficult to unpack what data and statistics are behind algorithmic outputs and it is suggested that with the growing complexity of, for example, machine-learning algorithms and neural networks only a few, highly specialized, professionals will be able to understand their logic (Burke, 2019; Burrell, 2016; Faraj et al., 2018; Schulzke, 2013). In other words, the underlying logic of learning algorithms becomes opaque, which further enlarges the knowledge gap between learning algorithms and the targeted work domain. ...
... By analyzing practices that fill in a gap related to algorithmic technologies, our study also contributes to research on the work needed to explain the blackboxed nature of learning algorithms. Previous research asked who would be accountable for making decisions in the age of learning algorithms (Burke, 2019; Burton et al., 2019; Schulzke, 2013; Von Krogh, 2018). ...
... Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. ...
... Several authors have suggested that responsibility gaps can be handled by distributing responsibility for the acts of an AMA across all those human moral agents involved who are also capable of moral responsibility, like designers, users, investors and other contributors (Adams 2001; Champagne and Tonkens 2013; Singer 2013). Champagne and Tonkens claim that this solution would depend on a human moral agent agreeing to take on this responsibility; an idea developed further is the notion of such voluntarily undertaken responsibility as continuously negotiable between the involved human parties (Lokhorst and van den Hoven 2012; Champagne and Tonkens 2013; Noorman 2014; Schulzke 2013). The implication here seems to be that if no agreement is in place, the responsibility gap prevails. ...
Article
Full-text available
This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This "normative approach" suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the normative approach to AMA.
... Since autonomous robots seem to lack that capability, it follows that they should not be entrusted to make life-and-death decisions and act on them. However, some researchers dispute these arguments and argue that the problem of determining responsibility for autonomous military robots can be solved by addressing it within the context of the military chain of command (e.g., Schulzke, 2013, and Champagne and Tonkens, 2015), or by developing responsibility practices that clearly establish lines of accountability (Noorman & Johnson, 2014, and Noorman, 2014). ...
... If no one is willing to accept this responsibility, then they should not deploy autonomous killer robots in the first place. (Champagne & Tonkens, 2015, 136; see also Schulzke 2013). ...
Chapter
Full-text available
Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some serious ethical questions. One of the most pressing concerns the moral responsibility in case a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal military robots. They start by looking at military commanders because they are the ones with whom responsibility normally lies. The authors argue that this is typically still the case when lethal robots kill wrongly – even if these robots act autonomously. Nonetheless, they next look into the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.
... This includes the notion of command responsibility. For ease of expression, this paper will use phrasing such as 'an agent will attempt to do x' or 'the agent wants to do y'. This should not be read as implying that the agent has any form of sapience, consciousness, or free will. ...
Article
Full-text available
The challenge in deploying Autonomous Weapons Systems ('AWS') is not that they can kill people and destroy objects, but ensuring that they only kill the right people and destroy the right objects. In this paper, we use a hypothetical recently discussed at a military AI conference as a springboard to introduce important dimensions of the 'Alignment Problem' into the discourse concerning AWS. This paper will consider important dimensions of what is known as the Alignment Problem, why it is difficult to specify smart goals for autonomous systems, why intelligent systems can pursue dumb goals, and the legal implications for assurance of AWS. We begin with some preliminary definitions and conceptual analyses. We then outline the Alignment Problem, including introducing the concept of objective functions and rewards. We then turn to an exploration of what the Alignment Problem implies for AWS testing, and why apparently simple solutions may not be effective. From here we discuss the implications that the Alignment Problem has for international law applicable to AWS, addressing legal obligations relating to the responsibility of states to respect and ensure respect for international humanitarian law (IHL) and international human rights law (IHRL).
... Such mechanisms ensure that responsibility can be accurately assigned, whether to human operators, developers, or commanders, thereby maintaining a clear chain of command even in cases where AWS are responsible for lethal actions (Vallor, 2013). Furthermore, as AWS autonomy increases, concerns about a "responsibility gap" arise, where it becomes unclear who is accountable for the system's actions if human oversight is lacking (Schulzke, 2013). ...
Article
Full-text available
This research examines the risks and control measures associated with building trustworthy Autonomous Weapon Systems (AWS), a rapidly evolving technology with various implications for military operations and international security. While AWS present advantages in precision and efficiency, they also imply operational, technical, and ethical challenges. Through a comprehensive analysis of relevant studies, this article identifies key risks inherent in AWS development, including algorithmic biases, unintended engagements, and cyber security vulnerabilities. For these, control measures are proposed to mitigate and avoid them, such as advanced fail-safe mechanisms, multi-layered human oversight protocols, and robust cyber security solutions. Particular attention is given to the role of meaningful human control as a fundamental mechanism for enhancing AWS trustworthiness without compromising operational effectiveness. The findings highlight the need for a dynamic, proactive, multidisciplinary risk-based approach to AWS development as trustworthy systems, emphasising the importance of international collaboration in establishing standardised risk assessment methodologies, trustworthiness benchmarks, and certification processes. Moreover, by systematically analysing both risks and control measures, this research provides a design framework for addressing the complex challenges of building trustworthy AWS in the context of evolving warfare technologies.
... Several others can be interpreted as sympathetic to this claim. In military robotics, some think that artificial systems cannot be autonomous (Hellström, 2013; Schulzke, 2013) and design seems to play some role in why. Similarly, others in machine ethics suggest that artificial agents cannot be autonomous because they are programmed by designers (Bringsjord, 2008; Grodzinsky et al., 2008; Johnson, 2006; Torrance, 2008). ...
Article
Full-text available
Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns ‘design cases’. Design cases are theoretical examples of agents that appear to lack responsibility because they were designed, philosophers use these cases to explore the relationship between design and responsibility. This paper presents several replies to design cases from the responsibility literature and uses those replies to situate the corresponding positions on the design and responsibility of artificial agents in machine ethics. I argue that each reply can support the design of responsible agents. However, each reply also entails different levels of severity in the constraints for the design of responsible agents. I offer a brief discussion of the nature of those constraints, highlighting the challenges respective to each reply. I conclude that designing responsible agents is possible, with the caveat that the difficulty of doing so will vary according to one’s favoured reply to design cases.
... As can be seen from this list, a "distributed responsibility" is suggested here (Floridi, 2016; Funiok, 2005; Schulzke, 2013). Thus, according to the discourse, it is not the publishing houses alone that are to be held responsible, but also a variety of other actors. ...
Conference Paper
Full-text available
On November 2, 2020, Vienna, Austria, was hit by a terrorist attack. Countless photos and videos created by eyewitnesses were shared on social media and picked up by journalistic media. Especially the use of the images by two major Austrian media caused public outcry and an intensive media-ethical debate. This paper focuses on the meta-journalistic discourse on visual communication norms spurred by the visual media coverage of the attack, what was criticized and by whom, and the consequences of the discourse. A variety of actors spoke out, accusing the two media outlets of showing attack images that violate journalistic codes. While there was a broad consensus that showing the images was inappropriate in the journalistic context, the level of reflection was generally low. The discussion was limited to image motives, but their aesthetics and modes of representation were not discussed. Thus, how something is depicted and what implications these depictions have was not adequately addressed.
... Are classic ideas of allocating responsibility for decisions obsolete if the authorship of decisions cannot be clearly attributed to humans anymore (Matthias, 2004)? Do we need models of distributed responsibility (Schulzke, 2013) to be able to comprehend accountability for decisions as products of 'human-algorithm hybrids' (Beckers & Teubner, 2023) or socio-technical 'assemblages' (Ananny & Crawford, 2018)? Questions such as these have been discussed across different disciplines in recent years. ...
Article
Full-text available
Social science research has been concerned for several years with the issue of shifting responsibilities in organisations due to the increased use of data-intensive algorithms. Much of the research to date has focused on the question of who should be held accountable when 'algorithmic decisions' turn out to be discriminatory, erroneous or unfair. From a sociological perspective, it is striking that these debates do not make a clear distinction between responsibility and accountability. In our paper, we draw on this distinction as proposed by the German social systems theorist Niklas Luhmann. We use it to analyse the changes and continuities in organisations related to the use of data-intensive algorithms. We argue that algorithms absorb uncertainty in organisational decision-making and thus can indeed take responsibility but cannot be made accountable for errors. By using algorithms, responsibility is fragmented across people and technology, while assigning accountability becomes highly controversial. This creates new discrepancies between responsibility and accountability, which can be especially consequential for organisations' internal trust and innovation capacities.
Keywords: accountability and responsibility, algorithmic accountability, algorithmic decisions, Niklas Luhmann, organisation theory
... 24 Thus, even this objection from responsibility is not relevant to AWS or AI as such. Moreover, the objection is subject to compelling rebuttals with regards to AWS (e.g., [26,70]; [105, pp. 273-277]; [103, pp. ...
Article
Full-text available
In many debates surrounding autonomous weapon systems (AWS) or AI-enabled platforms in the military, critics present both over- and under-hyped presentations of the capabilities of such systems, creating a risk of derailing critical debates on how best to regulate these in the military. In particular, in this article, I show that critics utilize over-hype to generate fear about the capabilities of such systems or to create objections that do not hold for more realistically viewed platforms, and they use under-hype to sell AWS and military AI short, creating an image of these as far less capable than is in actuality the case. The hyped presentations in this debate also gloss over many core realities of how modern militaries function, what sorts of platforms they are seeking to develop and use, and what actual combatants are likely to be willing to deploy in real warfighting scenarios. More critically for the regulatory debates themselves, hype (both over and under) forces genuine but subtle arguments on issues with autonomous and AI-enabled systems to be sidelined as scholars deal with the more politically divisive topics brought to the fore by critics. Finally, over- and under-hype creates grave risks of skewing the regulatory debates far enough from the realities of AWS and military AI development and deployment that central state actors may lose willingness to support any eventual treaties established. Thus, in their fervor to generate objections and force rapid regulation of AWS and military AI, critics risk alienating those key players most necessary for such regulation to be globally meaningful and effective.
... An extreme case would be the responsibility gap for autonomous weapons. Weapon systems can select and harm targets without human intervention, which makes it challenging to determine who should be accountable for unintended consequences (Schulzke, 2013). Despite ongoing discussions about the (non-)existence and the positive/negative sides of responsibility gaps (Königs, 2022; Munch, Mainz, & Bjerring, 2023; Tigard, 2021), here we aim to empirically demonstrate why responsibility gaps should be bridged from a psychological and motivational perspective. ...
Article
Full-text available
In the last decade, the ambiguity and difficulty of responsibility attribution to AI and human stakeholders (i.e., responsibility gaps) has been increasingly relevant and discussed in extreme cases (e.g., autonomous weapons). On top of related philosophical debates, the current research provides empirical evidence on the importance of bridging responsibility gaps from a psychological and motivational perspective. In three pre-registered studies (N = 1259), we examined moral judgments in hybrid moral situations, where both a human and an AI were involved as moral actors and arguably responsible for a moral consequence. We found that people consistently showed a self-interest bias in the evaluation of hybrid transgressions, such that they judged the human actors more leniently when they were depicted as themselves (vs. others; Studies 1 and 2) and ingroup (vs. outgroup; Study 3) members. Moreover, this bias did not necessarily emerge when moral actors caused positive (instead of negative) moral consequences (Study 2), and could be accounted for by the flexible responsibility attribution to AI (i.e., ascribing more responsibility to AI when judging the self rather than others; Studies 1 and 2). The findings suggest that people may dynamically exploit the “moral wiggle room” in hybrid moral situations and reason about AI’s responsibility to serve their self-interest.
... 413-415); Nyholm (2018); Saxon (2016); Crootof (2016, pp. 1378-1381); Roff (2014, pp. 357-358); Schulzke (2013). Since the doctrine also applies to civilian leaders, the generic expression "superior responsibility" is often preferred. ...
Article
Full-text available
The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility leads to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the doctrine of command responsibility could provide a solution by applying the notion analogously to LAWS. A fundamental condition underpinning the doctrine’s application is the control requirement, stipulating that a superior must exert some degree of control over subordinates. The aim of this article is to provide an in-depth analysis of this control condition and assess whether it leads to the impossibility of applying the doctrine of command responsibility to LAWS. To this end, the first section briefly introduces the topic of LAWS and responsibility gaps. The subsequent section provides a concise overview of the doctrine itself and the conditions typically necessitated for its application. In the third section, a comprehensive scrutiny of the control requirement is undertaken through examination of key case law, examining how the concept has been interpreted. Finally, the fourth section delves into the evaluation of commanders’ potential to exert effective control over their (non-human) subordinates. Based on this, the feasibility of considering command responsibility as a viable solution is assessed, aiming to determine whether its application should be prima facie excluded or warrants further exploration.
... This makes it difficult to determine the existence and scope of any individual's involvement. To complicate matters further, software is rarely developed as a single monolithic structure. Instead, many packages are usually assembled together to create a final product, with different components developed in different countries, at different times, and sometimes even for entirely different end uses. ...
Conference Paper
Full-text available
As a result of the changing nature of armed conflict, it is clear that artificial intelligence technologies have begun to be actively used in armed conflicts. Autonomous weapon systems, which can select and destroy targets through machine learning and artificial intelligence technologies without the need for human intervention, give rise to an epistemic rupture between technology and law. Article 25(1) of the Statute of the International Criminal Court limits the Court's jurisdiction to natural persons, and establishing individual criminal responsibility is a fundamental principle of international criminal law. However, artificial intelligence systems are far from satisfying the mental element (mens rea) and the material element (actus reus) of a crime. It is therefore unclear to whom responsibility should be attributed for war crimes committed by autonomous weapon systems. Potentially, responsibility could be attributed to operators, programmers, manufacturers, states, and commanders. Unlike operators, who are responsible for using autonomous systems during armed conflict, programmers play an important role in determining the behaviour of autonomous systems through the code they create. The reason this paper focuses on the responsibility of programmers is the need to attribute responsibility for the violations of international law caused by these weapon systems to the programmers who, through the code they write, direct autonomous weapon systems that lack free will of their own. The main research question discussed in this paper is whether programmers can be characterised as perpetrators of crimes committed by artificial intelligence within the scope of individual criminal responsibility regulated under Article 25 of the Rome Statute. To this end, the doctrine of joint criminal enterprise in international criminal law will be examined.
... An extreme case would be the responsibility gap for autonomous weapons. Weapon systems can select and harm targets without human intervention, which makes it challenging to determine who should be accountable for unintended consequences (Schulzke, 2013). Despite ongoing discussions about the (non-)existence and the positive/negative sides of responsibility gaps (Königs, 2022; Munch et al., 2023; Tigard, 2021), here we aim to empirically demonstrate why responsibility gaps should be bridged from a psychological and motivational perspective. ...
Preprint
Full-text available
In the last decade, the ambiguity and difficulty of responsibility attribution to AI and human stakeholders (i.e., responsibility gaps) has been increasingly relevant and discussed in extreme cases (e.g., autonomous weapons). On top of related philosophical debates, the current research provides empirical evidence on the importance of bridging responsibility gaps from a psychological and motivational perspective. In three pre-registered studies (N = 1259), people judged themselves (versus others; Studies 1 and 2) and ingroup (versus outgroup; Study 3) members more leniently for identical transgressions. Moreover, such self-interest bias in hybrid transgressions was accounted for by the flexible responsibility attribution to AI (i.e., ascribing more responsibility to AI when judging the self rather than others; Studies 1 and 2). The findings suggest that people may dynamically exploit the moral “wiggle room” in hybrid moral situations and reason about AI's responsibility to serve their self-interest.
... To be a fitting bearer of ... See List and Pettit 2011; Huebner 2014; Tollefsen 2015; Epstein 2017; Strohmaier 2020 for examples of those following this strategy. Schulzke (2012) has also argued for a notion of distributed responsibility that he contends can close the responsibility gap supposedly opened by AMS. My notion differs from his in that responsibility is necessarily borne not by the members of the collective, but that the collective can be a bearer of responsibility itself. ...
Article
Full-text available
The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most likely be deployed. This will allow us to move beyond the solely individualist understandings of responsibility at work in most treatments of these cases, toward one that includes collective responsibility. This allows us to attribute collective responsibility to the collectives of which the AMS form a part, and to account for the distribution of burdens that follows from this attribution. I argue that this expansion of our responsibility practices will close at least some otherwise intractable techno-responsibility gaps.
... A classic objection in the literature on autonomous weapon systems (AWS) is that if AWS were to violate the law of armed conflict (LOAC), it would be difficult or perhaps even impossible to properly determine who is responsible or should be held to be responsible (Sparrow, 2007; Sharkey, 2007; Pagallo, 2011). AWS would therefore create so-called "responsibility gaps", undermining the ethics and laws of war. This objection, however, has seen a number of rebuttals (Lokhorst & van den Hoven, 2012; Schulzke, 2012; Simpson & Müller, 2015; Müller, 2016; Robillard, 2018), even from those generally sympathetic to its underlying concerns (Leveringhaus, 2016). Yet in virtually all statements of the objection and responses to it, the discussions center around "autonomous weapon systems" as a whole class of entities, without paying heed to the ways different autonomous systems may alter the moral and legal landscape due to variations in their specific design and use. This article seeks to remedy this by providing a taxonomical approach to evaluating responsibility gaps for AWS, showing how differences in the sophistication, deployment, or human ability to intervene on such systems alters the responsibility landscape. ...
Article
Full-text available
A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon systems”, and so the objection is too broad. In this article I present a taxonomic approach to the objection, examining a number of systems that would count as AWS under the prevalent definitions provided by the United States Department of Defense and the International Committee of the Red Cross, and I show that for virtually all such systems there is a clear locus of responsibility which presents itself as soon as one focuses on specific systems, rather than general notions of AWS. In developing these points, I also suggest a method for dealing with near-future types of AWS which may be thought to create situations where responsibility gaps can still arise. The main purpose of the arguments is, however, not to show that responsibility gaps do not exist or can be closed where they do exist. Rather, it is to highlight that any arguments surrounding AWS must be made with reference to specific weapon platforms imbued with specific abilities, subject to specific limitations, and deployed to specific times and places for specific purposes. More succinctly, the arguments show that we cannot and should not aim to treat AWS as if all of these shared all morally relevant features, but instead on a case-by-case basis. Thus, we must contend with the realities of weapons development and deployment, and tailor our arguments and conclusions to those realities, and with an eye to what facts obtain for particular systems fulfilling particular combat roles.
... Their emphasis is on AWS deployment and its deliberate transfer of human killing decision-making power to a machine. Though the control to initiate lethal measures is held by humans, human involvement is limited (Schulzke, 2013). This is the defining characteristic of AWS. ...
Article
Full-text available
The article questions the compliance of autonomous weapons systems with international humanitarian law (IHL). It seeks to answer this question by analysing the application of the core principles of international humanitarian law with regard to the use of autonomous weapons systems. As part of the discussion on compliance, the article also considers the implications of riskless warfare where non-human agents are used. The article presupposes that it is actually possible for AWS to comply with IHL in very broad and general terms. However, there is a need for discussion, acceptance, and institutionalization of the interpretation for classification of AWS as well as expansion of the legal framework to cater to the advanced technology. This interpretation will also include a system for allocating and attributing responsibility for their use. The article's results will demonstrate the legal consequences of developing and employing weapon systems capable of performing important functions like target selection and engagement autonomously and the role of IHL and IHRL in regulating the use of these weapons, particularly in human control over individual assaults.
... Still others have endorsed an intermediate position, arguing that gaps exist but that they are narrower than usually thought (Nyholm 2020, Chap. 3; Schulzke 2013; Simpson and Müller 2016). ...
Article
Full-text available
Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate a view of taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused.
... Much of the dispute around the responsibility gap problem has concentrated on AWS and AV (e.g., Burri, 2017; Danaher, 2016; Hevelke & Nida-Rümelin, 2015; Matthias, 2004; Nyholm, 2018; Roff, 2013; Schulzke, 2013; Sparrow, 2007; Vladeck, 2014). But nothing singles out AWS or AV as particularly relevant. ...
Article
Full-text available
The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate the conceptual choices we have, in the light of a systematic understanding of why the concept is important in the first place—in short, the way forward is to engage in conceptual engineering. The paper then illustrates what approaching the responsibility gap problem as a conceptual engineering problem looks like. It outlines argumentative pathways out of the responsibility gap problem and relates these to existing contributions to the dispute.
... Since the publication of an executive order by the US Department of Defense on AWS (2012), there has been active and sustained consideration of the ethics of AWS by multiple actors, including policy-makers, roboticists, academics, civil society organizations, and state actors, particularly at the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts related to emerging technologies in the area of lethal autonomous weapon systems (LAWS). The use of AWS poses a number of pressing ethical questions, and the debate has taken place along a number of fronts accordingly, from the locus of responsibility if or when the use of AWS goes wrong (Sparrow 2007; Schulzke 2013; Neha Jain 2016; Himmelreich 2019; Taddeo and Blanchard Forthcoming), to the loss of dignity implied by the decision to delegate the decision to kill to artificial agents (Birnbacher 2016; Heyns 2017; Sharkey 2019), to the nature of meaningful human control over AWS (Amoroso and Tamburrini 2020). ...
Article
Full-text available
In this article, we focus on the scholarly and policy debate on autonomous weapon systems (AWS) and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs for going to war (proportionality) or by providing a propagandistic value (last resort). We argue that whilst these objections offer pressing concerns in their own right, they suffer from important limitations: they overlook the difficulties of calculating ad bellum proportionality; confuse the concept of proportionality of effects with the precision of weapon systems; disregard the ever-changing nature of war and of its ethical implications; mistake the moral obligation imposed by the principle of last resort for the impact that AWS may have on the political decision to resort to war. Our analysis does not entail that AWS are acceptable or justifiable, but it shows that ad bellum principles are not the best set of ethical principles for tackling the ethical problems raised by AWS; and that developing adequate understanding of the transformations that the use of AWS poses to the nature of war itself is a necessary, preliminary requirement to any ethical analysis of the use of these weapons.
... There is a related concern that these systems are given too much autonomy, especially as they cannot be questioned about their actions in the case of a mistake being made. An extreme example is autonomous weapons mistakenly firing on a civilian: can the validity of the action be assessed when it is not possible to interrogate the rationale that underpinned it (Russell et al., 2015; Schulzke, 2013)? ...
Article
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response we formulate key questions for explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across disciplines working on Explainable AI.
... I would like to mention that one can deal with this issue without promoting the need of attributing moral responsibility to AAs. See, for instance, Schulzke 2013 and Champagne & Tonkens 2015. ... [that is, morally acceptable] behavior we thereby inculcate. ...
Article
Full-text available
It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to show that the Extension Argument should overcome especially strong ethical considerations; moreover, its epistemological grounds are not too solid, partly because the justifications of its premises are in conflict.
... We mean that such expansion has not changed the standards for responsibility; expansion occurs when we realize that some other entities in fact meet the standards for responsibility that already exist. For examples of other discussions of morally responsible AI, see Beard 2014; Floridi and Sanders 2004; Gunkel 2017; Himma 2009; Johnson 2006; Johnson and Miller 2008; Schulzke 2013; Stahl 2006; Sullins 2006. This stands in contrast to views like Floridi and Sanders (2004). ...
Article
Full-text available
As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is either the only one worthy of consideration or the obviously correct approach, but we think it is preferable to trying to marry fundamentally different ideas of moral responsibility (i.e. one for AI, one for humans) into a single cohesive account. Under a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes—reactive attitudes—and determine under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.
... Sparrow argues that it is not possible to attribute responsibility to any of the players when it comes to military robots (Sparrow 2007), whereas other authors have suggested a way of attributing responsibility (e.g., Schulzke 2013). Because of the risks and moral dilemmas involved in military robots, some people, including Stephen Hawking and Elon Musk, have called for a ban on 'killer robots'. https://www.technologyreview.com/s/539876/military-robotsarmed-but-how-dangerous/ ...
... or the user (such as in the case of neglect that leads to a failure), or even the machine itself. While each has its pros, cons and limitations, the discussion is too complex to attempt to resolve in this paper. ...
Chapter
Full-text available
Common Article 1 of the Geneva Conventions requires that states ‘respect and ensure respect for’ the Geneva Conventions ‘in all circumstances’. In the new 2016 Commentary to the Convention, the existence of not only a negative obligation, but also a positive obligation of third countries to a conflict to prevent violations was confirmed. Hence, third countries must do everything ‘reasonably in their power to prevent and bring such violations to an end’. The use of autonomous weapons systems (AWS) is imminent in the future, as demonstrated by the Pentagon committing to spend $2 billion on research, with similar research programmes taking place in other countries. The buying and selling of these AWS is an equally impending part of the future. Consequently, inevitably a state that is buying or being supplied with AWS will use them in a conflict. Therefore, suppliers of such systems will have to comply with the aforementioned positive obligation. This paper will examine the positive obligation’s impact on the state supplying AWS to a conflict. This includes the question of whether it will be their responsibility at the manufacturing stage to ensure that the system cannot violate the Geneva Conventions and – because autonomous systems are somewhat uncontrollable and unpredictable as they will also learn rather than only carrying out pre-programmed commands – whether the supplying state will be obligated to maintain a permanent tether to the supplied AWS to monitor them. The implications of tethering the supplied AWS may go well beyond ensuring compliance with international humanitarian law (IHL), and may include multiplying the leverage of the supplying state by turning the systems into ‘cyber mercenaries’.
... In response, it has been argued that the actions of some autonomous machines, particularly autonomous weapons, are constrained by the hierarchical structure of the military and the (implicit) agreements between the State, its citizens, and its military force. Hence, existing social structures fill the alleged responsibility gap (Galliott 2015; Leveringhaus 2016; Schulzke 2013). Some others argue that although autonomous weapons do not create a responsibility gap, they do create a related gap, namely, a 'blameworthiness' gap (Simpson and Müller 2016). ...
Article
Full-text available
Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the larger socio-technical systems in which autonomous machines operate. Secondly, after defining the preliminary condition of responsibility attribution and demonstrating how ethics settings enable humans to exert control over the outcomes of morally significant incidents, it is shown that ethics settings narrow the responsibility gap.
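To make the mechanism described in this abstract concrete, the following is a minimal, hypothetical sketch of a customizable ethics setting for an autonomous vehicle (the class, field, and function names are invented for illustration and are not taken from the cited paper; a real system would involve far richer inputs and regulatory constraints):

from dataclasses import dataclass

@dataclass
class EthicsSettings:
    # User-adjustable weight: 0.0 = give full priority to minimising harm to others,
    # 1.0 = give full priority to minimising harm to the vehicle's own occupants.
    occupant_priority: float = 0.5
    # A hard limit the user cannot override, fixed by the manufacturer or regulator.
    max_collision_speed_kmh: float = 30.0

def choose_manoeuvre(options, settings):
    """Pick the candidate manoeuvre with the lowest weighted expected harm.

    Each option is a dict with estimated 'harm_to_occupants', 'harm_to_others'
    and 'collision_speed_kmh'; options exceeding the hard limit are filtered out.
    """
    admissible = [o for o in options
                  if o["collision_speed_kmh"] <= settings.max_collision_speed_kmh]
    if not admissible:
        admissible = options  # if nothing is admissible, fall back to the least bad option
    return min(
        admissible,
        key=lambda o: settings.occupant_priority * o["harm_to_occupants"]
        + (1.0 - settings.occupant_priority) * o["harm_to_others"],
    )

On this picture, the morally significant trade-off is made explicit and traceable to a human choice (the value of occupant_priority), which is the sense in which the abstract claims that ethics settings enable human control and thereby narrow the responsibility gap.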
... Moor (2006) provides a philosophical discussion of how developers tend to evaluate the performance of tools based on whether they accomplish what they were designed for; after the technology matures, these norms become second nature (i.e., ethics derived from their human developers). Other prominent ethical issues in the AI domain that are inherently carried over to other domains are: undesirable uses of AI (Schulzke, 2013), loss of accountability (Beiker, 2012; Floridi and Taddeo, 2016), and machine ethics (Anderson et al., 2005). All of these constitute an active, and rapidly evolving, area of research that continues as the adoption of AI methods increases. ...
Article
Full-text available
Nuclear technology industries have increased their interest in using data-driven methods to improve the safety, reliability, and availability of assets. To do so, it is important to understand the fundamentals of both disciplines in order to effectively develop and deploy such systems. This survey presents an overview of the fundamentals of artificial intelligence and the state of development of learning-based methods in nuclear science and engineering to identify the risks and opportunities of applying such methods to nuclear applications. This paper focuses on three key subareas related to safety and decision-making: reactor health monitoring, radiation detection, and optimization. The principles of learning-based methods in these applications are explained and recent studies are explored. Furthermore, as these methods have become more practical during the past decade, it is foreseen that the popularity of learning-based methods in nuclear science and technology will increase; consequently, understanding the benefits and barriers of implementing such methodologies can help create better research plans and identify project risks and opportunities.
... Here, we perceive that the governance of emerging technologies often faces an institutional void intertwined with distributed responsibility in hybrid networks of relevant institutions (Sclove, 1995; Wetmore, 2004). Following the ongoing discussion in the domain of technology ethics related to the responsibility gap for actions of learning automata, we recognize that responsibilities are formed during often unstructured negotiations between different groups of actants, such as designers, legislators, and users (Felt et al., 2017; Noorman, 2014; Schulzke, 2013; van de Poel, 2011). Furthermore, with the initial premise that SDV technology is a complex artefact, we recognize a potentially wider web of governance actants, beyond the human ones. ...
Chapter
Full-text available
As an emerging technology, the potential deployment of self-driving vehicles (SDVs) in cities is accompanied by significant uncertainties and anticipated consequences, requiring responsible governance of innovation processes. Despite a growing number of studies on policies and governance arrangements for managing the introduction of SDVs, there is a gap in understanding of country-specific governance strategies and approaches. This chapter addresses this gap by presenting a comparative analysis of SDV-related policy documents in Finland, the UK, and Germany, three countries which are actively seeking to promote the introduction of SDVs and which have distinct administrative traditions. Our analytical framework is based on a set of premises about technology as a complex socio-technical phenomenon, operationalized using the concepts of governance cultures and sociotechnical imaginaries. Our comparative policy document analysis focuses on the assumed roles for SDV technology, the identified domains and mechanisms of governance, and the actors assumed to be responsible for steering the development process. The results highlight similarities in pro-automation values across the three countries, while also uncovering important differences outside the domain of traditional transport policy instruments. In addition, the results identify different types of potential technological determinism, which could restrict opportunities for responsiveness and divergent visions of mobility futures in Europe. Concluding with a warning against further depoliticization of technological development and a dominant focus on economic growth, we identify several necessary directions for further developing governance and experimentation processes.
... Some advocacy groups call it an 'accountability gap'. Responsibility may lie with developers (Lokhorst and van den Hoven 2011), politicians (Steinhoff 2013), or the AWS itself (Hellström 2012; Burri 2017, p. 73). Responsibility might be shared (Schulzke 2013; Robillard 2018), or 'a new kind of ... responsibility' might be required (Pagallo 2011, p. 353). ...
Article
Full-text available
Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.
Chapter
Full-text available
This book discusses the digitalization, robotization, and automation of society and of the economy and the use of artificial intelligence from an ethical perspective. After an introduction on the correlation between morality and technology and an assessment of the moral capability of technologies, the book introduces ethical principles for evaluating the digital transformation and the use of artificial intelligence. Subsequently, the digital transformation and its chances and challenges are analyzed from an ethical standpoint. Finally, ethical approaches addressing the challenges are developed. One of the research focuses of Peter G. Kirchschlaeger (Full Professor of Theological Ethics and Director of the Institute of Social Ethics ISE at the Department of Theology of the University of Lucerne; previously Visiting Fellow at Yale University) is the ethics of digitalization, robotization, automation, and artificial intelligence.
Article
Full-text available
This study analyses whether targeted killing by drones is inherently consistent with International Humanitarian Law (IHL) principles. Despite the commonly held negative perception of the practice, this study contends that targeted killing can align with IHL, because the targeted killing method of drone strikes offers a unique advantage over other forms of attack in its capacity to accord with IHL principles. However, the use of autonomous drones poses a significant risk to IHL and is likely to violate international obligations. This study argues that autonomous drones may be unable to analyze data accurately and extract valuable insights, which could cause them to face difficulties in maintaining the necessary balance between civilian harm and anticipated military advantage. As a result, it is argued that autonomous drones are unable to adhere to IHL principles, particularly the principle of proportionality. The study examines the attribution issue of autonomous drones and proposes that they should be regarded as agents of the State, making their actions attributable to the State.
Book
Full-text available
Known colloquially as a killer robot, an autonomous weapon system (AWS) is a robotic weapon. Upon activation, it can decide for itself when and against whom to use force enough to kill. This dissertation will address the issues posed by AWS. The focus will be on AWS that do not feature ‘meaningful human control’ during times of peace and armed conflict. Thus, unless otherwise stated, in this dissertation all AWS discussed will be those that do not feature meaningful human control. There are numerous benefits to AWS. For example, this technology has the potential to save the lives of soldiers charged with menial, dangerous tasks. Furthermore, AWS do not tire, become angry or frustrated, and so on. Consequently, civilian lives may be saved by their use also. Additionally, AWS leave a digital footprint that can effectively track events and help bring criminals to justice, and an AWS cannot wilfully commit a crime itself. Nonetheless, AWS may make going to war far too easy, and they pose a severe risk to human rights, including the right to life and dignity and the right to a remedy for a victim. The use of force is a key concern. Do AWS comply with international regulations concerning the use of force? Is the technology, a machine with the power of life and death over human beings, compatible with the right to dignity? A gap in accountability may be created in particular by AWS that do not feature meaningful human control, and this could then impact the rights of victims to seek the protection of international law. The legal duty of states under Article 36 of Additional Protocol I to the Geneva Conventions to review new weapons will be investigated in this dissertation to identify a suitable legal reply to AWS. This duty will also be examined to assess to what extent AWS align with recognised standards. According to Article 36, new weapons must be assessed to identify whether they are acceptable in relation to several standards, including the human rights system, and whether they result in needless suffering. To begin, this dissertation asserts that AWS that are fully autonomous or have no meaningful human control are not, in fact, strictly weapons. These so-called ‘robot combatants’ should be dealt with carefully by the international community. After the elements of Article 36 are understood in detail, it is proposed here that it is appropriate to accept AWS that do not feature meaningful human control. Rules of International Humanitarian Law, including the precaution, distinction, and proportionality rules, are also used to examine AWS. Given that these rules were written to apply to humans and not to machines, which by their very nature cannot exert human judgement, machines will typically fail to satisfy the rules. In addition, the limits of the technology as it exists in the present day and the vague definitions of IHL terms mean that these definitions cannot be transformed into computer code. In addition, the gap in responsibility created by AWS has the potential to have a negative impact on the rights of victims to pursue a remedy due to the question over who should be held accountable for the actions of AWS. The different types of accountability acknowledged in international law, including command responsibility, corporate, individual and state responsibility, are reviewed in relation to the difficulties posed by AWS.
This discussion investigates current proposals for how to resolve these difficulties, including the concept of split responsibility and the argument that command responsibility can be applied to AWS. However, these solutions are found to be impracticable and defective. This dissertation supports the findings of scholars who argue that meaningful human control can resolve the difficulties associated with AWS. However, international law offers no definition of this term, so jurisprudence concerning the concept of ‘control’ as a means of determining accountability is used to inform a definition in this dissertation. Tests, which include the strict control test and the effective control test, are discussed to examine ideas around ‘dependence’ and ‘control’, which are central to accountability. It is concluded that meaningful human control over a system of weapons can only exist when a human being is responsible for the functions of the system that relate to the selection of a kill target and the decision to execute an action. That is, human input is required for the completion of the most important functions of a weapons system. If that input is absent, the system should be incapable of carrying out these functions.
Article
Full-text available
Technological developments have long been an important determinant of social change and development. Diplomacy, as an important instrument of interaction between communities, is therefore also affected by technological developments. It is accepted that diplomacy has been influenced by technological developments throughout history and has been transformed accordingly. From the spread of the railway to the invention of the telegraph, diplomacy is thought to have been affected by such technological developments and to have evolved together with these technologies. In this light, it is a well-founded assumption that artificial intelligence, regarded as one of the most important technologies of today's world, will also affect diplomacy. This article addresses the effects of artificial intelligence technology on diplomacy. How diplomacy is taking shape in the age of artificial intelligence is examined under three sub-headings: artificial intelligence as a diplomatic issue, artificial intelligence as a factor shaping the environment in which diplomacy is conducted, and artificial intelligence as a diplomatic tool.
Article
Full-text available
In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility that is attributed to individuals in a justified and fair way and that is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful moral responsibility can only be discharged by human agents who are willing to take a moral gambit: they decide to design/develop/deploy AWS despite the uncertainty about the effects an AWS may produce, hoping that unintended, unwanted, or unforeseen outcomes never occur, but also accepting that they will be held responsible if such outcomes do occur. We argue that, while a moral gambit is permissible for the use of non-lethal AWS, this is not the case for the actions of lethal autonomous weapon systems.
Chapter
Full-text available
On November 2, 2020, Vienna, Austria, was hit by a terrorist attack. Countless photos and videos created by eyewitnesses were shared on social media and picked up by journalistic media. Especially the use of the images by two major Austrian media outlets caused public outcry and an intensive media-ethical debate. The article focuses on the meta-journalistic discourse on visual communication norms spurred by the visual media coverage of the attack, what was criticized and by whom, and the consequences of the discourse. A variety of actors spoke out, accusing the two media outlets of showing attack images that violate journalistic codes. While there was a broad consensus among the discussants that showing the images was inappropriate in the journalistic context, the level of reflection was low. The discussion was limited to image motives, but the journalists and covered actors did not discuss the reasons for not publishing these visuals, the visuals’ aesthetics and modes of representation. Thus, how something is depicted was excluded from the debate. In their contribution "Nobody has to show these videos! The media-ethics discourse on the visual coverage of the 2020 terrorist attack in Vienna", Katharina Lobinger and Cornelia Brantner explore the hitherto limited complexity of the discussion about the use of (audio-)visual documents in the context of violence and terror in media reporting. On November 2, 2020, a terrorist attack was carried out in the Austrian capital, Vienna. Countless photos and videos created by eyewitnesses were shared on social media and picked up by journalistic media. In particular, the use of the images by two major Austrian media companies led to a public outcry and an intensive media-ethics debate. In their contribution, the authors concentrate on the meta-journalistic discourse on norms of visual communication triggered by the visual media coverage of the attack, on what was criticized and by whom, and on the consequences of this discourse. As the article shows, a large number of actors spoke up and accused the two media outlets of violating journalistic codes. Yet although there was broad consensus among the discussants that showing the images was inappropriate in a journalistic context, the degree of reflection was low. As Lobinger and Brantner make clear, the discussion was largely limited to the motifs of the images. The journalists and the actors covered, however, paid little attention to the aesthetics of the images, the modes of representation, or the reasons why the images should not have been published. The question of how something is depicted was left out entirely. The authors thereby make an important contribution to the discussion of the presence and significance of visual competencies in journalism.
Article
Full-text available
The advent of autonomous weapons brings intriguing opportunities and significant ethical dilemmas. This article examines how increasing weapon autonomy affects approval of military strikes resulting in collateral damage, perception of their ethicality, and blame attribution for civilian fatalities. In our experimental survey of U.S. citizens, we presented participants with scenarios describing a military strike with the employment of weapon systems with different degrees of autonomy. The results show that as weapon autonomy increases, the approval and perceived ethicality of a military strike decrease. However, the level of blame towards commanders and operators involved in the strike remains constant regardless of the degree of autonomy. Our findings suggest that public attitudes to military strikes are, to an extent, dependent on the level of weapon autonomy. Yet, in the eyes of ordinary citizens, this does not take away the moral responsibility for collateral damage from human entities as the ultimate "moral agents". Public Significance Statement: This study examines differences in public perceptions of autonomous weapons, one of the key military innovations of our time. We demonstrate that the public perceives the use of fully autonomous weapon systems as more ethically problematic than systems with lower autonomy.
Article
Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, at best, morally problematic. However, the article also argues that anti-natalists can find the production of some possible artificially conscious AI permissible. Thus, the creation of potentially conscious AI could be accepted by both friends and foes of anti-natalism.
Article
Full-text available
In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the retributive approach to machine crime in favor of prioritizing restitution. I argue that this shift better conforms to what justice demands when sophisticated artificial agents of uncertain moral status are concerned.
Book
When is war just? What does justice require? If we lack a commonly-accepted understanding of justice – and thus of just war – what answers can we find in the intellectual history of just war? Miller argues that just war thinking should be understood as unfolding in three traditions: the Augustinian, the Westphalian, and the Liberal, each resting on distinct understandings of natural law, justice, and sovereignty. The central ideas of the Augustinian tradition (sovereignty as responsibility for the common good) can and should be recovered and worked into the Liberal tradition, for which human rights serves the same function. In this reconstructed Augustinian Liberal vision, the violent disruption of ordered liberty is the injury in response to which force may be used and war may be justly waged. Justice requires the vindication and restoration of ordered liberty in, through, and after warfare.
Article
Full-text available
We consider the social impact of nanotechnology (NT) from the point of view of its military applications and their implications for security and arms control. Several applications are likely to bring dangers – to arms-control treaties, humanitarian law, military stability, or civil society. To avoid such dangers, we propose some approaches to nanotechnology arms control.
Article
There is a general presumption that the law should be congruent with morality—that is, that the prohibitions and permissions in the law should correspond to the prohibitions and permissions of morality. And indeed in most areas of domestic law, and perhaps especially in the criminal law, the elements of the law do in general derive more or less directly from the requirements of morality. I will argue in this chapter, however, that this correspondence with morality does not and, at present, cannot hold in the case of the international law of war. For various reasons, largely pragmatic in nature, the law of war must be substantially divergent from the morality of war.
Article
These two theses are closely related but not identical. This is because the content of the norms of jus in bello could be symmetric in a particular conflict even if the independence thesis is false. This would be the case if the ad bellum status of both combatants was the same (e.g. if both sides were fighting an unjust war). Second, there may be reasons for denying the symmetry thesis that are unrelated to the independence thesis. For example, it may be that in bello rights and obligations are asymmetric, not because they are dependent on ad bellum status, but because they are dependent on the varying capabilities of combatants. Thus, non-state actors and weak states have long argued that they ought to be bound by relaxed standards of jus in bello, relieving them of the obligation to wear distinguishable uniforms, bear their arms openly, and perhaps even allowing the targeting of civilians in terrorist attacks. I have recently argued for the converse conclusion that strong states fighting radically weaker opponents should be bound by more stringent in bello requirements. Nonetheless, the connection between the independence thesis and the symmetry thesis is close, and of crucial importance to the ethics of war and its theoretical foundation.
Article
In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot’s behavior only by ascribing to it some predisposition or ‘intention’ to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots with all of these criteria will have moral rights as well as responsibilities regardless of their status as persons.
Article
There are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies between amoral and fully autonomous moral agents. Thus, robots might move gradually along this continuum as they acquire greater capabilities and ethical sophistication. It also argues that many of the issues regarding the distribution of responsibility in complex socio-technical systems might best be addressed by looking to legal theory, rather than moral theory. This is because our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.
Article
Man and machine are rife with fundamental differences. Formal research in artificial intelligence and robotics has for half a century aimed to cross this divide, whether from the perspective of understanding man by building models, or of building machines which could be as intelligent and versatile as humans. Inevitably, our sources of inspiration come from what exists around us, but to what extent should a machine’s conception be sourced from such biological references as ourselves? Designing machines capable of explicit social interaction with people necessitates employing the human frame of reference to a certain extent. However, there is also a fear that once this man-machine boundary is crossed, machines will cause the extinction of mankind. The following paper briefly discusses a number of fundamental distinctions between humans and machines in the field of social robotics, situating these issues with a view to understanding how to address them.
Book
The debate about the role of women in war, violent conflict and the military is not only a long and ongoing one; it is also a heated and controversial one. The contributions to this anthology come from experts in the field who approach the topic from various angles thus offering different and, at times, diverging perspectives. The reader will therefore gain in-depth insight into the most important aspects and positions in the debate.
Article
The article consists of the text of David A. Whetten's Presidential Address to the Academy of Management delivered last August 2000 in Toronto, Ontario. Whetten reflects on his professional journey and lessons he's learned from experiences. The address made quite an impact on the members of the Academy.
Article
The following commentaries are responses to the rough drafts of six lectures—the Hourani Lectures—that I delivered at the University of Buffalo in November of 2006. This draft manuscript is being extensively revised and expanded for publication by Oxford University Press as a book provisionally called The Morality and Law of War . Even though in January 2007 the book was still both unpolished and incomplete, David Enoch at that time generously organized a workshop at the Law School of the Hebrew University of Jerusalem to discuss its ideas and arguments. George Fletcher chaired the meeting and Re'em Segev, Yuval Shany, and Noam Zohar all presented superb commentaries. The following papers have all grown out of that memorable occasion.
Article
Introduction. Robots have been a part of our work environment for the past few decades, but they are no longer limited to factory automation. The range of activities they are being used for is growing. Robots are now automating a wide range of professional activities such as aspects of the health-care industry, white-collar office work, search and rescue operations, automated warfare, and the service industries. A subtle but far more personal revolution has begun in home automation as robot vacuums and toys are becoming more common in homes around the world. As these machines increase in capability and ubiquity, it is inevitable that they will impact our lives ethically as well as physically and emotionally. These impacts will be both positive and negative, and in this paper I will address the moral status of robots and how that status, both real and potential, should affect the way we design and use these technologies. Morality and Human-Robot Interactions. As robotics technology becomes more ubiquitous, the scope of human-robot interactions will grow. At the present time, these interactions are no different than the interactions one might have with any piece of technology, but as these machines become more interactive, they will become involved in situations that have a moral character that may be uncomfortably similar to the interactions we have with other sentient animals.
Article
This is the first comparative, cross-national study of the participation of women in the armed forces of NATO countries. Alongside an analysis of this key topic stands a critique of existing theoretical models and the proposal of a revised analytical framework. Unlike previous works, this new study employs a mixed-methods research design combining quantitative and qualitative data - a large-N analysis based on general policies and statistical information concerning every country in the sample, together with more in-depth case studies. This volume includes original empirical data regarding the presence of women in the armed forces of NATO countries, proposes an index of 'gender inclusiveness' and assesses the factors that affect women's military roles. The book also presents two new key case studies - Portugal and the Netherlands - based on both documentary sources and in-depth interviews with both men and women officers in the two countries. This book will be of great interest to all students and scholars of strategic studies, gender and women studies and military history.
Article
In September 1994, Lawrence P. Rockwood, then a counterintelligence officer with the U.S. Army's Tenth Mountain Division, was deployed to Haiti as part of Operation Restore Democracy, the American-led mission to oust the regime of Raoul Cedras and reinstall President Jean-Bertrand Aristide. Shortly after arriving in-country, Captain Rockwood began receiving reports of human rights abuses at the local jails, including the murder of political prisoners. He appealed to his superiors for permission to take action but was repeatedly turned down. Eventually, after filing a formal complaint with an army inspector general, he set off to inspect the jails on his own. The next day, Captain Rockwood found himself on a plane headed back to the United States, where he was tried by court-martial, convicted on several counts, and discharged from military service. In this book, Rockwood places his own experience within the broader context of the American military doctrine of "command responsibility"-the set of rules that holds individual officers directly responsible for the commission of war crimes under their authority. He traces the evolution of this doctrine from the Civil War, where its principles were first articulated as the "Lieber Code," through the Nuremberg trials following World War II, where they were reaffirmed and applied, to the present. Rockwood shows how in the past half-century the United States has gradually abandoned its commitment to these standards, culminating in recent Bush administration initiatives that in effect would shield American commanders and officials from prosecution for many war crimes. The Abu Ghraib and Guantánamo prison abuse scandals, the recently disclosed illegal CIA detention centers, the unprecedented policy of tolerating acts considered as torture by both international standards and U.S. military doctrine, and the recent cover-ups of such combat-related war crimes as the Haditha massacre of November 2005, all reflect an "official anti-humanitarian" trend, Rockwood argues, that is at odds with our nation's traditions and principles.
Article
Military robots and other, potentially autonomous robotic systems such as unmanned combat air vehicles (UCAVs) and unmanned ground vehicles (UGVs) could soon be introduced to the battlefield. Look further into the future and we may see autonomous micro- and nanorobots armed and deployed in swarms of thousands or even millions. This growing automation of warfare may come to represent a major discontinuity in the history of warfare: humans will first be removed from the battlefield and may one day even be largely excluded from the decision cycle in future high-tech and high-speed robotic warfare. Although the current technological issues will no doubt be overcome, the greatest obstacles to automated weapons on the battlefield are likely to be legal and ethical concerns. Armin Krishnan explores the technological, legal and ethical issues connected to combat robotics, examining both the opportunities and limitations of autonomous weapons. He also proposes solutions to the future regulation of military robotics through international law.
Article
“Who done it?” is not the first question that comes to mind when one seeks to make sense of mass atrocity. So brazen are the leader-culprits in their apologetics for the harms, so wrenching the human destruction clearly wrought, meticulously documented by many credible sources. Yet in legal terms, mass atrocity remains disconcertingly elusive. The perversity of its perpetrators is polymorphic, impeding criminal courts from tracing true lines of responsibility in ways intelligible through law's pre-existing categories, designed with simpler stuff in mind. Genocide, crimes against humanity, and the worst war crimes are possible only when the state or other organizations mobilize and coordinate the efforts of many people. Responsibility for mass atrocity is therefore always widely shared, often by thousands. Yet criminal law, with its liberal underpinnings, insists on blaming particular individuals for isolated acts. Is such law therefore constitutionally unable to make any sense of the most catastrophic conflagrations of our time? Drawing on the experience of several recent prosecutions (both national and international), this book trenchantly diagnoses law's limits at such times and offers a spirited defense of its moral and intellectual resources for meeting the vexing challenge of holding anyone criminally accountable for mass atrocity. Just as today's war criminals develop new methods of eluding law's historic grasp, so criminal law flexibly devises novel responses to their stratagems. Mark Osiel examines several such recent legal innovations in international jurisprudence and proposes still others.
Book
Expounding on the results of the author's work with the US Army Research Office, DARPA, the Office of Naval Research, and various defense industry contractors, Governing Lethal Behavior in Autonomous Robots explores how to produce an "artificial conscience" in a new class of robots, humane-oids, which are robots that can potentially perform more ethically than humans in the battlefield. The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. The book presents robot architectural design recommendations for post facto suppression of unethical behavior, behavioral design that incorporates ethical constraints from the outset, the use of affective functions as an adaptive component in the event of unethical action, and a mechanism that identifies and advises operators regarding their ultimate responsibility for the deployment of autonomous systems. It also examines why soldiers fail in battle regarding ethical decisions; discusses the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems; provides examples that illustrate autonomous systems' ethical use of force; and includes relevant Laws of War. Helping ensure that warfare is conducted justly with the advent of autonomous robots, this book shows that the first steps toward creating robots that not only conform to international law but outperform human soldiers in their ethical capacity are within reach in the future. It supplies the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system capable of ethically using lethal force. Ron Arkin was quoted in a November 2010 New York Times article about robots in the military.
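To make the architectural ideas summarised above concrete, the following is a minimal, hypothetical sketch of a post facto suppression check combined with operator-responsibility advisement (the data structure, field names, and thresholds are invented for illustration; this is not Arkin's actual design, only a toy instance of the general idea):

from dataclasses import dataclass
from typing import List

@dataclass
class ProposedEngagement:
    target_is_combatant: bool       # distinction input from the perception system
    expected_civilian_harm: float   # proportionality inputs (arbitrary units)
    expected_military_advantage: float
    operator_id: str                # who authorised this deployment

def ethical_governor(action: ProposedEngagement, log: List[str]) -> bool:
    """Run after target selection but before weapon release.

    Suppresses the engagement if hard constraints derived from the Laws of War
    are violated, and always records which operator bears responsibility."""
    log.append(f"operator {action.operator_id} advised: responsible for deployment")
    if not action.target_is_combatant:
        log.append("suppressed: target not identified as a combatant (distinction)")
        return False
    if action.expected_civilian_harm > action.expected_military_advantage:
        log.append("suppressed: expected civilian harm exceeds expected military advantage (proportionality)")
        return False
    return True

The point of such a sketch is only to show where, in the control flow, constraints and responsibility advisement could sit; whether the legal standards of distinction and proportionality can actually be encoded in machine-readable form is precisely what much of the surrounding literature disputes.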
Article
The law now generally excuses soldiers who obey a superior's criminal order unless its illegality would be immediately obvious to anyone on its face. Such illegality is "manifest," on account of its procedural irregularity, its moral gravity, and the clarity of the legal prohibition it violates. These criteria, however, often conflict with one another, are over- and underinclusive, and vulnerable to frequent changes in methods of warfare. Though sources of atrocity are shown to be highly variable, these variations display recurrent patterns, indicating corresponding legal norms best suited to prevention. There are also discernible connections, that the law can better exploit, between what makes men willing to fight ethically and what makes them willing to fight at all. Specifically, obedience to life-threatening orders springs less from habits of automatism than from soldiers' informal loyalties to combat buddies, whose disapproval they fear. Except at the very lowest levels, efficacy in combat similarly depends more on tactical imagination than immediate, letter-perfect adherence to orders. To foster such practical judgment in the field, military law should rely more on general standards than the bright-line rules it has favored in this area. A stringent duty to disobey all unlawful orders, coupled to a standard-like excuse for reasonable errors, would foster greater disobedience to criminal orders. It would encourage a more fine-grained attentiveness to soldiers' actual situations. It would thereby enable many to identify a superior's order as unlawful, under the circumstances, in situations where unlawfulness may not be immediately and facially obvious to all. This approach aims to prevent atrocity less by increased threat of ex post punishment, than by ex ante revisions in the legal structure of military life. It contributes to "civilianizing" military law while nonetheless building upon virtues already internal to the soldier's calling. In developing these conclusions, the author draws evidence from a wide array of recent wars and peacekeeping missions.
Article
It sounds like science fiction, but it is fact: On the battlefields of Iraq and Afghanistan, robots are killing America's enemies and saving American lives. But today's PackBots, Predators, and Ravens are relatively primitive machines. The coming generation of "war-bots" will be immensely more sophisticated, and their development raises troubling new questions about how and when we wage war. There was little to warn of the danger ahead. The Iraqi insurgent had laid his ambush with great cunning. Hidden along the side of the road, the bomb looked like any other piece of trash. American soldiers call these jury-rigged bombs IEDs, official shorthand for improvised explosive devices. The unit hunting for the bomb was an explosive ordnance disposal (EOD) team, the sharp end of the spear in the effort to suppress roadside bombings. By 2006, about 2,500 of these attacks were occurring a month, and they were the leading cause of casualties among U.S. troops as well as Iraqi civilians. In a typical tour in Iraq, each EOD team would go on more than 600 calls, defusing or safely exploding about two devices a day. Perhaps the most telling sign of how critical the teams' work was to the American war effort is that insurgents began offering a rumored $50,000 bounty for killing an EOD soldier.
Article
Acoustic weapons are under research and development in a few countries. Advertised as one type of non‐lethal weapon, they are said to immediately incapacitate opponents while avoiding permanent physical damage. Reliable information on specifications or effects is scarce, however. The present article sets out to provide basic information in several areas: effects of large‐amplitude sound on humans, potential high‐power sources, and propagation of strong sound. Concerning the first area, it turns out that infrasound ‐ prominent in journalistic articles ‐ does not have the alleged drastic effects on humans. At audio frequencies, annoyance, discomfort and pain are the consequence of increasing sound pressure levels. Temporary worsening of hearing may turn into permanent hearing losses depending on level, frequency, duration etc.; at very high sound levels, even one or a few short exposures can render a person partially or fully deaf. Ear protection, however, can be quite efficient in preventing these effects. Beyond hearing, some disturbance of the equilibrium, and intolerable sensations mainly in the chest, can occur. Blast waves from explosions with their much higher overpressure at close range can damage other organs, at first the lungs, with up to lethal consequences. For strong sound sources, mainly sirens and whistles can be used. Powered, e.g., by combustion engines, these can produce tens of kilowatts of acoustic power at low frequencies, and kilowatts at high frequencies. Using explosions, up to megawatt power would be possible. For directed use the size of the sources needs to be on the order of 1 meter, and the required power supplies etc. have similar sizes. Propagating strong sound to some distance is difficult, however. At low frequencies, diffraction provides spherical spreading of energy, preventing a directed beam. At high frequencies, where a beam is possible, non‐linear processes deform sound waves to a shocked, saw‐tooth form, with unusually high propagation losses if the sound pressure is as high as required for marked effects on humans. Achieving sound levels which would produce aural pain, equilibrium problems, or other profound effects seems unachievable at ranges above about 50 m for meter‐size sources. Inside buildings, the situation is different, especially if resonances can be exploited. Acoustic weapons would have much less drastic consequences than the recently banned blinding laser weapons. On the other hand, there is a greater potential of indiscriminate effects due to beam spreading. Because in many situations acoustic weapons would not offer radically improved options for military or police, in particular if opponents use ear protection, there may be a chance for preventive limits. Since acoustic weapons could come in many forms for different applications, and because blast weapons are widely used, such limits would have to be graduated and detailed.
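As a rough quantitative gloss on why strong sound is hard to project over distance (these are standard acoustics relations, not formulas taken from the article itself): the sound pressure level is defined relative to the hearing-threshold reference pressure, and under spherical spreading the level falls by about 6 dB per doubling of distance,

\[
L_p = 20\,\log_{10}\!\left(\frac{p}{p_0}\right),\qquad p_0 = 20\ \mu\mathrm{Pa},
\]
\[
L_p(r) \approx L_p(r_1) - 20\,\log_{10}\!\left(\frac{r}{r_1}\right).
\]

On these figures alone, a source producing a pain-threshold level of roughly 140 dB at 1 m would already be down to about 106 dB at 50 m, before the additional non-linear (shock) losses mentioned in the abstract are counted, which is consistent with the article's estimate that such effects are unachievable beyond about 50 m for meter-size sources.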
Article
There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies between amoral and fully autonomous moral agents. Thus, robots might move gradually along this continuum as they acquire greater ethical capabilities and moral sophistication. It argues that we must be careful not to treat robots as moral agents prematurely. It also argues that many of the issues regarding the distribution of responsibility in complex socio-technical systems might best be addressed by looking to legal theory, rather than moral theory. This is because our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as ensuring that humans take responsibility for their actions.
Article
"Command responsibility" is an umbrella term used in military and international law to cover a variety of ways in which individuals in positions of leadership may be held accountable. In its broadest sense the term refers to the liability of a military commander for failure properly to discharge his duties. The failure need not necessarily imply insufficient control over the conduct of subordinates: a commander could be punished, for example, because he exposed his troops to undue risk. But in a narrower sense, the term refers to the commander's liability for the criminal conduct of his underlings. This type of liability may in turn be variously structured, and be either civil, disciplinary or criminal in nature. Of late, however, the term is usually reserved to denote a species of this latter type - a species in which not only a military commander, but also a non-military leader, is held criminally liable for the conduct of his subordinates as if he personally had executed the criminal deed. Problems related to this particular species of command responsibility, as it has developed in international law, are the subject-matter of this essay.
Article
The concept of autonomous artificial agents has become a pervasive feature in computing literature. The suggestion that these artificial agents will move increasingly closer to humans in terms of their autonomy has reignited debates about the extent to which computers can or should be considered autonomous moral agents. This article takes a closer look at the concept of autonomy and proposes to conceive of autonomy as a context-dependent notion that is instrumental in understanding, describing and organizing the world. Based on the analysis of two distinct conceptions of autonomy, the argument is made that the limits to the autonomy of artificial agents are multiple and flexible dependent on the conceptual frameworks and social contexts in which the concept acquires meaning. A levelling of humans and technologies in terms of their autonomy is therefore not an inevitable consequence of the development of increasingly intelligent autonomous technologies, but a result of normative choices.
Article
The use of robots to care for the young and the old, and as autonomous agents on the battlefield, raises ethical issues.
Article
The development of autonomous robot weapons is well underway for use in a new style of hi-tech warfare. This will lead to less physical risk to the combatants deploying them but greater moral risk. There has been insufficient consideration of how these new weapons will impact on innocents. Two of the most serious ethical concerns discussed here are: (i) the inability of robot weapons to discriminate between combatants and non-combatants and (ii) the inability of such robots to ensure a proportionate response in which the military advantage will outweigh civilian casualties.
Article
While autonomous weapons are not new, few of the ethical, legal or operational implications have been clearly identified and solved. In this section, we look at some of the legal aspects and the tactical implications that flow from them. The overriding need to limit collateral damage and avoid killing innocents means that the man-in-the-loop is vital, but is his increasing distance from the battlefield a disadvantage? UCAVs are a case in point and we look at them and the weapons they will carry in the future. We shall return to the subject in future issues.
Article
The United States Army's Future Combat Systems Project, which aims to manufacture a ‘robot army’ to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these are ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.
Article
This essay analyzes the use of military robots in terms of the jus in bello concepts of discrimination and proportionality. It argues that while robots may make mistakes, they do not suffer from most of the impairments that interfere with human judgment on the battlefield. Although robots are imperfect weapons, they can exercise as much restraint as human soldiers, if not more. Robots can be used in a way that is consistent with just war theory when they are programmed to avoid using force against all but the most clearly hostile targets. However, the essay also cautions against using robots for counterinsurgency because they may alienate people in the contested area and lead to an escalation of hostilities. Keywords: War; Just war theory; Jus in bello; Robot; UAV