Cite as: van Diggelen, J., Metcalfe, J.S., van den Bosch, K. et al. Role of emotions in responsible military AI. Ethics Inf
Technol 25, 17 (2023). https://doi.org/10.1007/s10676-023-09695-w
Role of Emotions in Responsible Military AI
Jurriaan van Diggelen1, Jason S. Metcalfe2, Karel van den Bosch1, Mark Neerincx1, José Kersholt1
1 TNO, Soesterberg, Netherlands,
2 Army Research Laboratory (ARL), Maryland, USA
1. Introduction
Following the rapid rise of military Artificial Intelligence (AI), people have warned that mankind may withdraw from the immediate events of war, resulting in its “dehumanization” (Joerden, 2018). The prospect that machines would decide what is destroyed and what is spared, and when to take a human life, is considered deeply morally objectionable. After all, a machine is not a conscious being, does not have emotions like empathy, compassion, or remorse, and is not subject to military law (Sparrow, 2007). This argument has sparked the call for meaningful human control, which requires moral decisions to be made by humans, not machines (Amoroso & Tamburrini, 2020). The United States has proposed a similar principle, named “appropriate levels of human judgment” (https://geneva.usmission.gov/2016/04/12/u-s-delegation-statement-on-appropriate-levels-of-human-judgment/). Likewise, the NATO principles of responsible use of AI in Defence (NATO, 2021) state that “AI applications will be developed and used with appropriate levels of judgment and care”.
While the definition of these principles, and how they should be operationalized, is controversial and under debate, one thing is commonly agreed upon: the human must play the role of moral agent in military decision-making (Scharre, 2018). To enable humans to fulfil this role, the human-AI interaction must provide adequate support. Frequently claimed support capabilities are: the AI system must be able to explain its reasoning to humans (NATO, 2021); the AI must inform the human sufficiently and in a timely manner (Boardman & Butcher, 2019); and the human must be able to inspect and intervene in the plans and decisions of the AI (Ekelhof, 2019).
Although useful, we contend that such support is insufficient because it only applies to the rational reasoning
processes of moral decision-making. We argue that support should also involve the processing of emotions
that the human experiences, as these intrinsically reflect the human's personal values towards the decision-
making problem. This paper discusses the function of emotions in military moral decision-making and claims
that an appropriate level of emotional involvement is required for all those in the decision-making chain. Finally,
we provide suggestions for the design and implementation of such emotional support.
2. Human emotions matter in military decision-making
AI-systems are capable of analyzing situations and judging the value of possible actions in rational terms. As AI-systems do not have emotions, their decisions (or the decisions they propose to humans) will always follow logic and rationality. Some believe this brings about superior decisions (Haraburda, 2019). It is understandable why some regard emotions as detrimental to decision-making. First, emotions are difficult to standardize and control. In a critical organization like the military, soldiers need to behave in a predictable and consistent manner, which should not be disturbed by their individual beliefs, momentary fears, and desires. Second, emotions may be poor moral advisors. For example, anger might induce feelings of revenge, which in a military context may even lead to war crimes. Third, strong emotions may lead to functional dropout, such as post-traumatic stress disorder (PTSD), or to decision paralysis. Others, however, argue that emotions are important, as they shape the choices of a military decision maker and help to make decisions (Zilincik, 2022; Desmet & Roeser, 2015). We too argue that the property of humans to experience
emotions is critically important for the appraisal of decision problems, and that this property enables a human-AI team to make decisions that are aligned with the conception of morality adopted by the individual, their organization, and society. We do not argue that analytical rules should not be used when making decisions in moral situations. On the contrary, such rules are important and should serve as a guide in the decision-making process. Nor do we imply that human emotions should always have the final word in making the decision, because it is evident that emotions, particularly intense ones, may distort a person's judgment, causing biased and erroneous decisions (Williams, 2010). But we do argue that acknowledging and processing emotions is important, as they enable humans to appreciate what applying a moral rule really means in a given situation; to feel the consequences of the decisions under consideration. We assert that such an emotion-induced feeling of anticipated outcomes is essential. It helps the decision maker to feel committed to the upcoming decision, to feel responsible, to feel regret for likely consequences, and to accept accountability.
We will discuss two functions of emotions that are important for moral decision-making.
2.1 Emotions reflect values
A main function of emotional reactions is to provide the individual with information about the subjective value attached to the pros and cons of the available options (Hartley & Sokol-Hessner, 2018). These emotions can be experienced directly in the decision situation at hand, but they can also play a role in anticipating a particular outcome (Loewenstein & Lerner, 2003). Anticipation means that the decision maker mentally simulates a particular outcome and the related feelings, for example regret. Whether anticipated or directly experienced, an emotion informs the decision maker that the situation is appraised as relevant to one’s concerns. Beneficial outcomes lead to positive emotions, detrimental outcomes to negative ones. As argued by Schwartz (2016), both rational and emotional evaluations are needed for human moral reasoning.
To illustrate this, consider a real-life scenario described by Scharre (2018): “My sniper team had been sent to
the Afghanistan-Pakistan border to scout infiltration routes where Taliban fighters were suspected of crossing
back into Afghanistan … A young girl of maybe five or six headed out of the village and up our way …
frequently glancing in our direction. It wasn’t a very convincing ruse. She was spotting for Taliban fighters.”
Exposed to the risk of being attacked by the Taliban, the team brought itself back to safety by calling for exfiltration and aborting the mission. During the mission debrief, the team realized they had had another, possibly more “mission-effective”, option. By spotting for the enemy, the girl had participated in hostile activities and would have been a lawful target for engagement. About that option, Scharre writes: “Of course, it would
have been wrong. Morally, if not legally. In our discussion, no one needed to recite the laws of war or refer to
abstract ethical principles. No one needed to appeal to empathy. The horrifying notion of shooting a child in
that situation didn’t even come up”.
This example illustrates that human moral decision-making is the result of a complex interplay of many
simultaneous factors, such as sensations, feelings, emotions, and thoughts. Feelings and emotions are often
the first reactions to a situation. They occur automatically and form part of subsequent judgment processing,
providing information about our main concerns, our core values.
2.2 Emotions drive motivations and behavior
As emotions reflect our core values, they are the main drivers of motivation and behavior (Zeelenberg et al., 2008). Emotions act as a spotlight that optimizes the use of our scarce cognitive resources. They may indicate which aspects of a situation have our focus during the moral decision-making process. Examples are emotions such as fear of, or desire for, the anticipated outcome, or compassion for, or dislike of, the persons affected by the moral decision. However, even though an emotion will trigger a behavioral tendency, emotion regulation processes may lead to different outcomes. A decision maker could, for example, reappraise the situation or refrain from performing the behavior on the basis of a more thorough risk analysis.
Emotions continue to affect the decision maker after the moral activity has been conducted, i.e. retrospectively. For example, emotions such as satisfaction, joy, sadness, and regret enable the decision maker to reflect upon the considerations and the decision, learn from them, and use that learning on subsequent occasions. Eventually, this will lead to the learned behavior becoming ingrained in their intuition.
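As a minimal, purely illustrative sketch (our own assumption, not a model proposed in this paper), the Python fragment below shows how a decision-support record might keep an AI's rational estimate and a human's anticipated-emotion appraisal side by side, and flag options where a strong anticipated emotion such as regret conflicts with a favorable rational estimate and therefore warrants reappraisal. All class and function names, thresholds, and figures are hypothetical.

from dataclasses import dataclass, field

@dataclass
class OptionAppraisal:
    name: str
    expected_mission_value: float  # rational estimate, e.g. supplied by the AI
    # anticipated emotions reported by the human, e.g. {"regret": 0.9}, values in [0, 1]
    anticipated_emotions: dict = field(default_factory=dict)

    def strongest_emotion(self):
        """Return the anticipated emotion the human flagged most strongly."""
        if not self.anticipated_emotions:
            return None, 0.0
        label = max(self.anticipated_emotions, key=self.anticipated_emotions.get)
        return label, self.anticipated_emotions[label]

def needs_reappraisal(option, threshold=0.7):
    """Flag options where a strong anticipated moral emotion conflicts with a
    favorable rational estimate, prompting the human to reconsider rather than
    letting either signal decide alone (hypothetical rule of thumb)."""
    label, strength = option.strongest_emotion()
    return label in {"regret", "guilt", "horror"} and strength >= threshold

if __name__ == "__main__":
    option_a = OptionAppraisal("proceed with strike", 0.8, {"regret": 0.95, "guilt": 0.9})
    option_b = OptionAppraisal("abort and exfiltrate", 0.4, {"relief": 0.6})
    for option in (option_a, option_b):
        print(option.name, "-> reappraisal advised:", needs_reappraisal(option))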
3. Emotional engagement in an AI-driven defense organization
Following our argument that moral judgment necessitates an appropriate level of emotional involvement, we
will now discuss why this is becoming increasingly important as AI-systems become more widely used.
Although it is unclear how exactly AI will change military practice, we can identify a number of trends and predict how they will affect human emotional engagement.
AI-tools are well equipped to adopt a purely rational, computational moral reasoning style. Given the appeal of emotionless moral reasoning (discussed at the start of the preceding section), some researchers have argued that the rise of AI should be embraced as an opportunity to make warfare more moral (Arkin, 2010). There are two arguments against this viewpoint. First, it ignores the previously mentioned functions of emotions: reflecting moral values and motivating behavior. Second, it is based on the flawed idea that AI would put the human out of the loop entirely (Johnson & Vera, 2019).
Although humans may not be present at the moment that an AI-based system autonomously executes a morally sensitive activity (e.g., due to required reaction times or a lack of connectivity), humans will undeniably be present in other phases of the operation, such as mission planning or debriefing. Furthermore, during the development of an AI-system, human programmers are involved in designing the AI's moral behavior.
For example, consider a future minesweeping operation by Autonomous Underwater Vehicles (AUVs). The AUVs are equipped with preprogrammed behaviors that allow them to inspect and defuse naval mines. During the planning phase, human navy personnel are responsible for tasking the system. They specify the search pattern, the available time, and how to act when mines are positioned close to other objects, such as fishing boats. Weighing the risk of missing an enemy mine against the risk of unintentional damage to fishing boats is a moral consideration that is made during the planning phase. During the mission, the AUV performs its actions fully autonomously, as underwater operations offer no opportunities for real-time human-robot communication. During the debriefing phase, the AI informs the human operators about how the mission went, explains its actions, and suggests potential points of improvement.
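To make this planning-phase tasking concrete, the sketch below (a hypothetical structure introduced only for illustration; neither the scenario nor this paper prescribes any particular format or parameter names) shows how the search pattern, the available time, and the policy for mines found near fishing boats could be fixed in advance, so that the moral trade-off is settled long before the AUV encounters the actual situation.

from dataclasses import dataclass
from enum import Enum

class ProximityPolicy(Enum):
    """Hypothetical options for mines found close to civilian objects."""
    DEFUSE = "defuse"                    # accept some risk of collateral damage
    MARK_AND_REPORT = "mark_and_report"  # accept some risk of leaving the mine in place
    ABORT_LEG = "abort_leg"              # skip this part of the search area

@dataclass(frozen=True)
class MinesweepTasking:
    search_pattern: str                  # e.g. "lawnmower" or "spiral"
    max_mission_hours: float             # the available time
    civilian_standoff_m: float           # distance to civilian objects that triggers the policy
    near_civilian_policy: ProximityPolicy

# The moral weighing (missed mine versus damage to fishing boats) happens here,
# during planning, long before the AUV encounters the actual situation.
tasking = MinesweepTasking(
    search_pattern="lawnmower",
    max_mission_hours=6.0,
    civilian_standoff_m=50.0,
    near_civilian_policy=ProximityPolicy.MARK_AND_REPORT,
)
print(tasking)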
This future scenario illustrates that AI still requires human involvement, but that human control is limited to the phases before and after the operation (van Diggelen et al., 2023). Clearly, these mission characteristics have implications for a human's emotional involvement. During the mission, operators may be expected to experience low to moderate levels of emotion: when the AI makes its critical decisions, they can neither monitor the situation nor intervene at that point in time. However, the impact of the AI's decisions in critical situations is severe. For developers to properly design the AI's decision-making in advance of the actual operation, they need to properly appreciate the moral implications of potential decisions. We argue that this appreciation does not arise when the problem is treated as a rational calculation only. Designers need an appropriate level of emotional involvement to feel engaged in the considerations and to feel committed to, and responsible for, the decisions they eventually implement in the system. Without such emotional involvement, designers run the risk of becoming indifferent to the moral consequences of the decisions taken by their AI-system.
Likewise, developers who instruct an AI-based system to act on the battlefield are likely to be less emotionally involved in the system's decisions, and the consequences thereof, than soldiers who actually operate on the battlefield. However, the contrary can also be true. For example, a drone pilot may experience more intense emotions than a traditional fighter-jet pilot. This is because drone operators often observe their assigned targets closely for an extended period of time using high-fidelity video. Because of this, they are likely to see the target as a human being who goes about normal life activities. It is likely that the drone operator experiences strong emotions when anticipating the future outcomes of decisions, or when observing their consequences (Enemark, 2019).
In summary, deploying AI in military organizations will have far-reaching consequences for how humans are involved in military missions, and will have disruptive effects on human emotional involvement. Given that emotions play such an important role in moral behavior, this aspect requires proper attention when developing responsible AI.
4. Designing for emotional involvement
The first step in the design process is to determine the roles of humans and AI in moral decision-making when they collaborate, either directly or indirectly. These humans could be programmers, planners, operators, or commanders. They should all be supported, not only in an analytical-rational manner, but also in appreciating the morality of the impending decisions. This requires the right level of emotional engagement. Note that higher emotional involvement is not necessarily better. As argued, emotional experiences have both upsides and downsides for moral judgement. Therefore, determining the right level of emotional engagement is far from trivial and is currently poorly understood.
The next step is to design human-AI interactions that involve emotional appraisals, which address the moral values at stake and can be related to the rational moral reasoning in the decision-making process. Abstract numbers and symbols may not trigger the experience required for such moral assessments. Dialogues with a conversational AI agent (Hagemeijer, 2020) could provide and request concrete information about the situation to create appropriate engagement with, and understanding of, the relevant moral aspects. Immersive displays may also support such assessments, by controlling the perceptual richness of the situation (visuals, sound, touch, or smell) and by creating a narrative of the situation. In general, such human-AI interactions would help to sense and weigh the moral aspects, accommodating appropriate emotional appraisals and value assessments.
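As one possible illustration of such a dialogue (a sketch under our own assumptions; the paper does not specify an interface, and all names and figures below are hypothetical), a conversational agent could present the rational estimates together with concrete situational detail and then hand the moral question back to the human.

from dataclasses import dataclass

@dataclass
class DecisionOption:
    label: str
    mission_benefit: float   # abstract, rational estimate (0..1)
    civilian_risk: float     # estimated probability of harm to civilians (0..1)
    concrete_detail: str     # narrative detail that makes the risk tangible

def agent_turn(option: DecisionOption) -> str:
    """Compose one dialogue turn that keeps numbers and narrative together
    and hands the moral question back to the human."""
    return (
        f"Option '{option.label}': estimated mission benefit "
        f"{option.mission_benefit:.0%}, estimated civilian risk "
        f"{option.civilian_risk:.0%}.\n"
        f"Concretely: {option.concrete_detail}\n"
        "Which of these consequences weighs most heavily for you, and why?"
    )

if __name__ == "__main__":
    option = DecisionOption(
        label="defuse mine near harbor",
        mission_benefit=0.7,
        civilian_risk=0.15,
        concrete_detail="the mine lies about 40 m from a pier where fishing crews moor their boats overnight",
    )
    print(agent_turn(option))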
5. Conclusion
Emotions play a crucial role in human moral decision-making. They reflect a person's moral values, and they establish the motivation and engagement required for appropriate moral judgement and care. Therefore, emotions cannot be ignored when designing responsible military artificial intelligence. Human-AI interaction increasingly relies on asynchronous control of technology, whereby humans specify in advance which decisions the machines should take when anticipated events occur later in time. This asynchrony between decision-making and decision-execution raises novel challenges, such as how to assist decision-makers in comprehending and feeling the moral impact of potential decisions.
We propose the following research agenda to tackle these challenges. First, we must study which types and levels of emotion are most appropriate for moral decision-making in various situations, how they represent the values at stake, and how they interact with rational evaluations. Second, we must better understand how these emotions can be evoked in human-AI interactions, such as dialogues with conversational agents and sensorily and narratively immersive user interfaces. Overall, we argue that there is a fundamental need to integrate humans more fully into AI-driven organizations. Attempts to simplify human-AI interaction by parceling out or ignoring the human emotional state run a considerable risk of omitting some of the most valuable information available from the human counterparts, namely their emotions. This, we argue, may lead to instantiating the very problems that meaningful human control is meant to avoid.
References
Amoroso, D., & Tamburrini, G. (2020). Autonomous weapons systems and meaningful human control: ethical
and legal issues. Current Robotics Reports, 1(4), 187-194.
Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-
341
Boardman, M., & Butcher, F. (2019). An exploration of maintaining human control in AI enabled systems and
the challenges of achieving it. In Workshop on Big Data Challenge-Situation Awareness and Decision Support.
Brussels: North Atlantic Treaty Organization Science and Technology Organization. Porton Down: Dstl Porton
Down.
Desmet, P. M., & Roeser, S. (2015). Emotions in design for values. Handbook of Ethics, Values, and
Technological Design: Sources, Theory, Values and Application Domains, 203-219.
van Diggelen, J., van den Bosch, K., Neerincx, M., & Steen, M. (2023). Designing for meaningful human control in military human-machine teams. In Research Handbook on Meaningful Human Control of Artificial Intelligence Systems. Edward Elgar Publishing.
Ekelhof, M. (2019). Moving beyond semantics on autonomous weapons: Meaningful human control in
operation. Global Policy, 10(3), 343-348.
Enemark, C. (2019). Drones, risk, and moral injury. Critical Military Studies, 5(2), 150-167.
Hartley, C., & Sokol-Hessner, P. (2018). Affect is the foundation of value. In A. S. Fox, R. C. Lapate, A. J. Shackman, & R. J. Davidson (Eds.), The Nature of Emotion: Fundamental Questions (pp. 348-351). New York: Oxford University Press.
Hagemeijer, M. (2020). Affective intelligent system design for empathy in decision-making. Unpublished bachelor thesis.
Haraburda, S. S. (2019). Benefits and pitfalls of data-based military decisionmaking. Small Wars Journal. Retrieved January 16, 2023, from https://smallwarsjournal.com/jrnl/art/benefits-and-pitfalls-data-based-military-decisionmaking
Joerden, J. C. (2018). Dehumanization: The Ethical Perspective. In Dehumanization of Warfare (pp. 55-73).
Springer, Cham.
Johnson, M., & Vera, A. (2019). No AI is an island: the case for teaming intelligence. AI magazine, 40(1), 16-
28.
Loewenstein, G., & Lerner, J. S. (2003). The role of affect in decision making. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of Affective Sciences. New York: Oxford University Press.
NATO (2021). NATO principles of responsible use of AI in Defence. https://www.nato.int/cps/en/natohq/official_texts_187617.htm
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. WW Norton & Company.
Schwartz, M. S. (2016). Ethical decision-making theory: An integrated approach. Journal of Business
Ethics, 139(4), 755-776.
Sparrow, R. (2007). Killer robots. Journal of applied philosophy, 24(1), 62-77.
Williams, B. S. (2010). Heuristics and biases in military decision-making. Military Review, 40–52
Zeelenberg, M., Nelissen, R. M., Breugelmans, S. M., & Pieters, R. (2008). On emotion specificity in decision making: Why feeling is for doing. Judgment and Decision Making, 3(1), 18.
Zilincik, S. (2022). The Role of Emotions in Military Strategy. Psychology, 5(2), 11-25.