Philosophy & Technology (2022) 35: 55
https://doi.org/10.1007/s13347-022-00553-z
Published online: 21 June 2022
RESEARCH ARTICLE
A Virtue-Based Framework to Support Putting AI Ethics into Practice

Thilo Hagendorff
Cluster of Excellence “Machine Learning: New Perspectives for Science”, University of Tuebingen, Tübingen, Germany
thilo.hagendorff@uni-tuebingen.de
Received: 10 March 2022 / Accepted: 11 June 2022
© The Author(s) 2022
Abstract
Many ethics initiatives have stipulated sets of principles and standards for good tech-
nology development in the AI sector. However, several AI ethics researchers have
pointed out a lack of practical realization of these principles. Following that, AI eth-
ics underwent a practical turn, but without deviating from the principled approach.
This paper proposes a complementary to the principled approach that is based on
virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibil-
ity and care, all of which represent specific motivational settings that constitute the
very precondition for ethical decision making in the AI field. Moreover, it defines
two “second-order AI virtues”, prudence and fortitude, that bolster achieving the
basic virtues by helping with overcoming bounded ethicality or hidden psychologi-
cal forces that can impair ethical decision making and that are hitherto disregarded
in AI ethics. Lastly, the paper describes measures for successfully cultivating the
mentioned virtues in organizations dealing with AI research and development.
Keywords AI virtues · AI ethics · Business ethics · Moral psychology · Bounded ethicality · Implementation · Machine learning · Artificial intelligence
1 Introduction
Current AI ethics initiatives, especially when adopted in scientific institutes or com-
panies, mostly embrace a principle-based approach (Mittelstadt, 2019). However,
establishing principles alone does not suffice; they also must be convincingly put
into practice. Most AI ethics guidelines shy away from proposing methods to accomplish this (Hagendorff, 2020). Nevertheless, more and more research papers have recently appeared that describe steps on how to get “from what to how” (Eitel-Porter, 2020; Morley et al., 2020; Theodorou & Dignum, 2020; Vakkuri et al.,
2019a). However, AI ethics still fails in certain regards. The reasons for that are manifold. This is why, in academia and public debates alike, many authors state that AI ethics has not yet permeated the AI industry, quite the contrary (Vakkuri et al., 2019b). In addition to the reasons mentioned, this is due to current AI ethics discourses hardly taking moral psychology into account. They do not con-
sider the limitations of the human mind, the many hidden psychological forces like
powerful cognitive biases, blind spots and the like that can affect the likelihood of
ethical or unethical behavior. In order to effectively improve moral decision making
in the AI field and to live up to common ideals and expectations, AI ethics initiatives
can seek inspiration from another ethical framework that is yet largely underrepre-
sented in AI ethics, namely virtue ethics. Instead of focusing only on principles, AI
ethics can put a stronger focus on virtues or, in other words, on character disposi-
tions in AI practitioners in order to effectively put itself into practice. When using
the term “AI practitioners” or “professionals”, this includes AI or machine learn-
ing researchers, research project supervisors, data scientists, industry engineers and
developers, as well as managers and other domain experts.
Moreover, to bridge the gap between existing AI ethics initiatives and the require-
ments for their successful implementation, one should consider insights from moral
psychology because, up to now, most parts of the AI ethics discourse disregard the
psychological processes that limit the goals and effectiveness of ethics programs.
This paper aims to respond to this gap in research. AI ethics, in order to be truly suc-
cessful, should not only repeat bullet points from the numerous ethics codes (Jobin
etal., 2019). It should also discuss the right dispositions and character strengths in
AI practitioners that can help not only to identify ethical issues and to engender the
motivation to take action, but also—and this is even more important—to discover
and circumvent one’s own vulnerability to psychological forces affecting moral
behavior. The purpose of this paper is to state how this can be executed and how
AI ethics can choose a virtue-based approach in order to effectively put itself into
practice.
2 AI Ethics—the Current Principled Approach
Current AI ethics programs often come with specific weaknesses and shortcomings.
First and foremost, without being accompanied by binding legal norms, their nor-
mative principles lack reinforcement mechanisms (Rességuier & Rodrigues, 2020).
Basically, deviations from codes of ethics have no or very minor consequences.
Moreover, even when AI applications fulfill all ethical requirements stipulated, it
does not necessarily mean that the application itself is “ethically approved” when
used in the wrong contexts or when developed by organizations that follow unethical
intentions (Hagendorff, 2021a; Lauer, 2020). In addition to that, ethics can be used
for marketing purposes (Floridi, 2019; Wagner, 2018). Recent AI ethics initiatives of
the private sector have faced a lot of criticism in this regard. In fact, industry efforts
for ethical and fair AI are compared to past efforts of “Big Tobacco” to whitewash
the image of smoking (Abdalla & Abdalla, 2020). “Big Tech”, so the argument goes, uses ethics initiatives and targeted research funds to avoid legislation or the creation of
binding legal norms (Ochigame, 2019). Hence, avoiding or addressing criticism like
that is paramount for trustworthy ethics initiatives.
The latest progress in AI ethics research was shaped by a “practical turn”, which was among other things inspired by the conclusion that principles alone cannot guarantee ethical AI (Mittelstadt, 2019). To accomplish that, so the argument goes, principles must be put into practice. Recently, several frameworks were developed that describe the process “from what to how” (Hallensleben et al., 2020; Morley et al., 2020; Zicari, 2020). Basically, this implies considering the context depend-
ency in the process of realizing codes of ethics, the different requirements for dif-
ferent stakeholders, as well as the demonstration of ways of dealing with conflicting
principles or values, for instance in the case of fairness and accuracy (Whittlestone
etal., 2019). Ultimately, however, the practical turn frameworks are often just more
detailed codes of ethics that use more fine-grained concepts than the initial high-
level guidelines. For instance, instead of just stressing the importance of privacy, like
the first generation of comprehensive AI ethics guidelines did, they point to the Privacy by Design or Privacy Impact Assessment toolkits (Cavoukian, 2011; Cavoukian et al., 2010; Oetzel & Spiekermann, 2014). Or instead of just stipulating princi-
ples for AI, they differentiate between stages of algorithmic development, namely
business and use-case development; design phase, where the business or use case
is translated into tangible requirements for AI practitioners; training and test data
procurement; building of the AI application; testing the application; deployment
of the application and monitoring of the application’s performance (Morley etal.,
2020). Other frameworks (Dignum, 2018) are coarser and differentiate between
ethics by design (integrating ethical decision routines in AI systems (Hagendorff,
2021c)), ethics in design (finding development methods that support the evaluation
of ethical implications of AI systems (Floridi etal., 2018)) and ethics for design
(ensuring integrity on the side of developers (Johnson, 2017)). But, as stated above,
all frameworks still stick to the principled approach. The main transformation lies
in the principles being far more nuanced and less abstract compared to the begin-
nings of AI ethics code initiatives (Future of Life Institute, 2017). Typologies for
every stage of the AI development pipeline are available. Differentiating principles
solves one problem, namely the problem of too much abstraction. At the same time,
however, it leaves some other problems open. Speaking more broadly, current AI
ethics disregards certain dimensions it should actually have. In organizations
of all kinds, the likelihood of unethical decisions or behavior can be controlled to
a certain extent. Antecedents for unethical behavior are individual characteristics
(gender, cognitive moral development, idealism, job satisfaction, etc.), moral issue
characteristics (the concentration and probability of negative effects, the magnitude
of consequences, the proximity of the issue, etc.) and organizational environment
characteristics (a benevolent ethical climate, ethical culture, code existence, rule
enforcement, etc.) (Kish-Gephart etal., 2010). With regard to AI ethics, these fac-
tors are only partially considered. Most parts of the discourse are focused on dis-
cussing organizational environment characteristics (codes of ethics) or moral issue characteristics (AI safety) (Brundage et al., 2018; Hagendorff, 2020, 2021b), but not
individual characteristics (character dispositions) increasing the likelihood of ethical
decision making in AI research and development.
Therefore, a successful ethics strategy should focus on individual dispositions and organizational structures alike, where the overarching goal of every measure should be the prevention of harm. Or, in this case: preventing AI-based applications from inflicting direct or indirect harm. This rationale can be fulfilled by ensuring explainability of algorithmic decision making, by mitigating biases and promoting fairness in machine learning, by fostering AI robustness and the like. However, in addition to listing these issues, one must ask how AI practitioners can be taught to intuitively keep them in mind. This would mean transitioning from a situation of an external “ethics assessment” of existing AI products with a “checkbox guideline” to an internal process of establishing “ethics for design”.
Empirical research shows that having plain knowledge on ethical topics or moral
dilemmas is likely to have no measurable influence on decision making. Even ethics
professionals, meaning ethics professors and other scholars of ethics, typically do
not act more ethically than non-ethicists (Schwitzgebel, 2009; Schwitzgebel & Rust,
2014). Correspondingly, in the AI field, empirical research shows that ethical prin-
ciples have no significant influence on technology developers’ decision making routines (McNamara et al., 2018). Ultimately, ethical principles do not suffice to secure
prosocial ways to develop and use new technologies (Mittelstadt, 2019). Normative
principles are not worth much if they are not acknowledged and adhered to. In order
to actually acknowledge the importance of ethical considerations, certain character
dispositions or virtues are required, among others, virtues that encourage us to stick
to moral ideals and values.
3 Basic AI Virtues—the Foundation forEthical Decision Making
Western virtue ethics has its roots in moral theories of Greek philosophers. How-
ever, after deontology and utilitarianism became more mainstream in modern phi-
losophy, virtue ethics recently experienced a “comeback”. Roughly speaking, this
comeback of scholarly interest in virtue ethics was initiated by Anscombe’s essay “Modern Moral Philosophy” (1958) and continued to grow through prominent supporters such as MacIntyre (1981), Nussbaum (1993), Hursthouse (2001) and many more.
Virtue ethics also has a rich tradition in East and Southeast Asian philosophy, especially
in Confucian and Buddhist ethical theories (Keown, 1992; Tiwald, 2010). Virtue-
based ethical theories treat character as fundamental to ethics, whereas deontology,
arguably the most prevalent ethical theory, focusses on principles. But what are the
differences between principles and virtues? The former are based on normative rules that are universally valid; the latter address the question of what constitutes a good
person or character. While ethical principles equal obligations, virtues are ideals that
AI practitioners can aspire to. Deontology-inspired normative principles focus on
the action rather than the actor. Thus, principlism defines action-guiding principles,
whereas virtue ethics demands the development of specific positive character dispo-
sitions or character strengths.
Why are these dispositions of importance for AI practitioners? One reason is
that individuals who display traits such as justice, honesty, empathy and the like acquire (public) trust. Trust, in turn, makes it easier for people to cooperate and
work together, it creates a sense of community and it makes social interactions more
predictable (Schneier, 2012). Acquiring and maintaining the trust of other players
in the AI field, but also the trust of the general public, can be a prerequisite for pro-
viding AI products and services. After all, intrinsically motivated actions are more
trustworthy in comparison to those which are simply the product of extrinsically
motivated rule following behavior (Meara etal., 1996).
One has to admit that a lot of ongoing AI basic research or very specific, small
AI applications have such weak ethical implications that virtues or ethical values
have no relevance at all. But AI applications that involve personal data, that are part
of human–computer interaction or that are used on a grand scale clearly have ethi-
cal implications that can be addressed by virtue ethics. In the theoretical process
of transitioning from an “uncultivated” to a morally habituated state, “technomoral
virtues” like civility, courage, humility, magnanimity and others can be fostered and
acquired (Vallor, 2016; Harris 2008a; Kohen etal., 2019; Gambelin, 2020; Sison
etal., 2017; Neubert, 2017; Harris 2008b; Ratti & Stapleford, 2021). In philosophy,
virtue ethics traditionally comprises cardinal virtues, namely fortitude, justice, pru-
dence and moderation. Further, a list of six broad virtues that can be distilled from
religious texts, oaths and other virtue inventories was put together by Peterson and
Seligman (2004), namely wisdom, courage, humanity, justice, tem-
perance and transcendence. Furthermore, in her famous book “Technology and the
Virtues”, Vallor (2016, 2021) identified twelve technomoral virtues, namely honesty,
self-control, humility, justice, courage, empathy, care, civility, flexibility, perspec-
tive, magnanimity and wisdom. The selection was criticized in secondary literature
(Howard, 2018; Vallor, 2018) but remains arguably the most important virtue-based
approach in ethics of technology. In the more specific context of AI applications,
however, one has to sort out those virtues that are particularly important in the field
of AI ethics. Here, existing literature and preliminary works are sparse (Constantinescu et al., 2021; Neubert & Montañez, 2020).
Based on patterns and regularities of the ongoing discussion on AI ethics, an ethics strategy that is based on virtues would constitute four basic AI virtues, where each virtue corresponds to a set of principles (see Table 1). The basic AI virtues are justice, honesty, responsibility and care. But how exactly can these virtues be derived from AI ethics principles? Why do exactly these four virtues suffice? When consulting meta-studies on AI ethics guidelines that stem from the sciences, industry, as well as governments (Fjeld et al., 2020; Hagendorff, 2020; Jobin et al., 2019), it becomes clear that AI ethics norms comprise a certain set of recurring principles. The mentioned meta-studies on AI ethics guidelines list these principles hierarchically, starting with the most frequently mentioned principles (fairness, transparency, accountability, etc.) and ending at principles that are mentioned rather seldom, but nevertheless repeatedly (sustainability, diversity, social cohesion, etc.). When sifting through all these principles, one can, by using a reductionist approach and clustering them into groups, distill four basic virtues that cover all of them (see Fig. 1). The decisive question for the selection of the four basic AI virtues was: Does virtue A describe character dispositions that, when internalized by AI practitioners, will intrinsically motivate them to act in a way that “automatically” ensures or makes it more likely that the outcomes of their actions, among others, result in technological
artefacts that meet the requirements that principle X specifies? Or, in short, does virtue A translate into behavior that is likely to result in an outcome that corresponds to the requirements of principle X? This question had to be applied to every principle derived from the meta-studies, testing how many distinct virtues are needed to cover them. Ultimately, this process resulted in only four distinct virtues.

Table 1 List of basic AI virtues

Justice
Explanation: A strong sense of justice enables individuals to act fairly, meaning that they refrain from having any prejudice or favoritism towards individuals based on their intrinsic or acquired traits in the context of decision making. In AI ethics, justice is the one moral value that seems to be prioritized the most. However, it is hitherto operationalized mainly in mathematical terms, not with regard to actual character dispositions of AI practitioners. Here, justice as a virtue could not just underpin motivations to develop fair machine learning algorithms, but also efforts to use AI techniques only in those societal contexts where it is fair to apply them. Eventually, justice affects algorithmic non-discrimination and bias mitigation in data sets as well as efforts to avoid social exclusion, fostering equality and ensuring diversity.
Corresponding principles: Algorithmic fairness, non-discrimination, bias mitigation, inclusion, equality, diversity

Honesty
Explanation: Honesty is at the core of fulfilling a set of very important AI-specific ethical issues. It fosters not only organizational transparency, meaning providing information about financial or personnel-related aspects regarding AI development. It also promotes the willingness to provide explainability or technical transparency regarding AI applications, for instance by disclosing the origins of training data, the quality checks the data were subject to, the methods used to define labels, etc. Moreover, honesty enables acknowledging errors and mistakes that were made in AI research and development, allowing for collective learning processes.
Corresponding principles: Organizational transparency, openness, explainability, interpretability, technological disclosure, open source, acknowledging errors and mistakes

Responsibility
Explanation: For the AI sector, responsibility is of great importance and stands in place of a host of other positive character dispositions. Mainly, responsibility builds the precondition for feeling accountable for AI technologies and their outcomes. This is particularly relevant since AI technology’s inherent complexity leads to diffusions of responsibility that complicate the assignment of wrongdoing. Diffusions of responsibility in complex technological as well as social networks can cause individuals to detach themselves from moral obligations, possibly creating breeding grounds for unethical behavior. Responsibility, seen as a character disposition, is a counterweight to that since it leads professionals to actually feel liable for what they are doing, opposing the negative effects of a diffusion of responsibility.
Corresponding principles: Responsibility, liability, accountability, replicability, legality, accuracy, considering (long-term) technological consequences

Care
Explanation: Care means developing a sense for others’ needs and the will to address them. Care has a strong connection to empathy, which is the precondition for taking the perspective of others and understanding their feelings and experiences. This way, care and empathy facilitate prosocial behavior and, on the other hand, discourage individuals from doing harm. In AI ethics, care builds the bedrock for motivating professionals to prevent AI applications from causing direct or indirect harm and to ensure safety, security, but also privacy-preserving techniques. Moreover, care can motivate AI practitioners to design AI applications in a way that they foster sustainability, solidarity, social cohesion, common good, peace, freedom and the like. Care can be seen as the driving force of the beneficial AI movement.
Corresponding principles: Non-maleficence, harm, security, safety, privacy, protection, precaution, hidden costs, beneficence, well-being, sustainability, peace, common good, solidarity, social cohesion, freedom, autonomy, liberty, consent
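To make this clustering tangible, the principle-to-virtue mapping in Table 1 could be written down as a simple data structure, as in the minimal Python sketch below. The dictionary and the helper function are purely illustrative and not part of any published toolkit or of the cited meta-studies; the principle labels loosely follow Table 1.

```python
# Purely illustrative: the principle-to-virtue clustering of Table 1 as a plain mapping.
# Neither the structure nor the helper function stems from the cited meta-studies; they
# only show how recurring guideline principles collapse into four basic virtues.
BASIC_AI_VIRTUES = {
    "justice": [
        "algorithmic fairness", "non-discrimination", "bias mitigation",
        "inclusion", "equality", "diversity",
    ],
    "honesty": [
        "organizational transparency", "openness", "explainability",
        "interpretability", "technological disclosure", "open source",
        "acknowledging errors and mistakes",
    ],
    "responsibility": [
        "responsibility", "liability", "accountability", "replicability",
        "legality", "accuracy", "considering long-term technological consequences",
    ],
    "care": [
        "non-maleficence", "safety", "security", "privacy", "precaution",
        "beneficence", "well-being", "sustainability", "common good",
        "solidarity", "social cohesion", "freedom", "autonomy", "consent",
    ],
}

def covering_virtue(principle: str) -> str | None:
    """Return the basic AI virtue whose cluster contains a given guideline principle."""
    for virtue, principles in BASIC_AI_VIRTUES.items():
        if principle.lower() in principles:
            return virtue
    return None

print(covering_virtue("Explainability"))  # -> honesty
```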
To name some examples: The principle of algorithmic fairness corresponds to
the virtue of justice. A just person will “automatically” be motivated to contribute
to machine outputs that do not discriminate against groups of people, independently
of external factors and guideline rules. The principle of transparency, as a second
example, corresponds to the virtue of honesty, because an honest person will “auto-
matically” be inclined to be open about mistakes, to not hide technical shortcom-
ings, to make research outcomes accessible and explainable. The principle of safe AI
would be a third example. Here, the virtue of care will move professionals not only to acknowledge the importance of safety and harm avoidance, but also to act accordingly. Ultimately, the transition happens between deonto-
logical rules, principles or universal norms on the one hand and virtues, intrinsic
motives or character dispositions on the other hand. Nevertheless, both fields are
connected by the same objective, namely to come up with trustworthy, human-cen-
tered, beneficial AI applications. Just the means to reach this objective are different.
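As a contrast to the virtue framing, the following sketch shows what the mathematical operationalization of fairness mentioned in Table 1 can look like in its simplest form, namely as a demographic parity check on binary predictions. It is a minimal, hypothetical example; the function name and the toy data are invented for illustration and are not taken from the cited literature.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    y_pred holds binary model predictions (0/1), group holds binary group
    membership (0/1). A gap near 0 means both groups are selected at similar
    rates; a larger gap flags potential disparate impact worth investigating.
    """
    rate_group_0 = y_pred[group == 0].mean()
    rate_group_1 = y_pred[group == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Toy data: the model accepts 75% of group 0 but only 25% of group 1.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # -> 0.5
```

A guideline can demand that such a gap stays small; the virtue of justice is what disposes a practitioner to actually compute and act on it without being told to.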
Fig. 1 Using meta-studies on AI ethics guidelines as sources to distill four basic AI virtues

As said before, the four basic AI virtues cover all common principles of AI ethics as described in prior discourses (Fjeld et al., 2020; Floridi et al., 2018; Hagendorff, 2020; Jobin et al., 2019; Morley et al., 2020). They are the precondition for putting principles into practice by representing different motivational settings for steering
decision making processes in AI research and development in the right direction.
But stipulating those four basic AI virtues is not enough. Tackling ethics problems
in practice also needs second-order virtues that enable professionals to deal with
“bounded ethicality”.
4 Second-Order AI Virtues—a Response to Bounded Ethicality
When using a simple ethical theory, one can assume that individuals go through
three phases. First, individuals perceive that they are confronted with a moral deci-
sion they have to make. Secondly, they reflect on ethical principles and come up
with a moral judgment. And finally, they act in accordance with these judgments and therefore act morally. But individuals do not actually behave this way. In fact, moral
judgments are in most cases not influenced by moral reasoning (Haidt, 2001). Moral
judgments are made intuitively, and moral reasoning is used in hindsight to justify
one’s initial reaction. In short, typically, moral action precedes moral judgment. This
leads to consequences for AI ethics. It shows that parts of current ethics initiatives
can be reduced to plain “justifications” for the status quo of technology develop-
ment—or at least they are adapted to it. For instance, the most commonly stressed
AI ethics principles are fairness, accountability, explainability, transparency, privacy
and safety (Hagendorff, 2020). However, these are issues for which a lot of technical
solutions already exist and where a lot of research is done anyhow. Hence, AI ethics
initiatives are simply reaffirming existing practices. On a macro level, this stands
in correspondence with the aforementioned fact that moral judgments do not deter-
mine, but rather follow or explain prior decision making processes.
Although explicit ethics training may improve AI practitioners’ intellectual
understanding of ethics itself, there are many limitations restricting ethical deci-
sion making in practice, no matter how comprehensive one’s knowledge on ethical
theories is. Many reasons for unethical behavior result from environmental
influences on human behavior and limitations through bounded rationality or, to be
more precise, “bounded ethicality” (Bazerman & Tenbrunsel, 2011; Tenbrunsel &
Messick, 2004). Bounded ethicality is an umbrella term that is used in moral psy-
chology to name environmental as well as intrapersonal factors that can thwart ethi-
cal decision making in practice. Hence, in order to address bounded ethicality, AI
ethics programs are in need of specific virtues, namely virtues that help to “debias”
ethical decision making in order to overcome bounded ethicality.
The first step to successively dissolve bounded ethicality is to inform AI practitioners not about the importance of machine biases, but about psychological biases as well as situational forces. Here, two second-order virtues come into play, namely
prudence and fortitude (see Table 2). In Aristotelian virtue ethics, prudence (or
phrónēsis) guides the enactment of individual virtues in unique moral situations,
meaning that a person can intelligently express virtuous behavior (Aristotle etal.,
2012). As a unifying intellectual virtue, prudence also gains center stage in modern
virtue-based approaches to engineering ethics (Frigo etal., 2021). In this paper, pru-
dence plays a similar role and is used in combination with another virtue, namely
fortitude. While both virtues may help to overcome bounded ethicality, they are
at the same time enablers for living up to the basic virtues. Individual psychologi-
cal biases as well as situational forces can get in the way of acting justly, honestly,
responsibly or caringly. Prudence and fortitude are the answers to the many forces
that may restrict basic AI virtues, where prudence aims primarily at individ-
ual factors, while fortitude addresses supra-individual issues that can impair ethical
decision making in AI research and development.
In the following, a selection of some of the major factors of bounded ethicality
that can be tackled by prudence shall be described. This selection is neither exhaus-
tive nor does it go into much detail. However, it is meant to be a practical overview
that can set the scene for more in-depth subsequent analyses.
Table 2 List of second-order AI virtues

Prudence
Explanation: Prudence means practical wisdom. In some philosophical theories, it represents the ability to gauge and reconcile complex and often competing values and requirements. Here, it stands for a high degree of self-understanding, for the ability to identify effects of bounded ethicality on one’s own behavior as well as for the sincerity to acknowledge one’s own vulnerability to unconscious cognitive biases. Prudence is the counterweight to the common limitations of the human mind, to the hidden psychological forces that impair ethical reasoning and decision making.
Bounded ethicality addressed: System 1 thinking, implicit biases, in-group favoritism, self-serving biases, value-action gaps, moral disengagement, etc.

Fortitude
Explanation: Fortitude means idealism or the will to stick to moral ideals and moral responsibilities, potentially against all odds. For the AI sector, this means that researchers and managers acquire the courage to speak up when they come across moral issues. This may sound obvious, but in light of powerful situational forces, peer influences or authorities, speaking up and truly acting in accordance with one’s own convictions can become very difficult. Fortitude helps to overcome these difficulties.
Bounded ethicality addressed: Situational forces, peer influences, authorities, etc.

Clearly, the most obvious factors of bounded ethicality are psychological biases (Cain & Detsky, 2008). It is common that people’s first and often only reaction to moral problems is emotional. Or, in other words, taking up dual-process theory,
their reaction follows system 1 thinking (Kahneman, 2012; Tversky & Kahneman,
1974), meaning an intuitive, implicit, effortless, automatic mode of mental informa-
tion processing. System 1 thinking predominates in everyday decisions. System 2, on
the other hand, is a conscious, logical, less error-prone, but slow and effortful mode
of thinking. Although many decision making routines would require system 2 think-
ing, individuals often lack the energy to switch from system 1 to system 2. Ethical
decision making needs cognitive energy (Mead etal., 2009). This is why prudence
is such an important virtue, since it helps AI practitioners to transition from system
1 to system 2 thinking in ethical problems. This is not to say that the dual-process
theory is without criticism. Recently, cognitive scientists have challenged its valid-
ity (Grayot, 2020), even though they did not abandon it in toto. It still remains a
scientifically sound heuristic in moral psychology. Thus, system 2 thinking remains
strikingly close to critical ethical thinking, although it obviously does not necessarily result in it (Bonnefon, 2018).
The transition from system 1 to system 2 thinking in ethical problems can also be
useful for mitigating another powerful psychological force, namely implicit biases
(Banaji & Greenwald, 2013), that can impair at least two basic AI virtues, namely
justice and care. Individuals have implicit associations, also called “ordinary prej-
udices”, that lead them to classify, categorize and perceive their social surround-
ings in accordance with prejudices and stereotypes. This effect is so strong that even individuals who are absolutely sure that they are not hostile towards minority groups actually are exactly that. The reason for that lies in the fact that people succumb to
subconscious biases that reflect culturally established stereotypes or discrimination
patterns. Hence, unintentional discrimination cannot be unlearned without changing
culture, the media, the extent of exposure to people from minorities and the like.
Evidently, this task cannot be fulfilled by the AI sector. Nevertheless, implicit biases
can be tackled by increasing workforce diversity in AI firms and by using prudence
as a virtue to accept the irrefutable existence and problematic nature of implicit
biases as well as their influence on justice in the first place.
Another important bias that can compromise basic AI virtues and that can at the
same time be overcome by prudence is in-group favoritism (Efferson etal., 2008).
This bias causes people to sympathize with others who share their culture, organi-
zation, gender, skin color, etc. For AI practitioners, this means that AI applications
which have negative side-effects on outgroups, for instance the livelihoods of click-
workers in Southeast Asia (Graham et al., 2017), are rated less ethically problem-
atic than AI applications that would have similar consequences for in-groups. More-
over, the current gender imbalance in the AI field might be prolonged by in-group
favoritism in human resource management. In-group favoritism mainly stifles char-
acter dispositions like justice and care. Prudence, on the other hand, is apt to work
against in-group favoritism by recognizing artificial group constructions as well as
definitions of who counts as “we” and who as “others”, bolstering not only fair deci-
sion making, but also abilities to empathize with “distant” individuals.
One further and important effect of bounded ethicality that can impair the reali-
zation of the basic AI virtues is self-serving biases. These biases cause revision-
ist impulses in humans, helping to downplay or deny past unethical actions while
remembering ethical ones, resulting in a self-concept that depicts oneself as ethical.
When one asks individuals to rate how ethical they think they are on a scale of 0 to
100 relative to other individuals, the majority of them will give themselves a score of
more than 50 (Epley & Dunning, 2000). The same holds true when people are asked
to assess the organization they are a part of in relation to other organizations. Aver-
age scores are higher than 50, although actually the average score would have to be
50. What one can learn from this is that generally speaking, people overestimate
their ethicality. Moreover, self-serving biases cause people to blame other people
when things go wrong, but to view successes as being one’s own achievement. Oth-
ers are to blame for ethical problems, depicting the problems as being outside of
one’s own control. In the AI sector, self-serving biases can come into play when
attributing errors or inaccuracies in applications to others, when reacting dismissively to critical feedback or feelings of concern, etc. Moreover, not
overcoming self-serving biases by prudence can mean to act unjustly and dishon-
estly, further compromising basic AI virtues.
Value-action gaps are another effect of bounded ethicality revealed by empirical
studies in moral psychology (Godin etal., 2005; Jansen & Glinow, 1985). Value-
action gaps denote the discrepancy between people’s self-concepts or moral values
and their actual behavior. In short, the gaps mark the distance between what people
say and what people do. Prudence, on the other hand, can help to identify that dis-
tance. In the AI field, value-action gaps can occur on an organizational level, for
instance by using lots of ethics-related terms in corporate reports and press releases
while actually being involved in unethical businesses practices, lawsuits, fraud, etc.
(Loughran etal., 2009). Especially the AI sector is often accused of ethics-washing,
hence of talking much about ethics, but not acting accordingly (Hao, 2019). Like-
wise, value-action gaps can occur on an individual level, for instance by holding
AI safety or data security issues in high esteem while actually accepting improper
quality assurance or rushed development and therefore provoking technical vul-
nerabilities in machine learning models. Akin to value-action gaps are behavioral
forecasting errors (Diekmann etal., 2003). Here, people tend to believe that they
will act ethically in a given situation X, while when situation X actually occurs, they
do not behave accordingly (Woodzicka & LaFrance, 2001). They overestimate the extent to which they will indeed stick to their ideals and intentions. All these effects
can interfere negatively with basic AI virtues, mostly with care, honesty and justice.
This is why prudence with regard to value-action gaps is of great importance.
The concept of moral disengagement is another important factor in bounded ethi-
cal decision making (Bandura, 1999). Techniques of moral disengagement allow
individuals to selectively turn their moral concerns on and off. In many day-to-day
decisions, people act contrary to their own ethical standards, but without feeling bad
about it or having a guilty conscience. The main techniques in moral disengage-
ment processes comprise justifications, where wrongdoing is justified as a means to a higher end; changes in one’s definition of what is ethical; euphemistic labels,
where individuals detach themselves from problematic action contexts by using lin-
guistic distancing mechanisms; denial of being personally responsible for particu-
lar outcomes, where responsibility is attributed to a larger group of people; the use
of comparisons, where own wrongdoings are relativized by pointing at other con-
texts of wrongdoings or the avoidance of certain information that refers to negative
consequences of one’s own behavior. Again, prudence can help to identify cases of
moral disengagement in the AI field and act as a response to it. Addressing moral
disengagement with prudence can be a requirement to live up to all basic AI virtues.
In the following, a selection of some of the major factors of bounded ethicality
that can be tackled by fortitude shall be described. Here, supra-individual issues that
can impair ethical decision making in AI research and development are addressed.
Certainly, one of the most relevant factors one has to discuss in this context is situational forces. Numerous empirical studies in moral psychology have shown that
situational forces can have a massive impact on moral behavior (Isen & Levin, 1972;
Latané & Darley, 1968; Williams & Bargh, 2008). Situational forces can range from
specific influences like the noise of a lawnmower that significantly affects helping
behavior (Mathews & Canon, 1975) to more relevant factors like competitive orien-
tations, time constraints, tiredness, stress, etc., which are likely to alter or overwrite
ethical concerns (Cave & ÓhÉigeartaigh, 2018; Darley & Batson, 1973; Kouchaki
& Smith, 2014). Especially financial incentives have a significant influence on ethi-
cal behavior. In environments that are structured by economic imperatives, decisions
that clearly have an ethical dimension can be reframed as pure business decisions.
All in all, money has manifold detrimental consequences for decision making since
it leads to decisions that are proven to be less social, less ethical or less coopera-
tive (Gino & Mogilner, 2014; Gino & Pierce, 2009; Kouchaki etal., 2013; Palazzo
etal., 2012; Vohs etal., 2006). Ultimately, various finance law obligations or mon-
etary factual constraints that a company’s management has to comply to can con-
flict with or overwrite AI virtues. Especially in contexts like this, virtue ethics can
significantly be pushed into the background, although the perceived constraints
lead to immoral outcomes. In short, situational forces can have negative impacts on
unfolding all four basic AI virtues, namely justice, honesty, responsibility and care.
In general, critics of virtue ethics have pointed out that moral behavior is not determined by character traits, but by social contexts and concrete situations (Kupperman, 2001). However, situationist accounts are in fact entirely compatible with virtue ethics since it provides particular virtues like fortitude that are intended to counteract
situational forces (and that can explain why some individuals deviate from expected
behavior in classical psychological experiments like the Milgram experiment (Mil-
gram, 1963)). Fortitude is supposed to help to counteract situational pressure, allow-
ing the mentioned basic virtues to flourish.
Similar to and often not clearly distinguishable from situational forces are peer
influences (Asch, 1951, 1956). Individuals want to follow the crowd, adapt their
behavior to that of their peers and act similarly to them. This is also called con-
formity bias. Conformity biases can become a problem for two reasons: First, group
norms can possess unethical traits, leading for instance to a collective acceptance of
harm. Second, the reliance on group norms and the associated effects of conformity
bias induce a suppression of one’s own ethical judgments. In other words, if one indi-
vidual starts to misbehave, for instance by cheating, others follow suit (Gino etal.,
2009). A similar problem occurs with authorities (Milgram, 1963). Humans have
an internal tendency to be obedient to authorities. This willingness to please
authorities can have positive consequences when executives act ethically them-
selves. If this is not the case, the opposite becomes true. For AI ethics, this means
that social norms that tacitly emerge from AI practitioners’ behavioral routines as well as managerial decisions can bolster both ethical and unethical working
cultures. In the case of the latter, the decisive factor is the way individuals respond
to inner normative conflicts with their surroundings. Do they act in conformity and
obedience even if it means violating basic AI virtues? Or do they stick to their dis-
positions and deviate from detrimental social norms or orders? Fortitude, one of the
two second-order virtues, can ensure the appropriate mental strength to stick to the
right intentions and behavior, be it in cases where everyone disobeys a certain law
but one does not want to join in, where managerial orders instruct to bring a risky product to the market as fast as possible but one insists on piloting it before
release or where under extreme time pressure one insists on devoting time to under-
stand and analyze training data sets.
5 Ethics Training—AI Virtues Come into Being
In traditional virtue ethics concepts, virtues emerge from habitual, repeated and
gradually refined practice of right and prudent actions (Aristotle et al., 2012). At
first, specific virtues are encouraged and practiced by performing acts that are
inspired by “noble” human role-models and that resemble other patterns, narratives
or social models of the virtue in question. Later, virtues are refined by taking the
particularity of given situations into account. Regarding AI virtues, the procedure
is not much different (Bezuidenhout & Ratti, 2021). However, cultivating basic and
second-order AI virtues means achieving virtuous practice embedded in a specific
organizational and cultural context. A virtuous practice requires some sort of moral
self-cultivation that encompasses the acquisition of motivations or the will to take
action, knowledge on ethical issues, skills to identify them and moral reasoning to
make the right moral decisions (Johnson, 2017). One could reckon that especially the aforementioned skills or motivations are either innate or the result of childhood edu-
cation. But ethical dispositions can be changed by education in all stages of life,
for instance by powerful experiences, virtuous leaders or a certain work atmosphere
in organizations. To put it in a nutshell, virtues can be trained and taught in order
to foster ethical decision making and to overcome bounded ethicality. Most impor-
tantly, if ethics training imparts only explicit knowledge (or ethical principles), this
will very likely have no effect on behavior. Ethics training must also impart tacit
knowledge, meaning skills of social perception and emotion that cause individu-
als to automatically feel and want the right thing in a given situation (Haidt, 2006,
p.160).
The simplest form of ethics programs comprises ethics training sessions combined with incentive schemes for members of a given organization that reward compliance with ethical principles and punish their violation. These ethics programs have
numerous disadvantages. First, individuals that are part of them are likely to only
seek to perform well on behavior covered by exactly these programs. Areas that
are not covered are neglected. That way, ethics programs can even increase unethi-
cal behavior by actually well-intended sanctioning systems (Gneezy & Rustichini,
2000). For instance, in case a fine is put on a specific unethical behavior, individuals
who benefit from this behavior might simply weigh the advantage of the unethical
behavior against the disadvantage of the fine. If the former outweighs the latter, the
unethical behavior might even increase if a sanctioning system is in place. Ethical
decisions would simply be reframed as monetary decisions. In addition to that, indi-
viduals can become inclined to trick incentive schemes and reward systems. Moreo-
ver, those programs solely focus on extrinsic motivators and do not change intrinsic
dispositions and moral attitudes. All in all, ethics programs that comprise simple
reward and sanctioning systems—as well as corresponding surveillance and moni-
toring mechanisms—are very likely to fail.
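To see why a sanctioning system can backfire in the way described above, consider a toy calculation in which the choice is reframed as a purely monetary one. All figures and variable names below are invented for this example and do not stem from the cited study.

```python
# Hypothetical numbers only: once a fine is in place, an actor treating the choice as a
# business decision compares the expected penalty with the expected gain from the
# unethical option, instead of asking whether the action is right.
gain_from_cutting_corners = 50_000   # e.g. value of an earlier product launch
detection_probability = 0.10         # chance that the violation is noticed
fine_if_detected = 100_000

expected_fine = detection_probability * fine_if_detected  # 10,000

if gain_from_cutting_corners > expected_fine:
    print("A purely monetary framing favors the unethical option.")
else:
    print("A purely monetary framing favors compliance.")
```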
A further risk of ethics programs or ethics training is the occurrence of reactance phenomena.
Reactance occurs when individuals protest against constraints of their personal free-
doms. As soon as ethical principles restrict the freedom of AI practitioners doing
their work, they might react to this restriction by trying to reclaim that very freedom
by all means (Dillard & Shen, 2005; Dowd etal., 1991; Hong, 1992). People want
to escape restrictions, thus the moment when such restrictions are put in place—
no matter whether they are justified from an ethical perspective or not—people
might start striving to break free from them. Ultimately, “forcing” ethics programs
on members of an organization is not a good idea. Ethics programs should not be
decoupled from the inner mechanisms and routines of an organization. Hence, in
order to avoid reactance and to fit ethics programs into actual structures and routines
of an organization, it makes sense to carefully craft specific, unique compliance
measures that take particular decision processes of AI practitioners and managers
into account. In addition to that, ethics programs can be implemented in organi-
zations with delay. This has the effect of a “future lock-in” (Rogers & Bazerman,
2008), meaning that policies achieve more support, since the time delay allows for
an elimination of the immediate costs of implementation, for individuals to prepare
for the respective measures and for a recognition of their advantages.
Considering all of that, what measures can actually support AI practitioners and AI companies’ managers in strengthening AI virtues? Here, again, insights from moral psychology as well as behavioral ethics research can be used (Hines et al., 1987; Kollmuss & Agyeman, 2002; Treviño et al., 2006, 2014) to catalogue measures that bolster ethical decision making as well as virtue acquisition (see Tables 3 and 4). The measures can be roughly divided into those that tend to affect single individuals and those that bring about or relate to structural changes in organizations. Table 3 lists measures that relate to AI professionals on an individual level, and Table 4 lists systemic measures that affect organizations mainly on a structural level.
6 Discussion
Virtue ethics does not come without shortcomings. In general, it is criticized for
focusing on the “being” rather than the “doing”, meaning that virtue ethics is agent-
and not act-centered. Moreover, critics fault that on the one hand, virtuous persons
can perform wrong actions, and on the other hand, right actions can be performed
by persons who are not virtuous. However, this is a truism that could easily be
Page 15 of 24 55
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
T.Hagendorff
1 3
transferred to other approaches in ethical theory, for instance by pointing at the fact
that normative rules can be disregarded or violated by individuals or that individu-
als can perform morally right actions without considering normative rules. Another
response to that critique stresses that it is one of virtue ethics’ major strength to
not universally define “right” and “wrong” actions. Virtue ethics can address the
question of “eudaimonia” without fixating axiological concepts of what is “right”.
Further, virtue ethics is criticized by pointing at its missing “codifiability”. Stipulat-
ing sets of virtues is arbitrary. However, this critique also holds true for every other
ethical theory. Their very foundations are always arbitrary. All in all, many points
of criticism that are brought into position in order to find faults in virtue ethics can
equally be brought into position against other ethical theories, such as deontology or
consequentialist ethics.
Moreover, a further point of critique concerns the lack of technical details of the
AI virtues approach. AI practitioners can censure the fact that the approach seems to
be even more disconnected from down-to-earth research and development than the
former, principled AI ethics initiatives. They also lacked technical details in many
places or, in cases they mentioned details, did so in a very shallow manner (Hagen-
dorff, 2020). The AI virtues concept, however, contains zero references to technical
details—but for a reason. It is naïve to believe that ethical research is apt for that
at all. Apart from the fact that a lot of ethical issues cannot be solved by technical
means in the first place, AI ethics is the wrong discipline to come up with technical
Table 3 Individual measures that bolster ethical decision making and virtue acquisition
Measures related to individuals Explanation
Knowledge about AI virtues AI professionals must be familiar with the six AI virtues and know
about their importance and implications
Knowledge about action strategies Professionals have to learn how they can mitigate ethically relevant
problems, for instance in the fields of fairness, robustness, explain-
ability, but also in terms of organizational diversity, clickwork
outsourcing, sustainability goals, etc
Locus of control Professionals should have the perception that they themselves are
able to influence and have a tangible impact on ethically relevant
issues. This also supports a sense of responsibility, meaning that
professionals hold themselves accountable for the consequences of
their decision making
Public commitment Professionals can explicitly communicate the willingness to take
action in ethical challenges. Publicly committing to stick to
particular virtues, ideals, intentions and moral resolutions causes
individuals to feel strongly obliged to actually do so when encoun-
tering respective choices
Audits and discussion groups With the help of colleagues, one can reflect and discuss professional
choices, ethical issues or other concerns in one’s daily routines
in order to receive critical feedback. Furthermore, fictious ethical
scenarios simulating particular contexts of decision making that
professionals may face can be used. Apart from scenario trainings,
organizations can grant professionals time for contemplation,
allowing time to read texts, e.g. about moral psychology or ethical
theory
55 Page 16 of 24
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
A Virtue‑Based Framework toSupport Putting AI Ethics into…
1 3
Table 4 Systemic measures that bolster ethical decision making and virtue acquisition

Leader influences: Managers play a key role as role models who influence employees' attitudes and behaviors. Their decisions have a particular legitimacy and credibility, which makes it very likely that employees imitate them. In this way, managers, whose prosocial attitudes, fairness and behavioral integrity are of utmost importance, can define ethical standards in their organizations, since their way of making moral decisions trickles down to subordinate individuals (Treviño et al., 2014).

Ethical climate and culture: Unlike ethics codes, which have been shown to have no significant effect on (un)ethical decision making in organizations, ethical climates do have such an effect (McNamara et al., 2018). Caring climates in particular are positively related to ethical behavior, whereas self-interested, egoistic climates are negatively associated with ethical choice. Furthermore, ethical cultures, meaning informal norms, language, rituals, etc., also affect ethical decision making and can, among other things, be significantly influenced by performance management systems (Kish-Gephart et al., 2010).

Proportion of women: Numerous studies in empirical business ethics research indicate that women are more sensitive to and less tolerant of unethical activities than their male counterparts (Loe et al., 2013). In short, gender is a predictor of ethical behavior. This underscores the importance of raising the proportion of female employees. In the AI sector especially, male researchers currently outnumber female researchers strikingly. This lack of workforce diversity has consequences for the functionality of software applications as well as for ethical outcomes in AI organizations. Hence, raising the proportion of women in the AI sector should be one of the most effective measures to improve ethical decision making on a grand scale. This is not to say that the same does not hold true for other underrepresented demographics or marginalized populations. Here, the paper only points to the hiring of women because ample research shows that women are less likely than men to engage in unethical behavior, whereas comparable evidence is lacking for other demographic groups.

Decreasing stress and pressure: Reducing the amount of stress and time pressure in organizations can have game-changing consequences for organizations' ethical climates (Darley & Batson, 1973; Selart & Johansen, 2011). De-stressing professionals and slowing down processes frees cognitive resources and thereby promotes a transition from system 1 to system 2 thinking in decision making situations. Simply put, individuals are encouraged to think before they act, which can ultimately improve ethicality in organizations.

Openness for critique: Critical voices from the public can point to blind spots or specific shortcomings of organizations. Being open to embracing external critique as an opportunity to reflect upon an organization's own routines and goals, combined with the willingness to realign them where necessary, can significantly improve an organization's trustworthiness, reputation and public perception. Eventually, this can contribute to the overall success of the organization.
details on privacy-preserving machine learning, explainability, sustainable model training, etc. Instead, AI practitioners themselves are the ones who can do that. But they also need to be motivated to consult the literature, tools and frameworks on these technical details. And virtues are the basis for this motivation. The least that ethics codes' principles can do is to point to particular technical papers or methods on how to achieve fair, safe, explainable or privacy-preserving AI; but only virtues motivate practitioners to actually use these methods. Hence, it is not a weakness of the virtue-based framework presented in this paper that it does not contain references to technical details; it is, in fact, the expression of an appropriate modesty about its scope.
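To give a concrete sense of the kind of technical method such pointers may refer to, consider the following minimal sketch. It is not part of the framework itself; the function name and the toy data are hypothetical and serve only to illustrate a simple group fairness check, a demographic parity gap, of the sort a practitioner motivated by the virtue of justice might actually run on a model's outputs.

```python
# Minimal illustration (hypothetical names and toy data): a simple fairness
# check of the kind that ethics principles can point to, but that only a
# motivated practitioner will actually run and act upon.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_group_0 = y_pred[group == 0].mean()
    rate_group_1 = y_pred[group == 1].mean()
    return float(abs(rate_group_0 - rate_group_1))

# Toy model outputs (1 = positive decision) and binary group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
# Prints 0.50 for this toy data, a large gap that would warrant investigation.
```

A code of ethics can name such a metric; whether anyone computes it, interprets it and acts on the result depends on the motivation, that is, the virtues, of the practitioners involved.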
Another critical issue a virtue-based ethics framework must address is that it focuses only on AI practitioners and not on the wider socio-economic context or on systemic changes. Regarding the latter, ethics discourses can play an important role in inspiring laws or political objectives. However, ethics as a philosophical enterprise that involves the study of principles, values or virtues cannot achieve the same efficacy as binding legal norms, which comprise concrete duties, are able to resolve disputes and are established by democratic institutions. Hence, talk of efficacy gains in applying ethics, or of ethics' practical turn, should never create the impression that ethics acquires a similar or even the same steering effect or enforceability that binding legal norms or comparable systemic measures possess. Ultimately, trustworthy AI will be the result of both strands, ethics as well as law. Both strands interact and inspire each other. However, virtue ethics in particular, with its focus on individual dispositions, is perhaps less apt to inspire systemic changes or legal norms than principlism.
In addition, the problem of unethical AI usage is not per se caused by individuals in research and development. Cases in which AI applications cause harm can result from multifactorial, dynamic events that are not directly intended by anyone. Unforeseen technological consequences cannot be attributed to a lack of virtuous behavior. One could argue that one of the basic AI virtues, namely care, also implies the willingness to assess long-term technological consequences, but this obviously does not guarantee that harmful technological consequences can be strictly avoided. It would put too much responsibility on individual AI practitioners to blame them for all ethical issues that are tied to the use of AI. In the general scheme of things, AI practitioners are powerful but not omnipotent players, accompanied by many other agents who have direct or indirect influence on AI technologies. Responsibilities are in many cases widely shared among groups of AI practitioners, that is, researchers, engineers, managers and other domain experts. However, this distribution of responsibilities does not mean that they somehow vanish. It is a known effect of moral disengagement that the diffusion of responsibility can cause individuals to detach themselves from moral obligations. The virtue-based framework presented here is supposed to counteract this.
Another shortcoming that has to be discussed in the context of a virtue-based approach in AI ethics revolves around effects of elitism. When AI practitioners, once educated in the basic and second-order AI virtues, become solely responsible for their actions, who will have moral authority over them? Or, extending the scope of this question, one can also ask: what authority does the author of the framework presented in this paper have to say which virtues practitioners should develop? A
possible reply would be to refer to discourse ethics, where communicative rationality is used to agree on the validity of particular moral norms or, in this case, virtues (Habermas, 2001). However, this paper follows this methodology only in a very indirect, remote manner. It derives the authority for the selection of virtues from the fact that they are not the result of subjective preferences, but of meta-studies on AI ethics that are themselves the result of a global discourse on AI ethics. However, "global" in this case is somewhat misleading, since the geographic distribution of the origin countries of AI ethics guidelines is rather biased towards economically developed countries (Jobin et al., 2019). African and South American countries, in particular, are therefore not represented in the AI ethics discourse. This also has an influence on the selection of the presented AI virtues, meaning that they are likely to represent a predominantly Western perspective.
7 Conclusion
Hitherto, all the major AI ethics initiatives have chosen a principled approach. They aim at having an effect on AI research and development by stipulating a list of rules and standards. But, as more and more papers in AI metaethics show (Hagendorff, 2020; Lauer, 2020; Mittelstadt, 2019; Rességuier & Rodrigues, 2020), this approach has specific shortcomings. The principled approach in AI ethics has no reinforcement mechanisms, it is not sensitive to different contexts and situations, it sometimes fails to address the technical complexity of AI, it uses terms and concepts that are often too abstract to be put into practice, etc. In order to address the last two of these shortcomings, AI ethics recently underwent a practical turn, stressing its intention to put principles into practice. But the typologies and guidelines on how to put AI ethics into practice stick to the principled approach altogether (Hallensleben et al., 2020; Morley et al., 2020). However, a hitherto largely underrepresented approach, namely virtue ethics, seems to be a promising addition to AI ethics' principlism.
The goal of this paper was to outline how virtues can support putting AI ethics into practice. Virtue ethics focuses on an individual's character development. Character dispositions provide the basis for professional decision making. On the one hand, the paper considered insights from moral psychology on the many pitfalls that can undermine the motivation of moral behavior. On the other hand, it used virtue ethics instead of deontological ethics to promote and foster not only four basic AI virtues, but also two second-order AI virtues that can help to circumvent "bounded ethicality" and one's vulnerability to unconscious biases. The basic AI virtues comprise justice, honesty, responsibility and care. Each of these virtues motivates a kind of professional decision making that builds the bedrock for fulfilling the AI-specific ethics principles discussed in the literature. In addition, the second-order AI virtues, namely prudence and fortitude, can be used to overcome specific effects of bounded ethicality that can stand in the way of the basic AI virtues, such as biases, value-action gaps, moral disengagement, situational forces, peer influences and the like. Lastly, the paper described framework conditions and organizational measures that can help to realize ethical decision making and virtue training in the AI field. Equipped
with this information, organizations dealing with AI research and development
should be able to effectively put AI ethics into practice.
Author contribution TH is the sole author.
Funding Open Access funding enabled and organized by Projekt DEAL. This research was supported by
the Cluster of Excellence “Machine Learning—New Perspectives for Science” funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—
Reference Number EXC 2064/1—Project ID 390727645.
Data Availability Not applicable.
Declarations
Ethics Approval and Consent to Participate Not applicable
Consent for Publication Not applicable
Competing Interests The author declares no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is
not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission
directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Abdalla, M., &Abdalla, M. (2020). The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on
academic integrity. arXiv, 1–9.
Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy, 33(124), 1–19.
Aristotle, Bartlett, R. C., & Collins, S. D. (2012). Aristotle's Nicomachean ethics. University of Chicago
Press.
Asch, S. (1951). Effects of group pressure upon the modification and distortion of judgment. In H. S.
Guetzkow (Ed.), Groups, leadership and men: Research in human relations (pp. 177–190). Pitts-
burgh: Russell & Russell.
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous
majority. Psychological Monographs: General and Applied, 70(9), 1–70.
Banaji, M. R., & Greenwald, A. G. (2013). Blindspot: Hidden biases of good people. Delacorte Press.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social
Psychology Review, 3(3), 193–209.
Bazerman, M. H., & Tenbrunsel, A. E. (2011). Blind spots: Why we fail to do what’s right and what to do
about it. Princeton University Press.
Bezuidenhout, L., & Ratti, E. (2021). What does it mean to embed ethics in data science? An integrative
approach based on microethics and virtues. AI & SOCIETY - Journal of Knowledge, Culture and
Communication, 36(3), 939–953.
Bonnefon, J.-F. (2018). The pros and cons of identifying critical thinking with system 2 processing.
Topoi, 37(1), 113–119.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv, 1–101.
Cain, D. M., & Detsky, A. S. (2008). Everyone’s a little bit biased (even physicians). JAMA, 299(24),
2893–2895.
Cave, S., &ÓhÉigeartaigh, S. S. (2018). An AIrace for strategic advantage: Rhetoric and risks 1–5.
Cavoukian, A. (2011). Privacy by design: The 7 foundational principles: Implementation and mapping of
fair information practices. https:// iapp. org/ media/ pdf/ resou rce_ center/ Priva cy% 20by% 20Des ign%
20-% 207% 20Fou ndati onal% 20Pri ncipl es. pdf. Accessed 21 June 2018.
Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). Privacy by design: Essential for organizational
accountability and strong business practices. Identity in the Information Society, 3(2), 405–413.
Constantinescu, M., Voinea, C., Uszkai, R., & Vică, C. (2021). Understanding responsibility in respon-
sible AI. Dianoetic virtues and the hard problem of context. Ethics and Information Technology,
23(4), 803–814.
Darley, J. M., & Batson, C. D. (1973). “From Jerusalem to Jericho”: A study of situational and disposi-
tional variables in helping behavior. Journal of Personality and Social Psychology, 27(1), 100–108.
Diekmann, K. A., Tenbrunsel, A. E., & Galinsky, A. D. (2003). From self-prediction to self-defeat:
Behavioral forecasting, self-fulfilling prophecies, and the effect of competitive expectations. Jour-
nal of Personality and Social Psychology, 85(4), 672–683.
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Informa-
tion Technology, 20(1), 1–3.
Dillard, J. P., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communica-
tion. Communication Monographs, 72(2), 144–168.
Dowd, E. T., Milne, C. R., & Wise, S. L. (1991). The therapeutic reactance scale: A measure of psycho-
logical reactance. Journal of Counseling & Development, 69(6), 541–545.
Efferson, C., Lalive, R., & Fehr, E. (2008). The coevolution of cultural groups and ingroup favoritism.
Science, 321(5897), 1844–1849.
Eitel-Porter, R. (2020). Beyond the promise: Implementing ethical AI. AI and Ethics, 1–8.
Epley, N., & Dunning, D. (2000). Feeling “holier than thou”: Are self-serving assessments produced by
errors in self- or social prediction? Journal of Personality and Social Psychology, 79(6), 861–875.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence:
Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein
Center Research Publication No. 2020–1. SSRN Electronic Journal, 1–39.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical.
Philosophy & Technology, 32(2), 185–193.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People—
An ethical framework for a good AI society: Opportunities, risks, principles, and recommenda-
tions. Minds and Machines, 28(4), 689–707.
Frigo, G., Marthaler, F., Albers, A., Ott, S., & Hillerbrand, R. (2021). Training responsible engineers.
Phronesis and the role of virtues in teaching engineering ethics. Australasian Journal of Engineer-
ing Education, 26(1), 25–37.
Future of Life Institute. (2017). Asilomar AI principles. Future of Life Institute. https://futureoflife.org/ai-principles/. Accessed 23 October 2018.
Gambelin, O. (2020). Brave: What it means to be an AI ethicist. AI and Ethics, 1–5.
Gino, F., Ayal, S., & Ariely, D. (2009). Contagion and differentiation in unethical behavior: The effect of
one bad apple on the barrel. Psychological Science, 20(3), 393–398.
Gino, F., & Mogilner, C. (2014). Time, money, and morality. Psychological Science, 25(2), 414–421.
Gino, F., & Pierce, L. (2009). The abundance effect: Unethical behavior in the presence of wealth. Organ-
izational Behavior and Human Decision Processes, 109(2), 142–155.
Gneezy, U., & Rustichini, A. (2000). A fine is a price. The Journal of Legal Studies, 29(1), 1–17.
Godin, G., Conner, M., & Sheeran, P. (2005). Bridging the intention-behaviour ‘gap’: The role of moral
norm. The British Journal of Social Psychology, 44(Pt 4), 497–512.
Graham, M., Hjorth, I., & Lehdonvirta, V. (2017). Digital labour and development: Impacts of global
digital labour platforms and the gig economy on worker livelihoods. Transfer: European Review of
Labour and Research, 23(2), 135–162.
Grayot, J. D. (2020). Dual process theories in behavioral economics and neuroeconomics: A critical
review. Review of Philosophy and Psychology, 11(1), 105–136.
Habermas, J. (2001). Moral consciousness and communicative action. Cambridge (Mass.): MIT.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(3),
457–461.
Hagendorff, T. (2021a). Blind spots in AI ethics. AI and Ethics, 1–17.
Hagendorff, T. (2021b). Forbidden knowledge in machine learning: Reflections on the limits of research and publication. AI & SOCIETY - Journal of Knowledge, Culture and Communication, 36(3), 767–781.
Hagendorff, T. (2021c). Linking human and machine behavior: A new approach to evaluate training data quality for beneficial machine learning. Minds and Machines, 31, 563–593.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment.
Psychological Review, 108(4), 814–834.
Haidt, J. (2006). The happiness hypothesis: Putting ancient wisdom and philosophy to the test of modern
science. Arrow Books.
Hallensleben, S., Hustedt, C., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., et al. (2020). From principles to practice: An interdisciplinary framework to operationalise AI ethics. Gütersloh: Bertelsmann Stiftung, 1–56.
Hao, K. (2019). In 2020, let's stop AI ethics-washing and actually do something. https://www.technologyreview.com/s/614992/ai-ethics-washing-time-to-act/. Accessed 7 January 2020.
Harris, C. E. (2008). The good engineer: Giving virtue its due in engineering ethics. Science and Engi-
neering Ethics, 14(2), 153–164.
Hines, J. M., Hungerford, H. R., & Tomera, A. N. (1987). Analysis and synthesis of research on responsi-
ble environmental behavior: A meta-analysis. The Journal of Environmental Education, 18(2), 1–8.
Hong, S.-M. (1992). Hong’s psychological reactance scale: A further factor analytic validation. Psycho-
logical Reports, 70(2), 512–514.
Howard, D. (2018). Technomoral civic virtues: A critical appreciation of Shannon Vallor’s technology
and the virtues. Philosophy & Technology, 31(2), 293–304.
Hursthouse, R. (2001). On virtue ethics. Oxford University Press.
Isen, A. M., & Levin, P. F. (1972). Effect of feeling good on helping: Cookies and kindness. Journal of
Personality and Social Psychology, 21(3), 384–388.
Jansen, E., & von Glinow, M. A. (1985). Ethical ambivalence and organizational reward systems. The
Academy of Management Review, 10(4), 814–822.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine
Intelligence, 1(9), 389–399.
Johnson, D. G. (2017). Can engineering ethics be taught? The Bridge, 47(1), 59–64.
Kahneman, D. (2012). Thinking, fast and slow. Penguin.
Keown, D. (1992). The nature of Buddhist ethics. Palgrave MacMillan.
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels:
Meta-analytic evidence about sources of unethical decisions at work. The Journal of Applied Psy-
chology, 95(1), 1–31.
Kohen, A., Langdon, M., & Riches, B. R. (2019). The making of a hero: Cultivating empathy, altruism,
and heroic imagination. Journal of Humanistic Psychology, 59(4), 617–633.
Kollmuss, A., & Agyeman, J. (2002). Mind the gap: Why do people act environmentally and what are the
barriers to pro-environmental behavior? Environmental Education Research, 8(3), 239–260.
Kouchaki, M., & Smith, I. H. (2014). The morning morality effect: The influence of time of day on uneth-
ical behavior. Psychological Science, 25(1), 95–102.
Kouchaki, M., Smith-Crowe, K., Brief, A. P., & Sousa, C. (2013). Seeing green: Mere exposure to money
triggers a business decision frame and unethical outcomes. Organizational Behavior and Human
Decision Processes, 121(1), 53–61.
Kupperman, J. J. (2001). The indispensability of character. Philosophy, 76(296), 239–250.
Latané, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of
Personality and Social Psychology, 10(3), 215–221.
Lauer, D. (2020). You cannot have AI ethics without ethics. AI and Ethics, 1–5.
Loe, T. W., Ferrell, L., & Mansfield, P. (2013). A review of empirical studies assessing ethical decision
making in business. In A. C. Michalos & D. C. Poff (Eds.), Citation classics from the journal of
business ethics (pp. 279–301). Springer, Netherlands.
Loughran, T., McDonald, B., & Yun, H. (2009). A wolf in sheep’s clothing: The use of ethics-related
terms in 10-K reports. Journal of Business Ethics, 89(S1), 39–49.
MacIntyre, A. C. (1981). After virtue: A study in moral theory. University of Notre Dame Press.
Mathews, K. E., & Canon, L. K. (1975). Environmental noise level as a determinant of helping behavior.
Journal of Personality and Social Psychology, 32(4), 571–577.
McNamara, A., Smith, J., & Murphy-Hill, E. (2018). Does ACM's code of ethics change ethical decision making in software development? In G. T. Leavens, A. Garcia, & C. S. Păsăreanu (Eds.) (pp. 1–7). New York: ACM Press.
Mead, N. L., Baumeister, R. F., Gino, F., Schweitzer, M. E., & Ariely, D. (2009). Too tired to tell the
truth: Self-control resource depletion and dishonesty. Journal of Experimental Social Psychology,
45(3), 594–597.
Meara, N. M., Schmidt, L. D., & Day, J. D. (1996). Principles and virtues. The Counseling Psychologist,
24(1), 4–77.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal Psychology, 67, 371–378.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11),
501–507.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how. An overview of AI ethics
tools, methods and research to translate principles into practices. Science and Engineering Ethics,
26, 2141–2168.
Neubert, M. J. (2017). Teaching and training virtues: Behavioral measurement and pedagogical
approaches. In A. J. G. Sison, G. R. Beabout, & I. Ferrero (Eds.), Handbook of virtue ethics in
business and management (pp. 647–655). Springer, Netherlands.
Neubert, M. J., & Montañez, G. D. (2020). Virtue as a framework for the design and use of artificial intel-
ligence. Business Horizons, 63(2), 195–204.
Nussbaum, M. (1993). Non-relative virtues: An Aristotelian approach. In M. Nussbaum & A. Sen (Eds.),
The quality of life (pp. 242–269). Oxford University Press.
Ochigame, R. (2019). The invention of “ethical AI”: How big tech manipulates academia to avoid regulation. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/. Accessed 7 January 2020.
Oetzel, M. C., & Spiekermann, S. (2014). A systematic methodology for privacy impact assessments: A
design science approach. European Journal of Information Systems, 23(2), 126–150.
Palazzo, G., Krings, F., & Hoffrage, U. (2012). Ethical blindness. Journal of Business Ethics, 109(3),
323–338.
Peterson, C., & Seligman, M. E. P. (2004). Character strengths and virtues: A handbook and classifica-
tion. American Psychological Association.
Ratti, E., & Stapleford, T. A. (Eds.). (2021). Science, technology, and virtues: Contemporary perspec-
tives. Oxford University Press.
Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the
teeth of ethics. Big Data & Society, 7(2), 1–5.
Rogers, T., & Bazerman, M. H. (2008). Future lock-in: Future implementation increases selection of
‘should’ choices. Organizational Behavior and Human Decision Processes, 106, 1–20. https://doi.org/10.1016/j.obhdp.2007.08.001
Schneier, B. (2012). Liars & outliers: Enabling the trust that society needs to thrive. John Wiley & Sons.
Schwitzgebel, E. (2009). Do ethicists steal more books? Philosophical Psychology, 22(6), 711–725.
Schwitzgebel, E., & Rust, J. (2014). The moral behavior of ethics professors: Relationships among self-
reported behavior, expressed normative attitude, and directly observed behavior. Philosophical
Psychology, 27(3), 293–327.
Selart, M., & Johansen, S. T. (2011). Ethical decision making in organizations: The role of leadership
stress. Journal of Business Ethics, 99(2), 129–143.
Sison, A. J. G., Beabout, G. R., & Ferrero, I. (Eds.). (2017). Handbook of virtue ethics in business and management. Dordrecht: Springer Netherlands.
Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in unethical
behavior. Social Justice Research, 17(2), 223–236.
Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine
Intelligence, 2(1), 10–12.
Tiwald, J. (2010). Confucianism and virtue ethics: Still a fledgling in Chinese and comparative philoso-
phy. Comparative Philosophy: An International Journal of Constructive Engagement of Distinct
Approaches toward World Philosophy, 1, 2.
Treviño, L. K., den Nieuwenboer, N. A., & Kish-Gephart, J. J. (2014). (Un)ethical behavior in organiza-
tions. Annual Review of Psychology, 65, 635–660.
Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review.
Journal of Management, 32(6), 951–990.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124–1131.
Vakkuri, V., Kemell, K.-K., & Abrahamsson, P. (2019a). AI ethics in industry: A research framework. arXiv, 1–10.
Vakkuri, V., Kemell, K.-K., Kultanen, J., Siponen, M., & Abrahamsson, P. (2019b). Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study. arXiv, 1–17.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford
University Press.
Vallor, S. (2018). Technology and the virtues: A response to my critics. Philosophy & Technology, 31(2),
305–316.
Vallor, S. (2021). Twenty-first-century virtue: Living well with emerging technologies. In E. Ratti & T.
A. Stapleford (Eds.), Science, technology, and virtues: Contemporary perspectives (pp. 77–96).
Oxford University Press.
Vohs, K. D., Mead, N. L., & Goode, M. R. (2006). The psychological consequences of money. Science,
314(5802), 1154–1156.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping? In M.
Hildebrandt (Ed.), Being profiled: Cogitas ergo sum (pp. 84–89). Amsterdam University Press.
Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI
ethics. In V. Conitzer, G. Hadfield, & S. Vallor (Eds.) (pp. 195–200). New York, NY, USA: ACM.
Williams, L. E., & Bargh, J. A. (2008). Experiencing physical warmth promotes interpersonal warmth.
Science, 322(5901), 606–607.
Woodzicka, J. A., & LaFrance, M. (2001). Real versus imagined gender harassment. Journal of Social
Issues, 57(1), 15–30.
Zicari, R. V. (2020). Z-inspection: A holistic and analytic process to assess ethical AI. Mindful use of AI. http://z-inspection.org/wp-content/uploads/2020/10/Zicari.Lecture.October15.2020.pdf. Accessed 24 November 2020.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.