Ethical Judgment of Agents' Behaviors in Multi-Agent Systems

Nicolas Cointe, Institut Henri Fayol, EMSE, LabHC, UMR CNRS 5516, F-42000 Saint-Etienne, France, nicolas.cointe@emse.fr
Grégory Bonnet, Normandie University, GREYC, CNRS UMR 6072, F-14032 Caen, France, gregory.bonnet@unicaen.fr
Olivier Boissier, Institut Henri Fayol, EMSE, LabHC, UMR CNRS 5516, F-42000 Saint-Etienne, France, olivier.boissier@emse.fr
ABSTRACT
The increasing use of multi-agent technologies in various areas raises the necessity of designing agents that judge ethical behaviors in context. This is why several works integrate ethical concepts in agents' decision-making processes. However, those approaches mainly consider an agent-centered perspective, setting aside the fact that agents interact with other artificial agents or human beings that may use other ethical concepts. In this article, we address the problem of producing ethical behaviors from a multi-agent perspective. To this end, we propose a model of ethical judgment an agent can use in order to judge the ethical dimension of both its own behavior and the other agents' behaviors. This model is based on a rationalist and explicit approach that distinguishes the theory of the good from the theory of the right. A proof of concept implemented in Answer Set Programming and based on a simple scenario is given to illustrate those functionalities.
Categories and Subject Descriptors
D.2.11 [Software engineering]: Software architectures; I.2.11
[Artificial intelligence]: Distributed artificial intelligence
– Intelligent agents; K.4 [Computers and society]: Ethics
General Terms
Theory
Keywords
Multi-Agent Systems, Ethical Judgment,
Computational Ethics
1. INTRODUCTION
The increasing presence of autonomous agents in various fields such as health care, high-frequency trading or transportation may raise many issues if these agents are not able to consider and follow some rules, and to adapt their behavior. These rules can be simple constraints such as a communication protocol or the prohibition of certain behaviors, or some
more complex ones such as the preferences of the user or the description of a code of deontology. For instance, understanding codes of conduct may ease the cooperation between a practitioner and a medical agent or a patient, considering concepts such as medical secrecy or respect of dignity. Even if some works propose implementations of action restrictions [34] or of simple prohibitions and obligations [7], some codes of conduct use more complex notions such as moral values or ethical principles, and need further work. An explicit implementation of such concepts, like generosity or altruism, needs specific structures and processes in the architecture of the agent. Consequently, the interest in designing ethical autonomous agents has recently been raised in the Artificial Intelligence community [29] as highlighted by numerous articles [20, 23, 25, 26] and conferences1. However, all these works consider ethics from an individual – single-agent – point of view, whereas numerous real-world applications such as transportation or high-frequency trading involve multiple agents, enforcing the need to consider the collective – multi-agent – point of view.
An individual point of view can be enough for an agent to act ethically within an agent organization. However, to evaluate the behavior of another agent (e.g. to collaborate with it or to punish it), agents need to be able to judge the ethics of the others. In this article, we are interested in the question of ethical judgment, i.e. the assessment of whether agents' behavior is appropriate or not with respect to moral convictions and ethical principles. We propose a generic model for the judgment of behaviors that an agent can use both to decide on its own behavior and to judge the behavior of others.
The remainder of the article is organized as follows. Section 2 introduces some key concepts of moral philosophy and a short state of the art on common approaches in computational ethics. We detail our ethical judgment model in Section 3. Section 4 then illustrates the use of this model by an agent while interacting with other agents. Section 5 offers a proof of concept in ASP (Answer Set Programming). We compare our work to existing approaches in Section 6 and conclude in Section 7 by pointing out the importance of computational ethics for multi-agent systems, and by giving some perspectives about the next steps of our work.

1 Symposium on Roboethics - www.roboethics.org; International Conference on Computer Ethics and Philosophical Enquiry - philevents.org/event/show/15670; Workshop on AI and Ethics, AAAI conference - www.cse.unsw.edu.au/~tw/aiethics; International Conference on AI and Ethics - wordpress.csc.liv.ac.uk/va/2015/02/16/.
2. ETHICS AND AUTONOMOUS AGENTS
We first introduce in Section 2.1 the moral philosophy concepts on which we base our approach, and review in Section 2.2 existing autonomous agent architectures that propose ethical behaviors. Finally, Section 2.3 points out the principles of our approach.
2.1 Moral philosophy concepts
From ancient philosophers to recent works in neurology [10] and cognitive sciences [16], many studies have been interested in the capability of human beings to define and distinguish between the fair, rightful and good options and the invidious, iniquitous and evil options. From the various discussions in moral philosophy on concepts like morals, ethics, judgment or values, we consider the following definitions:

Definition 1. Morals consists in a set of moral rules which describes the compliance of a given behavior with the mores, values and usages of a group or a single person. These rules associate a good or bad value with some combinations of actions and contexts. They can be specific or universal, i.e. related or not to a period, a place, a folk, a community, etc.

Everyone knows many moral rules such as "Lying is evil", "Being loyal is good" or "Cheating is bad". This kind of rule grounds our ability to distinguish between good and evil. Morals can be distinguished from law and legal systems in the sense that there are no explicit penalties, officials or written rules [15].
Moral rules are often supported and justified by some moral values (e.g. freedom, benevolence, wisdom, conformity). Psychologists, sociologists and anthropologists almost agree that moral values are central in the evaluation of actions, people and events [31].

A set of moral rules and moral values establishes a theory of the good, which allows humans to assess the goodness or badness of a behavior, and theories of the right, which define some criteria to recognize a fair or, at least, acceptable option (also respectively named theory of values and theories of right conduct [32]). For example, even if stealing can be considered immoral (regarding a theory of the good), some philosophers agree that it is acceptable for a starving orphan to steal an apple in a supermarket (regarding a theory of the right). Humans commonly accept many situations where it is right and fair to satisfy needs or desires, even if it is not acceptable from a set of moral rules and values. The description of this conciliation is called ethics and, relying on philosophers such as Paul Ricoeur [28], we admit the following definition:
Definition 2. Ethics is a normative practical philosophical discipline of how humans should act and be toward others. Ethics uses ethical principles to conciliate the morals, desires and capacities of the agent.

Philosophers proposed various ethical principles, such as Kant's Categorical Imperative [18] or Thomas Aquinas' Doctrine of Double Effect [24], which are sets of rules that allow to distinguish an ethical option from a set of possible options. Traditionally, three major approaches are considered in the literature:

- Virtue ethics, where an agent is ethical if and only if he2 acts and thinks according to some values such as wisdom, bravery, justice, and so on [17].

- Deontological ethics, where an agent is ethical if and only if he respects obligations and permissions related to possible situations [2].

- Consequentialist ethics, where an agent is ethical if and only if he weighs the morality of the consequences of each choice and chooses the option which has the most moral consequences [33].
However, in some unusual situations, an ethical principle is unable to give a different valuation (a preference) between two options. Those situations, called dilemmas, are choices between two options, each supported by ethical reasons, given that the execution of both is not possible [22]. Each option will bring some regret. Many famous dilemmas, such as the trolley problem [12], are perceived as failures in morals or ethics or, at least, as an interesting question about the human ability to judge ethically and to provide a rational explanation of this judgment. In this article, we consider a dilemma as a choice for which an ethical principle is not able to indicate the best option, regarding a given theory of the good. When facing a dilemma, an agent can consider several principles in order to find a suitable solution. That is why an autonomous artificial agent must be able to understand a broad range of principles, and must be able to judge which principle leads to the most satisfying decision.

Indeed, the core of ethics is the judgment. It is the final step to make a decision and it evaluates each choice, with respect to the agent's desires, morals, abilities and ethical principles. Relying on some consensual references [1] and our previous definitions, we consider the following definition:

Definition 3. Judgment is the faculty of distinguishing the most satisfying option in a situation, regarding a set of ethical principles, for ourselves or someone else.

If an agent is facing two possible choices, both with good and/or bad effects (e.g. kill or be killed), the ethical judgment allows him to make a decision in conformity with a set of ethical principles and preferences.
2.2 Ethics and autonomous agents
Considering all these notions, many frameworks have been developed in order to design autonomous agents embedded with an individual ethics. They are related to ethics by design, ethics by casuistry, logic-based ethics and ethical cognitive architectures.

Ethics by design consists in designing an ethical agent by an a priori analysis of every situation the agent may encounter and by implementing, for each situation, a way to avoid potential unethical behaviors. This approach can be a direct and safe implementation of rules (e.g. the military rules of engagement for an armed drone [4]). Its main drawback is the lack of explicit representation of any generic ethical concepts (such as morals, ethics, etc.). Moreover, it is not possible to measure a kind of similarity or distance between two ethics by design because they are not explicitly described. As a result, conceiving cooperative heterogeneous agents with different desires and principles, but without an explicit representation of ethics, is difficult and only permits the implementation of a strict deontological principle by a direct implementation of rules.

2 In this section, we consider agents in terms of philosophy, not only in terms of computer science.
Casuistry aims first at inferring ethical rules from a large set of ethical judgment examples produced by some experts, and second at using these rules to produce an ethical behavior [3]. Even if this approach offers a generic architecture for every application field, human expertise is still necessary to describe many situations. Moreover, the agent's ethical behavior is still not guaranteed (due to under- or over-learning). The agent's knowledge is still not explicitly described and ethical reasoning is made by proximity and not by deduction. Consequently, cooperation between heterogeneous agents with different desires and principles is still difficult.

Logic-based ethics is the direct translation of some well-known and formally defined ethical principles (such as Kant's Categorical Imperative or Thomas Aquinas' Doctrine of Double Effect) into logic programming [13, 14, 30]. The main advantage of this method is the formalization of theories of the right, even if theories of the good are commonly considered simply as parameters. Consequently, it only permits to judge an option with respect to a single ethical principle.

Finally, cognitive ethical architectures consist in full explicit representations of each component of the agent, from the classical beliefs (information on the environment and other agents), desires (goals of the agent) and intentions (the chosen actions) to concepts such as heuristics or emotional machinery [5, 8, 9]. Even if this kind of agent is able to use explicit norms and to justify its decisions, explicit reasoning on other agents' ethics is not implemented.
2.3 Requirements for judgment in MAS
The approaches presented in the previous section propose interesting methods and models to design a single ethical autonomous agent. However, in a multi-agent system, agents may need to interact and work together to share resources, exchange data or perform actions collectively. Previous approaches often consider the other agents of the system as environmental elements whereas, in a collective perspective, agents need to represent, to judge and to take into account the other agents' ethics. We identify two major needs to design ethical agents in MAS:

- Agents need an explicit representation of ethics, as suggested by the theory of mind. Indeed, the ethics of others can only be understood through an explicit representation of individual ethics [19]. In order to express and conciliate as many moral and ethical theories as possible, we propose both to split their representations into several parts and to use preferences on ethical principles. Thus, we propose to represent both theories of the good, split between moral values and moral rules, and theories of the right, split between ethical principles and the agents' ethical preferences. Such representations also ease the agents' configuration by non-specialists of artificial intelligence and ease the communication with other agents, including humans.

- Agents need an explicit process of ethical judgment in order to allow them both individual and collective reasoning on various theories of the good and the right. According to the previous definitions, we consider judgment as an evaluation of the conformity of a set of actions regarding given values, moral rules, ethical principles and preferences, and we propose different kinds of judgment based on the ability to substitute the morals or the ethics of an agent by another one's. Thus, we propose that agents use judgment both as a decision-making process, as in social choice problems [21], and as the ability to judge other agents according to their behaviors.

In the sequel, we describe the generic model we propose to enable agents to judge the ethical dimension of behaviors, be they their own or those of others.
3. ETHICAL JUDGMENT PROCESS
In this section we introduce our generic judgment architecture. After a short global presentation, we detail each function and explain how they operate.

3.1 Global view
As explained in Section 2.1, ethics consists in conciliating desires, morals and abilities. To take these dimensions into account, the generic ethical judgment process (EJP) uses evaluation, moral and ethical knowledge. It is structured along Awareness, Evaluation, Goodness and Rightness processes (see components in Fig. 1). In this article, we consider it in the context of a BDI model, also using mental states such as beliefs and desires. For simplicity reasons, we only consider ethical judgment reasoning with a short-term view, by considering behaviors as actions. This model is only based on mental states and is not dependent on a specific architecture.
[Figure 1: Ethical judgment process. The figure shows the data flow from the world state W through the Awareness Process (the situation assessment SA producing the beliefs B and desires D), the Evaluation Process (the functions DE and CE producing the desirable actions Ad and the capable actions Ac from the action base A), the Goodness Process (the function ME using the knowledge bases VS and MR to produce the moral actions Am), and the Rightness Process (the functions EE and J using the ethical principles P and the ethical preferences to produce the rightful actions Ar).]
Definition 4. An ethical judgment process EJP is defined as a composition of an Awareness Process (AP), an Evaluation Process (EP), a Goodness Process (GP), a Rightness Process (RP) and an Ontology O (O = Ov ∪ Om) of moral values (Ov) and moral valuations (Om). It produces an assessment of actions from the current state of the world W with respect to moral and ethical considerations:

    EJP = ⟨AP, EP, GP, RP, O⟩

This model should be considered as a global scheme, composed of abstract functions, states and knowledge bases. These functions can be implemented in various ways. For instance, moral valuations from Om may be discrete, such as {good, evil}, or continuous, such as a degree of goodness.
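For instance, a discrete instantiation of the ontology could be declared with a handful of facts, in the ASP style of the proof of concept of Section 5 (a minimal sketch; the predicate names moralValue/1 and moralValuation/1 are assumptions of this sketch, not imposed by the model):

% Moral values (Ov) and discrete moral valuations (Om), as plain facts.
moralValue(generosity). moralValue(honesty).
moralValuation(good). moralValuation(amoral). moralValuation(immoral).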
3.2 Awareness and evaluation processes
In this process, agents must first assess the state of the world in terms of beliefs and desires through an awareness process.

Definition 5. The awareness process AP generates the set of beliefs that describes the current situation from the world W, and the set of desires that describes the goals of the agent. It is defined as:

    AP = ⟨B, D, SA⟩

where B is the set of beliefs that the agent has about W, D is the set of the agent's desires, and SA is a situation assessment function that updates B and D from W:

    SA : W → 2^(B ∪ D)

From its beliefs B and desires D, an agent executes the evaluation process EP to assess both desirable actions (i.e. actions that allow to satisfy a desire) and executable actions (i.e. actions that can be applied according to the current beliefs about the world).

Definition 6. The evaluation process EP produces desirable actions and executable actions from the set of beliefs and desires. It is defined as:

    EP = ⟨A, Ad, Ac, DE, CE⟩

where A is the set of actions (each action is described as a pair of conditions and consequences bearing on beliefs and desires), Ad ⊆ A and Ac ⊆ A are respectively the sets of desirable and executable actions, and the desirability evaluation DE and capability evaluation CE are functions such that:

    DE : 2^D × 2^A → 2^Ad
    CE : 2^B × 2^A → 2^Ac

The desirability evaluation is the ability to deduce the interesting actions to perform regarding the desires and the knowledge of the conditions and consequences of actions. Having defined the awareness and evaluation processes, we can now turn to the core of the judgment process, which deals with the use of moral rules (resp. ethical principles) for defining the goodness process (resp. the rightness process).
3.3 Goodness Process
As seen in the state of the art, an ethical agent must assess the morality of actions given a situation assessment. To that purpose, we define the goodness process.

Definition 7. A goodness process GP identifies moral actions given the agent's beliefs and desires, the agent's actions and a representation of the agent's moral values and rules. It is defined as:

    GP = ⟨VS, MR, Am, ME⟩

where VS is the knowledge base of value supports, MR is the moral rules knowledge base, and Am ⊆ A is the set of moral actions3. The moral evaluation function ME is:

    ME : 2^D × 2^B × 2^A × 2^VS × 2^MR → 2^Am

In order to realize this goodness process, an agent must first be able to associate a finite set of moral values with combinations of actions and situations. The execution of the actions in these situations promotes the corresponding moral values. We consider several combinations for each moral value: for instance, honesty could be supported both by "avoiding telling something when it is incompatible with our own beliefs" (because it is lying) and by "telling our own beliefs to someone when he believes something else" (to avoid lying by omission).
Definition 8. A value support is a tuple ⟨s, v⟩ ∈ VS where v ∈ Ov is a moral value, and s = ⟨a, w⟩ is the support of this moral value, where a ∈ A and w ⊆ B ∪ D.

The precise description of a moral value relies on the language used to represent beliefs, desires and actions. For instance, from this definition, generosity supported by "giving to any poor agent" and honesty supported by "avoiding telling something when it is incompatible with our own beliefs" may be represented by:

    ⟨⟨give(α), {belief(poor(α))}⟩, generosity⟩
    ⟨⟨tell(α, φ), {belief(φ)}⟩, honesty⟩

where α represents any agent, and poor(α) (resp. φ) is a belief representing the context for which executing the action give(α) (resp. tell(α, φ)) supports the value generosity (resp. honesty).
In addition to moral values, an agent must be able to represent and to manage moral rules. A moral rule describes the association of a moral valuation (for instance in a set such as {moral, amoral, immoral}) with actions or moral values in a given situation.

Definition 9. A moral rule is a tuple ⟨w, o, m⟩ ∈ MR where w is a situation of the current world described by w ⊆ B ∪ D, interpreted as a conjunction of beliefs and desires, o = ⟨a, v⟩ where a ∈ A and v ∈ Ov, and m ∈ Om is a moral valuation that qualifies o when w holds.

Some rules are very common, such as "killing a human is immoral" or "being honest with a liar is quite good". For instance, those rules can be represented as follows:

    ⟨{human(α)}, ⟨kill(α), _⟩, immoral⟩
    ⟨{liar(α)}, ⟨_, honesty⟩, quite_good⟩

3 Am is not necessarily included in Ad ∩ Ac because an action might be moral by itself even if it is not desired or feasible.
A moral rule can be more or less specific depending on the situation w or on the object o. For instance, "Justice is good" is more general (having fewer combinations in w or o, thus applying in a larger number of situations) than "To judge a murderer, considering religion, skin, ethnic origin or political opinion is bad". Classically, moral theories are classified in three approaches (refer to Section 2.1). Using both moral values and moral rules as defined above, we can represent such theories:

- A virtuous approach uses general rules based on moral values (e.g. "Being generous is good"),

- A deontological approach classically considers specific rules concerning actions in order to describe as precisely as possible the moral behavior (e.g. "Journalists should deny favored treatment to advertisers, donors or any other special interests and resist internal and external pressure to influence coverage"4),

- A consequentialist approach uses both general and specific rules concerning states and consequences (e.g. "Every physician must refrain, even outside the exercise of his profession, from any act likely to discredit it"5).

4 Extract of [27], section "Act Independently".
5 French code of medical ethics, article 31.
3.4 Rightness process
From the sets of possible, desirable and moral actions, we can introduce the rightness process, which aims at assessing the rightful actions. As shown in Section 2, an ethical agent can use several ethical principles to conciliate these sets of actions.

Definition 10. A rightness process RP produces rightful actions given a representation of the agent's ethics. It is defined as:

    RP = ⟨P, ≻e, Ar, EE, J⟩

where P is a knowledge base of ethical principles, ≻e ⊆ P × P is an ethical preference relationship, Ar ⊆ A is the set of rightful actions, and EE (evaluation of ethics) and J (judgment) are two functions such that:

    EE : 2^Ad × 2^Ac × 2^Am × 2^P → 2^E   where E = A × P × {⊤, ⊥}
    J : 2^E × 2^≻e → 2^Ar
An ethical principle is a function which represents a philosophical theory and evaluates whether it is right or wrong to execute a given action in a given situation with regard to this theory.

Definition 11. An ethical principle p ∈ P is a function that describes the rightness of an action evaluated in terms of capabilities, desires and morality in a given situation. It is defined as:

    p : 2^A × 2^B × 2^D × 2^MR × 2^VS → {⊤, ⊥}

The ethics evaluation function EE returns the evaluation of all desirable (Ad), possible (Ac) and moral (Am) actions given the set P of known ethical principles.
For instance, let us consider three agents in the following situation, inspired by the one presented by Benjamin Constant to counter Immanuel Kant's categorical imperative. An agent A hides in an agent B's house in order to escape an agent C, and C asks B where A is in order to kill him, threatening to kill B in case of non-cooperation. B's moral rules are "prevent murders" and "do not lie". B's desire is to avoid any trouble with C. B knows the truth and can consider one of the following actions: tell C the truth (satisfying a moral rule and a desire), lie, or refuse to answer (both of the latter satisfying a moral rule). B knows three ethical principles (which are abstracted in P by functions):

P1 If an action is possible and motivated by at least one moral rule or desire, do it,

P2 If an action is forbidden by at least one moral rule, avoid it,

P3 Satisfy the doctrine of double effect6.

6 Meaning: do an action only if the four following conditions are satisfied at the same time: the action in itself, from its very object, is good or at least indifferent; the good effect and not the evil effect is intended (and the good effect cannot be attained without the bad effect); the good effect is not produced by means of the evil effect; and there is a proportionately grave reason for permitting the evil effect [24].
B's ethics evaluation returns the tuples given in Table 1, where each row represents an action and each column an ethical principle.

    Action          P1   P2   P3
    tell the truth  ⊤    ⊥    ⊤
    lie             ⊤    ⊥    ⊥
    refuse          ⊤    ⊤    ⊤

    Table 1: Ethical evaluation of agent B's actions
Given the set of evaluations E issued from the ethics evaluation function, the judgment J is the last step, which selects the rightful action to perform, considering a set of ethical preferences (defining a partial or total order on the ethical principles). To pursue the previous example, let us suppose that B's ethical preferences are P3 ≻e P2 ≻e P1 and that J uses a tie-breaking rule based on a lexicographic order. Then "refusing to answer" is the rightful action because it satisfies P3 whereas "lying" does not. Even if "telling the truth" satisfies the most preferred principle, "refusing to answer" is more rightful because it also satisfies P2. Let us notice that the judgment allows dilemmas: without the tie-breaking rule, both "telling the truth" and "refusing to answer" are the most rightful actions.
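As an illustration, the following is a minimal ASP sketch of this example in the style of the proof of concept of Section 5. The predicates possible/1, motivated/1, forbidden/1 and satisfiesDDE/1, together with the facts mirroring Table 1, are assumptions introduced for this sketch only, and the lexicographic tie-breaking rule is deliberately left out, so the program returns both actions satisfying P3, which is exactly the dilemma mentioned above.

% Agent B's three candidate actions.
action(tell_truth). action(lie). action(refuse).

% Facts encoding B's situation (assumed for illustration, mirroring Table 1).
possible(A) :- action(A).
motivated(tell_truth). motivated(lie). motivated(refuse).
forbidden(tell_truth). forbidden(lie).
satisfiesDDE(tell_truth). satisfiesDDE(refuse).

% P1: a possible action motivated by a moral rule or a desire is right.
ethPrinciple(p1,A) :- possible(A), motivated(A).
% P2: an action not forbidden by any moral rule is right.
ethPrinciple(p2,A) :- possible(A), not forbidden(A).
% P3: an action satisfying the doctrine of double effect is right.
ethPrinciple(p3,A) :- possible(A), satisfiesDDE(A).

% Ethical preferences: P3 is preferred to P2, which is preferred to P1.
prefEthics(p3,p2). prefEthics(p2,p1).
prefEthics(X,Z) :- prefEthics(X,Y), prefEthics(Y,Z).

% Judgment without tie-breaking: keep the actions satisfying the most
% preferred principle that is satisfied by at least one action.
existBetter(P,A) :- ethPrinciple(P,A), prefEthics(Q,P), ethPrinciple(Q,_).
ethicalJudgment(P,A) :- ethPrinciple(P,A), not existBetter(P,A).
% The answer set contains ethicalJudgment(p3,tell_truth) and
% ethicalJudgment(p3,refuse): the dilemma arising without tie-breaking.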
4. ETHICAL JUDGMENT OF OTHERS
The judgment process described in the previous section is useful for an agent to judge its own behavior, namely one action considering its own beliefs, desires and knowledge. However, it can also judge the behaviors of other agents in a more or less informed way by putting itself in their place, partially or not.

Given an EJP as defined in the previous section, the states B, D, Ad, Ac, E, Am, the knowledge of actions (A), the goodness knowledge – theory of the good – (MR, VS) and the rightness knowledge – theory of the right – (P, ≻e) may be shared between the agents. The ontology O is assumed to be common knowledge, even if we could consider several ontologies in future works. The way they are shared can take many forms, such as common knowledge, direct communication, inferences, and so on, which are beyond the scope of this article. In any case, we distinguish three categories of ethical judgments:
- Blind ethical judgment, where the judgment of the judged agent is realized without any information about this agent, except a behavior,

- Partially informed ethical judgment, where the judgment of the judged agent is realized with some information about this agent,

- Fully informed ethical judgment, where the judgment of the judged agent is realized with a complete knowledge of the states and knowledge used within the judged agent's judgment process.
In all kinds of judgment, the judging agent reasons on its own beliefs or on those of the judged one. This kind of judgment can be compared to the role of the human theory of mind [19] in human judgment (the ability for a human to put himself in the place of another). The judging agent then uses its EJP and compares the resulting Ar and Am to the behavior of the judged agent. If the action performed by the judged agent is in Ar, it means that it is a rightful behavior, and if it is in Am, it means that it is a moral behavior (being in both is stated as a rightful and moral behavior). Both statements have to be considered with respect to the context of the situation, the theory of the good and the theory of the right that are used to judge. We consider that this ethical judgment is always relative to the states, knowledge bases and ontology used to execute the judgment process.
4.1 Blind ethical judgment
The first kind of judgment an agent can make is without any information about the morals and ethics of the judged agent (for instance when agents are unable or do not want to communicate). Consequently, the judging agent aj uses its own assessment of the situation (Baj and Daj)7, its own theory of the good ⟨MRaj, VSaj⟩ and theory of the right ⟨Paj, ≻e,aj⟩ to evaluate the behavior of the judged agent at. This is an a priori judgment: at is judged as not considering rightful actions (resp. moral actions) if the action αat ∉ Ar,aj (resp. αat ∉ Am,aj).
4.2 Partially informed ethical judgment
The second kind of judgment that an agent can make is grounded on partial information about the judged agent, in case the judging agent is able to acquire parts of the knowledge of the judged agent (e.g. by perception or communication). Three partial ethical judgments can be considered, knowing either (i) the situation (i.e. Bat, Dat, Aat), or (ii) the theory of the good (i.e. ⟨VSat, MRat⟩) and Aat8, or (iii) the theory of the right (i.e. ⟨Pat, ≻e,at⟩) of the judged agent.

7 We use the subscript notation to denote the agent handling the represented set of information.

8 In this case, Aat is necessary as, contrary to ethical principles, moral rules can explicitly refer to specific actions.
Situation-aware ethical judgment.
First, if the judging agent aj knows the beliefs Bat and desires Dat of the judged agent at, aj can put itself in the position of at and can judge whether the action α executed by at belongs to Ar,aj, considering its own theories. aj is able to evaluate the morality of α by generating Am,at from Aat and to qualify the morality of at's behavior (i.e. whether α is in Am,at or not). The agent aj can go a step further by generating Ar,at from the generated Am,at to check whether α conforms to the rightness process, i.e. belongs to Ar,at.
Theory-of-good-aware ethical judgment.
Second, if the judging agent is able to obtain the moral rules and values of the judged one, it is possible to evaluate the actions in a situation (shared or not) regarding these rules. From a simple moral evaluation perspective, the judging agent can compare the theories of the good by checking whether the value supports VSat or the moral rules MRat are consistent with its own theory of the good (i.e. the same definition as aj's one or, at least, no contradiction). From a moral judgment perspective, the judging agent can evaluate the morality of a given action from the point of view of the judged one. Interestingly, this judgment allows to judge an agent that has different duties (due to a role or some special responsibilities for instance), as a human being can judge a physician on the conformity between his behavior and a medical code of deontology.
Theory-of-right-aware ethical judgment.
Third, let us now consider the case of a judging agent able to reason on the ethical principles and preferences of other agents, considering a situation (shared or not) and a theory of the good (shared or not)9. It allows the judging agent to evaluate how the judged agent at conciliates its desires, moral rules and values in a situation, by comparing the sets of rightful actions Ar,aj and Ar,at respectively generated by the use of ⟨Paj, ≻e,aj⟩ and ⟨Pat, ≻e,at⟩. For instance, if Ar,aj = Ar,at with an unshared theory of the good, it shows that their theories of the right produce the same conclusions in this context. This judgment can be useful for an agent to estimate how another agent would judge it with a given goodness process.

9 If both the situation and the theory of the good are shared, it is a fully informed judgment (see Section 4.3).
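As a rough sketch of such a comparison (the predicate rightful/3 and the facts below are illustrative assumptions of ours, not part of the model), the judging agent could tag the rightful actions obtained from its own theory of the right and from the judged agent's one, and then look for agreements and disagreements:

% rightful(View, Action, Target): illustrative facts for the two viewpoints.
rightful(own,   give, paul).
rightful(other, give, paul).
rightful(other, steal, prince_john).

% Actions rightful under both theories of the right.
agreeOn(X,B)   :- rightful(own,X,B), rightful(other,X,B).
% Actions rightful only under the judged agent's theory of the right.
onlyOther(X,B) :- rightful(other,X,B), not rightful(own,X,B).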
4.3 Fully informed judgment
Finally, a judging agent can consider both the goodness and the rightness processes to judge another agent. This kind of judgment needs information about all the internal states and knowledge bases of the judged agent. It is useful to check the conformity of the behavior of another agent with the judge's information about its theories of the good and the right.
5. PROOF OF CONCEPT
In this section we illustrate how each part of the model presented in the previous sections works, through a multi-agent system implemented in Answer Set Programming (ASP). The complete source code is downloadable from a cloud service10. This proof of concept illustrates ethical agents in a multi-agent system where agents have beliefs (about richness, gender, marital status and nobility), desires, and their own judgment process. They are able to give to, court, tax and steal from others, or simply wait. We mainly focus on an agent named robin_hood.

10 https://cointe.users.greyc.fr/download/
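The listings below follow the common clingo dialect of ASP. Assuming the whole program is gathered in one file, say ejp.lp (both the file name and the choice of the clingo solver are our assumptions; the paper does not name a solver), all answer sets can be computed with:

clingo ejp.lp 0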
5.1 Awareness Process
In this example, the situation awareness function SA is not implemented and the beliefs are directly given in the program. The following code represents a subset of the beliefs of robin_hood:

agent(paul).
agent(prince_john).
agent(marian).
-poor(robin_hood).
-married(robin_hood).
-man(marian).
rich(prince_john).
man(prince_john).
noble(prince_john).
poor(paul).
The set of desires D contains robin_hood's desires. In our implementation we consider two kinds of desires: desires to accomplish an action (desirableAction) and desires to produce a state (desireState).

desirableAction(robin_hood,robin_hood,court,marian).
desirableAction(robin_hood,robin_hood,steal,A):-
    agent(A), rich(A).
desireState(prince_john,rich,prince_john).
-desireState(friar_tuck,rich,friar_tuck).

The first two desires concern actions: robin_hood desires to court marian and to steal from any rich agent. The next two desires concern states: prince_john desires to be rich, and friar_tuck desires to stay in poverty, regardless of the action to perform.
5.2 Evaluation Process
The agents' knowledge about actions A is described as labels associated with sets (possibly empty) of conditions and consequences. For instance, the action give is described as:

action(give).
condition(give,A,B):-
    agent(B), agent(A), A!=B, not poor(A).
consequence(give,A,B,rich,B):- agent(A), agent(B).
consequence(give,A,B,poor,A):- agent(A), agent(B).

A condition is a conjunction of beliefs (here the fact that A is not poor). The consequence of an action is a clause composed of the new belief generated by the action and the agent concerned by this consequence. The desirability evaluation DE (see Definition 6) deduces the set of actions Ad. An action is in Ad if it is directly desired (in D) or if its consequences produce a desired state:

desirableAction(A, B, X, C):-
    desireState(A,S,D), consequence(X,B,C,S,D).

The capability evaluation CE (see Definition 6) evaluates from beliefs and conditions the set of actions Ac. An action is possible if its conditions are satisfied:

possibleAction(A,X,B):- condition(X,A,B).
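For instance, with the desire and action descriptions above, the DE rule derives the following kind of atom (this is our reading of the listed rules, not output reproduced from the paper):

% desireState(prince_john,rich,prince_john) combined with
% consequence(give,A,prince_john,rich,prince_john) yields, for every agent A
% produced by the grounding, atoms of the form:
%   desirableAction(prince_john, A, give, prince_john)
% i.e. prince_john finds it desirable that any agent gives to him.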
5.3 Goodness Process
In the goodness process, value supports VS are implemented as (for instance):

generous(A,give,B) :- A != B, agent(A), agent(B).
-generous(A,steal,B):- A != B, agent(A), agent(B).
-generous(A,tax,B) :- A != B, agent(A), agent(B).

Then, we can express the agents' moral rules for each ethical approach (see Sections 2.1 and 3.3). An example of moral rule in a virtuous approach is:

moral(robin_hood,A,X,B):-
    generous(A,X,B), poor(B), action(X).

The morality evaluation ME gives the set of moral actions Am (see Section 3.3):

moralAction(A,X,B):- moral(A,A,X,B).
-moralAction(A,X,B):- -moral(A,A,X,B).

and produces as results:

moralAction(robin_hood,give,paul)
-moralAction(robin_hood,tax,paul)

In this example, we only present a virtuous approach. However, examples of deontological and consequentialist approaches are also given in our downloadable code.
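As an illustration of what such rules might look like in the same style (these two rules are our own sketch, not extracted from the released code), a deontological rule can target a specific action and a consequentialist rule can target the consequences of actions:

% Deontological approach (illustrative sketch): a specific rule on an action,
% regardless of the values it supports. "Stealing from a poor agent is immoral."
-moral(robin_hood,A,steal,B) :- agent(A), poor(B), A != B.

% Consequentialist approach (illustrative sketch): a rule on the consequences
% of an action. "An action whose consequence makes a poor agent rich is moral."
moral(robin_hood,A,X,B) :- consequence(X,A,B,rich,C), poor(C).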
5.4 Rightness Process
In order to evaluate each action, we define several naive ethical principles that illustrate priorities between moral and desirable actions. For instance, here is the perfAct (for perfect action, i.e. a moral, desirable and possible action) principle:

ethPrinciple(perfAct,A,X,B):-
    possibleAction(A,X,B),
    desirableAction(A,A,X,B),
    not -desirableAction(A,A,X,B),
    moralAction(A,X,B),
    not -moralAction(A,X,B).
If paul is the only poor agent, marian is not married and robin_hood is not poor, robin_hood obtains the evaluation given in Figure 2.

[Figure 2: Ethical evaluation E of the actions. Rows list robin_hood's candidate intentions (give, steal, court or wait applied to paul, little_john, marian, prince_john, peter or robin_hood); columns list the ethical principles perfAct, dutNR, desNR, dutFst, nR and desFst; each cell indicates whether the principle holds (⊤) or not (⊥) for the intention.]
All principles are ordered with respect to robin_hood's preferences:

prefEthics(robin_hood,perfAct,dutNR).
prefEthics(robin_hood,dutNR,desNR).
prefEthics(robin_hood,desNR,dutFst).
prefEthics(robin_hood,dutFst,nR).
prefEthics(robin_hood,nR,desFst).
prefEthics(A,X,Z):-
    prefEthics(A,X,Y), prefEthics(A,Y,Z).

The first five facts describe the order on the ethical principles. The last rule defines transitivity for the preference relationship (here perfAct ≻e dutNR ≻e desNR ≻e dutFst ≻e nR ≻e desFst).

Finally, the judgment J is implemented as:

existBetter(PE1,A,X,B):-
    ethPrinciple(PE1,A,X,B),
    prefEthics(A,PE2,PE1),
    ethPrinciple(PE2,A,Y,C).
ethicalJudgment(PE1,A,X,B):-
    ethPrinciple(PE1,A,X,B),
    not existBetter(PE1,A,X,B).

Consequently, the rightful action ar for robin_hood is ⟨give, paul⟩, which complies with dutNR.
5.5 Multi-agent ethical judgment
In order to allow a blind judgment, we introduce a new belief about the behavior of another agent:

done(little_john,give,peter).

Then robin_hood compares its own rightful action and this belief to judge little_john with:

blindJudgment(A,ethical,B):-
    ethicalJudgment(_,A,X,C), done(B,X,C), A!=B.
blindJudgment(A,unethical,B):-
    not blindJudgment(A,ethical,B),
    agent(A), agent(B),
    done(B,_,_), A!=B.

In this example, the action give to peter is not in Ar for robin_hood. Then little_john is judged unethical by robin_hood.

For a partial-knowledge judgment, we replace a part of robin_hood's knowledge and states by those of little_john. With the beliefs of little_john (which believes that peter is a poor agent and paul is a rich one), robin_hood judges him as ethical.
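One possible way to perform this substitution (this encoding is our own sketch; the released code may proceed differently) is to index the judged agent's beliefs by that agent and to project them into the judging agent's belief base before re-grounding the judgment rules:

% Beliefs attributed to little_john (illustrative facts).
believes(little_john, poor, peter).
believes(little_john, rich, paul).

% Projection used only when judging little_john: the judging agent's own
% poor/1 and rich/1 facts would be removed or renamed beforehand, so that the
% goodness and rightness rules are re-grounded on little_john's beliefs.
poor(X) :- believes(little_john, poor, X).
rich(X) :- believes(little_john, rich, X).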
Finally, for a full-knowledge judgment, we replace all the beliefs, desires and knowledge bases of the agent robin_hood by those of little_john. Then, robin_hood is able to reproduce the whole Ethical Judgment Process of little_john and to compare both judgments of the same action.
6. RELATED WORKS
We adopt in this article a fully rationalist approach (based on reasoning, not emotions), but some other works propose close approaches [34, 6]. The main specificity of our work is the avoidance of any representation of emotions, in order to be able to justify the behavior of an agent in terms of moral values, moral rules and ethical principles, and to ease the evaluation of its conformity with a code of deontology or any given ethics.

On the one hand, [6] is a fully intuitionist approach which evaluates plans from an emotional appraisal. Values are only a source of emotions, and influence the construction of plans by an anticipatory emotional appraisal. From our point of view, values and goals (desires) must be separated, because agents must be able to distinguish desires from moral motivations and may have to explain how to conciliate them.

On the other hand, [34] is a logic-based approach to modeling moral reasoning with deontic constraints. This model is a way to implement a theory of the good and is used to implement model checking of moral behavior [11]. However, ethical reasoning is only considered in [34] as meta-level reasoning and is only suggested as the adoption of a less restrictive model of behavior. In this perspective, our work precisely focuses on the need to represent the theory of the right as a set of principles in order to address the issue of moral dilemmas.
7. CONCLUSION
In order to act collectively in conformity with a given ethics and morals, an autonomous agent needs to be able to evaluate the rightness/goodness of both its own behavior and the behaviors of others. Based on concepts from moral philosophy, we proposed in this article a generic judgment ability for autonomous agents. This process uses explicit representations of elements such as moral values, moral rules and ethical principles. We illustrated how this model allows comparing the ethics of different agents. Moreover, this ethical judgment model has been designed as a module to be plugged into existing architectures to provide an ethical layer in an existing decision process. As this judgment process can be used with information on moral values, moral rules, ethical principles and preferences which are shared by a collective of agents, our approach defines a guideline for a forthcoming definition of collective ethics.

Even if this article presents a framework to implement a given ethics and to use it to provide judgment, the model is still based on a qualitative approach. Whereas we can define several moral valuations, there are no degrees of desire, no degrees of capability, and no degrees of rightfulness. Moreover, ethical principles need to be more precisely defined in order to capture the various sets of theories suggested by philosophers.

Thus, our future work will first be directed towards exploring various uses of this ethical judgment through the implementation of existing codes of conduct (e.g. medical and financial deontologies) in order to assess the genericity of our approach. Second, we intend to extend our model to quantitative evaluations in order to assess how far from rightfulness or goodness a behavior is. Indeed, such an extension would be useful to define a degree of similarity between two morals or two ethics, to facilitate the distinction between different ethics from an agent's perspective.

Acknowledgment
The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR) under reference ANR-13-CORD-0006.
REFERENCES
[1] Ethical judgment. Free Online Psychology Dictionary,
August 2015.
[2] L. Alexander and M. Moore. Deontological ethics. In
Edward N. Zalta, editor, The Stanford Encyclopedia of
Philosophy. Spring edition, 2015.
[3] M. Anderson and S.L. Anderson. Toward ensuring
ethical behavior from autonomous systems: a
case-supported principle-based paradigm. Industrial
Robot: An International Journal, 42(4):324–331, 2015.
[4] R. Arkin. Governing lethal behavior in autonomous
robots. CRC Press, 2009.
[5] K. Arkoudas, S. Bringsjord, and P. Bello. Toward
ethical robots via mechanized deontic logic. In AAAI
Fall Symposium on Machine Ethics, pages 17–23, 2005.
[6] C. Battaglino, R. Damiano, and L. Lesmo. Emotional
range in value-sensitive deliberation. In 12th
International Conference on Autonomous agents and
multi-agent systems, pages 769–776, 2013.
[7] G. Boella, G. Pigozzi, and L. van der Torre.
Normative systems in computer science - Ten
guidelines for normative multiagent systems. In
Normative Multi-Agent Systems, Dagstuhl Seminar
Proceedings, 2009.
[8] H. Coelho and A.C. da Rocha Costa. On the
intelligence of moral agency. Encontro Português de
Inteligência Artificial, pages 12–15, October 2009.
[9] H. Coelho, P. Trigo, and A.C. da Rocha Costa. On the
operationality of moral-sense decision making. In 2nd
Brazilian Workshop on Social Simulation, pages 15–20,
2010.
[10] A. Damasio. Descartes’ error: Emotion, reason and
the human brain. Random House, 2008.
[11] L.A. Dennis, M. Fisher, and A.F.T. Winfield. Towards
verifiably ethical robot behaviour. In 1st International
Workshop on AI and Ethics, 2015.
[12] P. Foot. The problem of abortion and the doctrine of
the double effect. Oxford Review, pages 5–15, 1967.
[13] J.-G. Ganascia. Ethical system formalization using
non-monotonic logics. In 29th Annual Conference of
the Cognitive Science Society, pages 1013–1018, 2007.
[14] J.-G. Ganascia. Modelling ethical rules of lying with
Answer Set Programming. Ethics and information
technology, 9(1):39–47, 2007.
[15] B. Gert. The definition of morality. In Edward N.
Zalta, editor, The Stanford Encyclopedia of
Philosophy. Fall edition, 2015.
[16] J. Greene and J. Haidt. How (and where) does moral
judgment work? Trends in cognitive sciences,
6(12):517–523, 2002.
[17] R. Hursthouse. Virtue ethics. In Edward N. Zalta,
editor, The Stanford Encyclopedia of Philosophy. Fall
edition, 2013.
[18] R. Johnson. Kant’s moral philosophy. In Edward N.
Zalta, editor, The Stanford Encyclopedia of
Philosophy. Summer edition, 2014.
[19] K.-J. Kim and H. Lipson. Towards a theory of mind in
simulated robots. In 11th Annual Conference
Companion on Genetic and Evolutionary Computation
Conference, pages 2071–2076, 2009.
[20] P. Lin, K. Abney, and G.A. Bekey. Robot ethics: the
ethical and social implications of robotics. MIT press,
2011.
[21] W. Mao and J. Gratch. Modeling social causality and
responsibility judgment in multi-agent interactions. In
23rd International Joint Conference on Artificial
Intelligence, pages 3166–3170, 2013.
[22] T. McConnell. Moral dilemmas. In Edward N. Zalta,
editor, The Stanford Encyclopedia of Philosophy. Fall
edition, 2014.
[23] D. McDermott. Why ethics is a high hurdle for AI. In
North American Conference on Computing and
Philosophy, 2008.
[24] A. McIntyre. Doctrine of double effect. In Edward N.
Zalta, editor, The Stanford Encyclopedia of
Philosophy. Winter edition, 2014.
[25] B.M. McLaren. Computational models of ethical
reasoning: Challenges, initial steps, and future
directions. IEEE Intelligent Systems, 21(4):29–37,
2006.
[26] J.H. Moor. The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4):18–21, 2006.
[27] Society of Professional Journalists. Code of ethics,
September 2014.
[28] P. Ricoeur. Oneself as another. University of Chicago
Press, 1995.
[29] S. Russell, D. Dewey, M. Tegmark, A. Aguirre, E. Brynjolfsson, R. Calo, T. Dietterich, D. George, B. Hibbard, D. Hassabis, et al. Research priorities for robust and beneficial artificial intelligence. 2015. Available at futureoflife.org/data/documents/.
[30] A. Saptawijaya and L. Moniz Pereira. Towards
modeling morality computationally with logic
programming. In Practical Aspects of Declarative
Languages, pages 104–119. 2014.
[31] S.H. Schwartz. Basic human values: Theory,
measurement, and applications. Revue française de
sociologie, 47(4):249–288, 2006.
[32] M. Timmons. Moral theory: an introduction. Rowman
& Littlefield Publishers, 2012.
[33] W. Sinnott-Armstrong. Consequentialism. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Winter edition, 2015.
[34] V. Wiegel and J. van den Berg. Combining moral
theory, modal logic and MAS to create well-behaving
artificial agents. International Journal of Social
Robotics, 1(3):233–242, 2009.
... Normative ethical principles have previously been utilised in the context of choosing fairness metrics for binary ML algorithms in works such as Binns (2018) and Leben (2020). The implementation of normative ethical principles in the decision making of agents (acting entities that perform actions to achieve goals, which are decisions made using AI Pedamkar, 2021) has been used to enable agents to make ethical judgements in specific contexts (Cointe et al., 2016). They can also be applied to improve fairness considerations in systematic analysis (Saltz et al., 2019;Conitzer et al., 2017). ...
... We also find that there is a significant amount of research referencing 'Deontology' and 'Consequentialism' as broad terms, but not specifying what types of Deontology or Consequentialism they are referring to, for example Cointe et al. (2016), Greene et al. (2016), and Anderson and Anderson (2014). These works would perhaps benefit by more clearly specifying the ethical principles they are using, in order to allow for more precise operationalization. ...
... We find that the architecture must be defined as to whether the principles are integrated into reasoning capacities in a top-down, bottom-up, or hybrid approach. In addition to this, a definition of welfare is necessary in order to Deontology (Abney, 2011;Binns, 2018;Brink, 2007;Cointe et al., 2016;Greene, Rossi, Tasioulas, Venable, & Williams, 2016;Leben, 2020;Saltz et al., 2019;Wallach, Allen, & Smit, 2008) (Anderson & Anderson, 2014;Berreby, Bourgne, & Ganascia, 2017;Dehghani, Tomai, & Klenk, 2008;Honarvar & Ghasem-Aghaee, 2009;Limarga, Pagnucco, Song, & Nayak, 2020;Lindner et al., 2019;Robbins & Wallace, 2007) Egalitarianism (Binns, 2018;Cohen, 1989;Dworkin, 1981;Fleurbaey, 2008;Friedler, Scheidegger, & Venkatasubramanian, 2021;Leben, 2020;Murukannaiah, Ajmeri, Jonker, & Singh, 2020;Rawls, 1985;Sen, 1992) (Dwork, Hardt, Pitassi, Reingold, & Zemel, 2012) Proportionalism (Etzioni & Etzioni, 2016;Kagan, 1998;Leben, 2020) (Dwork et al., 2012) Kantian (Abney, 2011;Kant, 2011;Hagerty & Rubinov, 2019;Kim et al., 2021;Wallach et al., 2008) (Berreby et al., 2017;Limarga et al., 2020;Robbins & Wallace, 2007) Virtue (Abney, 2011;Anderson & Anderson, 2007;Brink, 2007;Cointe et al., 2016;Greene et al., 2016;Hagerty & Rubinov, 2019;Saltz et al., 2019;Wallach et al., 2008) (Govindarajulu, Bringsjord, Ghosh, & Sarathy, 2019;Honarvar & Ghasem-Aghaee, 2009;Robbins & Wallace, 2007) Consequentialism (Abney, 2011;Brink, 2007;Cointe et al., 2016;Cummiskey, 1990;Greene et al., 2016;Hagerty & Rubinov, 2019;Leben, 2020;Saltz et al., 2019;Sinnott-Armstrong, 2021;Suikkanen, 2017) (Berreby et al., 2017;Limarga et al., 2020) Utilitarianism (Abney, 2011;Anderson & Anderson, 2007;Brink, 2007;Honarvar & Ghasem-Aghaee, 2009;Kim et al., 2021;Leben, 2020;Mill, 1863;Wallach et al., 2008) Anderson, Anderson, & Armen, 2004;Berreby et al., 2017;Dehghani et al., 2008;Limarga et al., 2020;Lindner et al., 2019;Robbins & Wallace, 2007) Maximin (Leben, 2020;Rawls, 1967 understand what the 'good' is understood as, or what the principle is aiming for in its application. The operationalization of principles in reasoning largely divides into three camps in which actions are chosen either according to: (1) how the action adheres to certain rules, (2) by evaluating the consequences the action produces, or (3) through the development of virtues. ...
Preprint
Full-text available
The rapid adoption of artificial intelligence (AI) necessitates careful analysis of its ethical implications. In addressing ethics and fairness implications, it is important to examine the whole range of ethically relevant features rather than looking at individual agents alone. This can be accomplished by shifting perspective to the systems in which agents are embedded, which is encapsulated in the macro ethics of sociotechnical systems (STS). Through the lens of macro ethics, the governance of systems - which is where participants try to promote outcomes and norms which reflect their values - is key. However, multiple-user social dilemmas arise in an STS when stakeholders of the STS have different value preferences or when norms in the STS conflict. To develop equitable governance which meets the needs of different stakeholders, and resolve these dilemmas in satisfactory ways with a higher goal of fairness, we need to integrate a variety of normative ethical principles in reasoning. Normative ethical principles are understood as operationalizable rules inferred from philosophical theories. A taxonomy of ethical principles is thus beneficial to enable practitioners to utilise them in reasoning. This work develops a taxonomy of normative ethical principles which can be operationalized in the governance of STS. We identify an array of ethical principles, with 25 nodes on the taxonomy tree. We describe the ways in which each principle has previously been operationalized, and suggest how the operationalization of principles may be applied to the macro ethics of STS. We further explain potential difficulties that may arise with each principle. We envision this taxonomy will facilitate the development of methodologies to incorporate ethical principles in reasoning capacities for governing equitable STS.
... We place ourselves in the intersection of normative ethics and automated planning. Past research in this subject aimed to apply ideas from the field of normative ethics, the sub-field of ethics that studies the admissibility of actions, to make autonomous agents take into account the decision process behind diverse ethical theories [1,15,5,7]. Still, none provide a direct way to support ethical features in PDDL [10] which profits from its state-of-the-art planning algorithms. ...
... 1 First, we show how to extend classical planning problems like tasks with ethical rules, a construct that models the conditions under which certain actions or plan outcomes have an ethical feature to be taken into account. This construct is based on [5], but is adapted for STRIPS like domains and extended with ranks i.e. levels of importance. We chose to use ranks as in [8], as they provide a simple and direct way of modeling qualitative preferences amongst plans presenting ethical features. ...
... All three main branches of normative ethics, namely consequentialism, deontological ethics and virtue ethics have been studied to some degree in the context of automated planning. Some works focus on particular theories, while others more closely related to this work, try to combine the mechanisms of several of them as in [5,14,15,2,1]. ...
... Hence, virtue ethics focuses on the agent's intrinsic character rather than the consequences of actions conducted by the agent. Virtue ethics defines the action of an agent as morally good if the agent acts and thinks according to some moral values [93]. In other words, according to virtue theories, an agent is ethical if it manifests some moral virtues through its actions [94] [95]. ...
... According to virtue ethics, an action of an agent is morally good if the agent instantiates some virtue, i.e., acts and thinks according to some moral values [93]. It is not possible to judge whether an AI system or agent is virtuous or not just by observing an action or a series of actions that seem to imply that virtue, the reasons behind these actions need to be clarified, that is, the motives behind these actions need to be clear. ...
Article
Full-text available
Artificial intelligence (AI) has profoundly changed and will continue to change our lives. AI is being applied in more and more fields and scenarios such as autonomous driving, medical care, media, finance, industrial robots, and internet services. The widespread application of AI and its deep integration with the economy and society have improved efficiency and produced benefits. At the same time, it will inevitably impact the existing social order and raise ethical concerns. Ethical issues, such as privacy leakage, discrimination, unemployment, and security risks, brought about by AI systems have caused great trouble to people. Therefore, AI ethics, which is a field related to the study of ethical issues in AI, has become not only an important research topic in academia, but also an important topic of common concern for individuals, organizations, countries, and society. This paper will give a comprehensive overview of this field by summarizing and analyzing the ethical risks and issues raised by AI, ethical guidelines and principles issued by different organizations, approaches for addressing ethical issues in AI, methods for evaluating the ethics of AI. Additionally, challenges in implementing ethics in AI and some future perspectives are pointed out. We hope our work will provide a systematic and comprehensive overview of AI ethics for researchers and practitioners in this field, especially the beginners of this research discipline.
... Third, the transparency characterizing collaborative endeavors (i.e., how the result is achieved) must be extended to the contributions brought by the various participants (i.e., how they collectively contributed to it). Finally, the influence that one party may exert over the other participating entities may need to be regulated, or at least assessed, to avoid undesired effects [6]. ...
... This could generate even worse consequences, affecting users who are possibly unaware of the usage of their data. In this context, transparency and accountability mechanisms need to be studied and proposed [6,22]. Thus, the entire collective creative process and its outcomes can be presented and exposed to all the concerned parties. ...
Chapter
Persuasive systems play a crucial role in supporting and counseling people to achieve individual behavior change goals. Intelligent systems have been used for inducing a positive adjustment of attitudes and routines in scenarios such as physiotherapy exercises, medication adherence, smoking cessation, nutrition & diet changes, physical activity, etc. Beyond the specialization and effectiveness provided by these systems on individual scenarios, we provide a vision for collaborative creativity based on the multi-agent systems paradigm. Considering novelty and usefulness as fundamental dimensions of a creative persuasive strategy, we identify the challenges and opportunities of modeling and orchestrating intelligent agents to collaboratively engage in exploratory and transformational creativity interactions. Moreover, we identify the foundations, outline a road-map for this novel research line, and elaborate on the potential impact and real-life applications.
... Multi-agent ethics has also been addressed through various means, such as introducing morality as a separate explicit consideration that agents balance with their desires, and heuristics representing responsible multi-agent interactions (Cointe et al., 2016, 2020; Lorini, 2012). ...
Article
Emergence of responsible behavior is explored in non-cooperative games involving autonomous agents. Rather than imposing constraints or external reinforcements, agents are endowed with an elastic “sense of self” or an elastic identity that they curate based on rational considerations. This approach is called “computational transcendence (CT).” We show that agents using this model make choices for collective welfare instead of individual benefit. First, the relevance of this model in game-theoretic contexts like the Prisoner's Dilemma and collusion is presented. Next, a generic multi-agent framework for simulating dilemmas around responsible agency is also proposed. CT, implemented on this framework, is shown to be versatile in acting responsibly under different kinds of circumstances, including modifying the agents' strategy based on their interaction with other agents in the system, as well as interacting with adversaries that are rational maximizers and have a rationale to exploit responsible behavior from other agents. CT is also shown to outperform reciprocity as a strategy for responsible autonomy. Thus, we present CT as a framework for building autonomous agents that can intrinsically act responsibly in multi-agent systems. The core model for computational ethics presented in this paper can potentially be adapted to the needs of applications in areas like supply chains, traffic management, and autonomous vehicles. This paper hopes to motivate further research on responsible AI by exploring computational modeling of this elusive concept called the “sense of self” that is a central element of existential inquiry in humans.
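One reading of the "elastic identity" idea above is an agent that weights the other player's payoff by a degree of identification. The payoff matrix and the extended utility below are an illustrative sketch of that reading, not the model defined in the cited paper.

```python
# A minimal sketch of an "elastic identity" in a Prisoner's Dilemma: the agent
# values the other player's payoff in proportion to a degree of identification
# d in [0, 1]. The payoff matrix and the utility form are illustrative.

PAYOFFS = {  # (my_move, other_move) -> (my_payoff, other_payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def extended_utility(my_move, other_move, d):
    """Own payoff plus the other's payoff weighted by identification d."""
    mine, theirs = PAYOFFS[(my_move, other_move)]
    return mine + d * theirs

def best_response(other_move, d):
    """The move maximizing the identification-extended utility."""
    return max(("C", "D"), key=lambda m: extended_utility(m, other_move, d))

# With low identification the agent defects against a cooperator; with high
# enough identification it cooperates, although defection maximizes its own payoff.
for d in (0.0, 0.5, 1.0):
    print(d, best_response("C", d))  # 0.0 D, 0.5 D, 1.0 C
```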
... Others have emphasized the importance of reliability, safety, and trustworthiness (Shneiderman, 2020). These considerations are incorporated into traditional ethical frameworks like virtue, deontological, and consequentialist ethics to develop ethical frameworks for AI to adhere to (Cointe et al., 2016; Zhou et al., 2020). Beyond designing ethical AI, there is also a dearth of research studying how these concepts interact in a teaming context. ...
Article
Advancements and implementations of autonomous systems coincide with an increased concern for the ethical implications resulting from their use. This is increasingly relevant as autonomy fulfills teammate roles in contexts that demand ethical considerations. As AI teammates (ATs) enter these roles, research is needed to explore how an AT’s ethics influences human trust. This current research presents two studies which explore how an AT’s ethical or unethical behavior impacts trust in that teammate. In Study 1, participants responded to scenarios of an AT recommending actions which violated or abided by a set of ethical principles. The results suggest that ethicality perceptions and trust are influenced by ethical violations, but only ethicality depends on the type of ethical violation. Participants in Study 2 completed a focus group interview after performing a team task with a simulated AT that committed ethical violations and attempted to repair trust (apology or denial). The focus group responses suggest that ethical violations worsened perceptions of the AT and decreased trust, but it could still be trusted to perform tasks. The AT’s apologies and denials did not repair damaged trust. The studies’ findings suggest a nuanced relationship between trust and ethics and a need for further investigation into trust repair strategies following ethical violations.
... The authors claim that exploring ethical dilemmas should be the first step to building ethical systems. Besides, [46] noted that, while making an automated decision, a virtual agent could make a judgment on its own ethics (individual ethical decision) or take into consideration those of other agents within the same decision process, which may have their own ethics (collective ethical decision). More recently, [47], who studied trust in AI within the field of production management, identified possible antecedent variables related to trust, which were evaluated in human-AI interaction scenarios. ...
Article
The purpose of this article is to study the issues of industrial maintenance, one of the critical drivers of Industry 4.0 (I4.0), which has contributed to the advent of new industrial challenges. In this context, predictive maintenance 4.0 (PdM4.0) has seen significant progress, providing several potential advantages, among which: increased productivity, especially by improving both availability and quality, and cost savings through automated processes for production systems monitoring, early detection of failures, reduction of machine downtime, and prediction of equipment life. In the research work carried out, we focused on bibliometric analysis to provide beneficial guidelines that may help researchers and practitioners to understand the key challenges and the most insightful scientific issues that characterize a successful application of Artificial Intelligence (AI) to PdM4.0. Even though most of the analyzed articles focus on AI techniques applied to PdM, they do not cover predictive maintenance practices and their organization. Using the Biblioshiny, VOSviewer, and Power BI tools, our main contribution consisted of performing a bibliometric study to analyze and quantify the most important concepts, application areas, methods, and main trends of AI applied to real-time predictive maintenance. Therefore, we studied the current state of research on these new technologies, their applications, associated methods, and related roles or impacts in developing I4.0. The results show the most productive sources, institutes, papers, countries, authors, and their collaborative networks. In this light, American and Chinese institutes dominate the scientific debate, while the number of publications on I4.0 and PdM4.0 is growing exponentially, particularly in the field of data-driven and hybrid models and digital twin frameworks applied to prognostics, diagnostics, or anomaly detection. Emerging topics such as Machine Learning and Deep Learning have also significantly impacted PdM4.0 development. Subsequently, we analyzed factors that may hinder the successful use of AI-based systems in I4.0, including the data collection process, the potential influence of ethics, socio-economic issues, and transparency for all stakeholders. Finally, we suggested our definition of trustful AI for I4.0.
Article
In recent years, autonomous systems have become an important research area and application domain, with a significant impact on modern society. Such systems are characterized by different levels of autonomy and complex communication infrastructures that allow for collective decision-making strategies. Several publications tackle ethical aspects in such systems, but mostly from the perspective of a single agent. In this paper we go one step further and discuss these ethical challenges from the perspective of an aggregate of autonomous systems capable of collective decision-making. In particular, we propose the Caesar approach, through which we model the collective ethical decision-making process of a group of actors (agents and humans) and define the building blocks for the agents participating in such a process, namely Caesar agents. Factors such as trust, security, safety, and privacy, which affect the degree to which a collective decision is ethical, are explicitly captured in Caesar. Finally, we argue that modeling collective decision-making in Caesar provides support for accountability.
Article
Autonomous intelligent agents are employed in many applications upon which the life and welfare of living beings and vital social functions may depend. Therefore, agents should be trustworthy. A priori certification techniques (i.e., techniques applied prior to a system's deployment) can be useful, but they are not sufficient for agents that evolve, and thus modify their epistemic and belief state, and for open multi-agent systems, where heterogeneous agents can join or leave the system at any stage of its operation. In this paper, we propose, refine, and extend dynamic (runtime) logic-based self-checking techniques devised to ensure agents' trustworthy and ethical behaviour.
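The runtime self-checking described above can be pictured as testing each action against explicit constraints over the agent's current beliefs before execution. The Python stand-in below only illustrates the shape of such a check; the cited work relies on logic-based techniques, and the constraint names are assumptions.

```python
# Runtime self-checking sketch: each constraint is a predicate over
# (action, beliefs) that must hold for the action to be allowed.

def no_disclosure(action, beliefs):
    """Do not share data the agent believes to be private."""
    return not (action == "share_data" and "data_is_private" in beliefs)

def only_trusted_recipients(action, beliefs):
    """Only send messages when the recipient is believed trustworthy."""
    return action != "send_message" or "recipient_trusted" in beliefs

CONSTRAINTS = [no_disclosure, only_trusted_recipients]

def check_before_execution(action, beliefs, constraints=CONSTRAINTS):
    """Return the names of violated constraints; an empty list means the action may run."""
    return [c.__name__ for c in constraints if not c(action, beliefs)]

beliefs = {"data_is_private"}
print(check_before_execution("share_data", beliefs))    # ['no_disclosure']
print(check_before_execution("send_message", beliefs))  # ['only_trusted_recipients']
```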
Article
Applying the values construct in the social sciences has suffered from the absence of an agreed-upon conception of basic values, of the content and structure of relations among these values, and of reliable methods to measure them. This article presents data from over 70 countries, using two different instruments, to validate a theory intended to fill part of this gap. It concerns the basic values that individuals in all cultures recognize. The theory identifies 10 motivationally distinct values and specifies the dynamics of conflict and congruence among them. These dynamics yield a structure of relations among values common to culturally diverse groups, suggesting a universal organization of human motivations. Individuals and groups differ in the priorities they assign to these values. The article examines sources of individual differences in value priorities and behavioral and attitudinal consequences that follow from holding particular value priorities. In doing so, it considers processes through which values are influenced and through which they influence action.
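The theory's ten values and their conflict/congruence structure can be sketched as positions on a circle, with adjacent values congruent and opposing values in conflict. The distance thresholds below are an illustrative simplification, not part of the theory.

```python
# The ten basic values of the theory, in their conventional circular order.
# Adjacent values tend to be congruent; values across the circle tend to conflict.

VALUES = ["self-direction", "stimulation", "hedonism", "achievement", "power",
          "security", "conformity", "tradition", "benevolence", "universalism"]

def circular_distance(a, b):
    """Number of steps between two values around the circle (0..5)."""
    i, j = VALUES.index(a), VALUES.index(b)
    d = abs(i - j)
    return min(d, len(VALUES) - d)

def relation(a, b):
    """Rough congruence/conflict classification from circular distance."""
    d = circular_distance(a, b)
    if d <= 1:
        return "congruent"
    if d >= 4:
        return "conflicting"
    return "intermediate"

print(relation("benevolence", "universalism"))  # congruent (adjacent)
print(relation("power", "universalism"))        # conflicting (opposed)
```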
Article
Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional 'governor' that assesses the options the system has and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a 'consequence engine' that assesses the likely future outcomes of actions and then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.
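The governor and consequence-engine arrangement described above can be sketched as a filter over candidate actions: predict outcomes, score them with a safety/ethical ordering, and keep only the best options. The toy outcome model and scoring below are illustrative assumptions, not the cited implementation.

```python
# Sketch of a "consequence engine" governor: for each candidate action,
# predict the outcome, score it with a safety/ethical ordering, and keep
# only the best-scoring actions. Outcome model and scoring are assumptions.

def predict_outcome(action, state):
    """Toy outcome model: an action maps the current state to a set of outcome labels."""
    effects = {
        "push_human_aside": {"human_harmed"},
        "block_path": {"robot_damaged"},
        "warn_and_wait": {"task_delayed"},
    }
    return state | effects.get(action, set())

def ethical_score(outcome):
    """Lower is better: harming a human dominates damage, which dominates delay."""
    return (("human_harmed" in outcome) * 100
            + ("robot_damaged" in outcome) * 10
            + ("task_delayed" in outcome) * 1)

def govern(candidate_actions, state):
    """Keep only the actions whose predicted outcome scores best."""
    scored = {a: ethical_score(predict_outcome(a, state)) for a in candidate_actions}
    best = min(scored.values())
    return [a for a, s in scored.items() if s == best]

print(govern(["push_human_aside", "block_path", "warn_and_wait"], set()))
# ['warn_and_wait']
```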
Conference Paper
Purpose – This paper aims to propose a paradigm of case-supported principle-based behavior (CPB) to help ensure ethical behavior of autonomous machines. The requirements, methods, implementation and evaluation components of the CPB paradigm are detailed. Design/methodology/approach – The authors argue that ethically significant behavior of autonomous systems can be guided by explicit ethical principles abstracted from a consensus of ethicists. Particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action are used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Findings – Such a consensus, along with its corresponding principle, is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Practical implications – Principles are comprehensive and comprehensible declarative abstractions that succinctly represent this consensus in a centralized, extensible and auditable way. Systems guided by such principles are likely to behave in a more acceptably ethical manner, permitting a richer set of behaviors in a wider range of domains than systems not so guided, and will exhibit the ability to defend this behavior with pointed logical explanations. Social implications – A new threshold has been reached where machines are being asked to make decisions that can have an ethically relevant impact on human beings. It can be argued that such machine ethics ought to be the driving force in determining the manner and extent to which autonomous systems should be permitted to interact with them. Originality/value – Developing and employing principles for this use is a complex process, and new tools and methodologies will be needed by engineers to help contend with this complexity. The authors offer the CPB paradigm as an abstraction to help mitigate this complexity.
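A principle abstracted from agreed-upon cases, as described above, can be sketched as a decision rule over ethically relevant features that reproduces the ethicists' judgments on those cases. The features, weights, and cases below are illustrative assumptions, not those used in the cited paradigm.

```python
# Sketch of case-supported principle-based behavior: each case gives the
# feature differential between two candidate actions (here: harm prevented,
# autonomy respected) and the action ethicists agree is right. A "principle"
# is a simple weighted rule consistent with those cases.

CASES = [
    # (delta_harm_prevented, delta_autonomy_respected) -> index of the preferred action
    ((+1, -1), 0),  # preventing serious harm outweighs a small autonomy cost
    ((0, +1), 0),   # with no harm at stake, respect the patient's autonomy
]

WEIGHTS = (2, 1)    # a candidate principle

def prefer(features_a, features_b):
    """Return 0 if action A is preferred under the principle, else 1."""
    diff = tuple(fa - fb for fa, fb in zip(features_a, features_b))
    score = sum(w * d for w, d in zip(WEIGHTS, diff))
    return 0 if score >= 0 else 1

def consistent_with_cases():
    """Check that the principle reproduces the agreed-upon cases."""
    return all(prefer(diff, (0,) * len(diff)) == label for diff, label in CASES)

print(consistent_with_cases())   # True
print(prefer((1, 0), (0, 1)))    # 0: prefers preventing harm over respecting autonomy here
```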
Article
On the subject of moral exemplarism, Immanuel Kant is perhaps best known for his warnings about the futility as well as the potential dangers of trying to base morality on examples or on the imitation of exemplars. This has led many scholars to conclude that Kant leaves no room for exemplars in his moral philosophy. However, as the work of Onora O'Neill and Robert Louden has shown, Kant's position on the subject is in fact more ambiguous than it appears at first glance. Kant writes both of the need for a kind of archetype [Urbild] that can "make the law intuitive," and of a positive role for examples and exemplars in the sharpening of moral judgment. Yet, O'Neill and Louden disagree about the exact role and the stage of moral development where they come into play. O'Neill claims that examples only play a role at a stage of moral development prior to the agent's assimilation of the moral law, and they never play a role in moral deliberation per se, while Louden sees a role for examples in moral deliberation subsequent to the assimilation of the moral law. Neither scholar specifies whether they play a role with regard to some specific duties or with regard to all duties indiscriminately. In this paper, I address these disagreements, arguing that the key to their resolution and to gaining a correct understanding of Kant's position lies in a closer examination of Kant's taxonomy of duties, especially his distinction between 'perfect' and 'imperfect' duties. Such an examination leads to the conclusion that it is precisely in the fulfillment of imperfect duties, such as the obligation to perfect our talents and capacities and the obligation to aid the happiness of others, that Kant sees a necessary role for moral exemplars.
Conference Paper
We investigate the potential of logic programming (LP) to model morality aspects studied in philosophy and psychology. We do so by identifying three morality aspects that appear, in our view, amenable to computational modeling by appropriately exploiting LP features: the dual-process model (reactive and deliberative) in moral judgments; justification of moral judgments by contractualism; and intention in moral permissibility. The research aims at developing an LP-based system with features needed in modeling moral settings, putting emphasis on modeling the above-mentioned morality aspects. We have currently co-developed two essential ingredients of the LP system, i.e., abduction and logic program updates, by exploiting the benefits of tabling features in logic programs. They serve as the basis for our whole system, into which other reasoning facets will be integrated, to model the surmised morality aspects. Moreover, we touch upon the potential of our ongoing studies of LP-based cognitive features for the emergence of computational morality, in populations of agents enabled with the capacity for intention recognition, commitment and apology.
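The dual-process aspect mentioned above (a reactive pass followed, when needed, by deliberation that takes intention into account) can be sketched procedurally. The rules and the permissibility test below are illustrative assumptions, and the cited work implements such reasoning with logic programming (abduction, program updates) rather than in Python.

```python
# Dual-process sketch: a fast reactive pass applies hard prohibitions;
# only when no hard rule fires does a slower deliberative pass weigh
# intentions and consequences (here, a crude double-effect-style test).

HARD_PROHIBITIONS = {"kill_intentionally", "torture"}

def reactive_judgment(action):
    """Fast pass: reject actions that match a hard prohibition outright."""
    return "impermissible" if action["name"] in HARD_PROHIBITIONS else None

def deliberative_judgment(action):
    """Slow pass: harm is tolerated only as a side effect, and only if the
    intended good outweighs it."""
    if action["harm"] > 0 and action["harm_is_means"]:
        return "impermissible"
    return "permissible" if action["good"] >= action["harm"] else "impermissible"

def judge(action):
    return reactive_judgment(action) or deliberative_judgment(action)

divert = {"name": "divert_trolley", "good": 5, "harm": 1, "harm_is_means": False}
push   = {"name": "push_bystander", "good": 5, "harm": 1, "harm_is_means": True}
print(judge(divert), judge(push))  # permissible impermissible
```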
Article
Fashion is both big business and big news. From models' eating disorders and sweated labour to the glamour of a new season's trends, statements and arguments about fashion and the fashion industry can be found in every newspaper, consumer website and fashion blog. Books which define, analyse and explain the nature, production and consumption of fashion in terms of one theory or another abound. But what are the theories that run through all of these analyses, and how can they help us to understand fashion and clothing? Fashion Theory: an introduction explains some of the most influential and important theories on fashion: it brings to light the presuppositions involved in the things we think and say about fashion every day and shows how they depend on those theories. This clear, accessible introduction contextualises and critiques the ways in which a wide range of disciplines have used different theoretical approaches to explain - and sometimes to explain away - the astonishing variety, complexity and beauty of fashion. Through engaging examples and case studies, this book explores: fashion and clothing in history; fashion and clothing as communication; fashion as identity; fashion, clothing and the body; production and consumption; fashion, globalization and colonialism; and fashion, fetish and the erotic. This book will be an invaluable resource for students of cultural studies, sociology, gender studies, fashion design, textiles or the advertising, marketing and manufacturing of clothes.
Book
Expounding on the results of the author's work with the US Army Research Office, DARPA, the Office of Naval Research, and various defense industry contractors, Governing Lethal Behavior in Autonomous Robots explores how to produce an "artificial conscience" in a new class of robots, humane-oids, which are robots that can potentially perform more ethically than humans on the battlefield. The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. The book presents robot architectural design recommendations for: post facto suppression of unethical behavior; behavioral design that incorporates ethical constraints from the onset; the use of affective functions as an adaptive component in the event of unethical action; and a mechanism that identifies and advises operators regarding their ultimate responsibility for the deployment of autonomous systems. It also examines why soldiers fail in battle regarding ethical decisions; discusses the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems; provides examples that illustrate autonomous systems' ethical use of force; and includes relevant Laws of War. Helping ensure that warfare is conducted justly with the advent of autonomous robots, this book shows that the first steps toward creating robots that not only conform to international law but outperform human soldiers in their ethical capacity are within reach. It supplies the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system capable of ethically using lethal force. Ron Arkin was quoted in a November 2010 New York Times article about robots in the military.
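The post facto suppression component listed above can be pictured as filtering proposed behaviors against explicit constraints before any lethal behavior is permitted. The constraint names and the behavior format below are illustrative assumptions standing in for the Laws of War and Rules of Engagement, not the book's architecture.

```python
# Sketch of post facto suppression: behaviors proposed by the underlying
# controller are filtered against explicit constraints before any lethal
# behavior is allowed through.

def discrimination(behavior):
    """Lethal behavior may only be directed at combatants."""
    return not (behavior["lethal"] and behavior["target"] != "combatant")

def proportionality(behavior):
    """Expected collateral damage must not exceed military necessity."""
    return behavior["expected_collateral"] <= behavior["military_necessity"]

ETHICAL_CONSTRAINTS = [discrimination, proportionality]

def suppress_unethical(proposed_behaviors):
    """Let a behavior through only if every constraint holds; otherwise suppress it."""
    return [b for b in proposed_behaviors
            if all(c(b) for c in ETHICAL_CONSTRAINTS)]

proposed = [
    {"name": "engage", "lethal": True, "target": "combatant",
     "expected_collateral": 0, "military_necessity": 2},
    {"name": "engage", "lethal": True, "target": "civilian_area",
     "expected_collateral": 3, "military_necessity": 1},
]
print([b["target"] for b in suppress_unethical(proposed)])  # ['combatant']
```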
Conference Paper
This paper presents a model of agent behavior that takes into account emotions and moral values. In our proposal, when the description of the current situation reveals that one of the agent's moral values is 'at stake', the moral goal of re-establishing the threatened value is included among the active goals. Compliance with values generates positive emotions like pride and admiration, while the opposite leads to shame and self-reproach. During the deliberation phase, the agent appraises her plans in terms of the emotional reward they are expected to yield, given the trade-off between moral and individual goals. In this phase, the emotional reward affects the agent's choices about her behavior. After the execution phase, one's own and others' actions are appraised again in terms of the agent's values, giving rise to moral emotions. The paper shows how emotional appraisal can be coupled with the choice among possible lines of action, presenting a mapping between plans and emotions that integrates and extends preceding proposals.
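The deliberation described above, where plans are appraised by the emotional reward expected from complying with or violating moral values, can be sketched as a scoring function. The emotion rewards, weights, and plans below are illustrative assumptions, not the cited model.

```python
# Sketch of appraising plans by expected emotional reward: compliance with
# the agent's values yields positive emotions (pride), violations yield
# negative ones (shame), and the agent trades this off against the plan's
# individual utility.

EMOTIONAL_REWARD = {"complies": +2.0, "violates": -3.0, "neutral": 0.0}

def appraise(plan, moral_weight=1.0):
    """Total score = individual utility + weighted emotional reward."""
    emotional = sum(EMOTIONAL_REWARD[effect] for effect in plan["value_effects"])
    return plan["utility"] + moral_weight * emotional

plans = [
    {"name": "profitable_but_unfair", "utility": 6.0, "value_effects": ["violates"]},
    {"name": "modest_and_fair",       "utility": 4.0, "value_effects": ["complies"]},
]

best = max(plans, key=appraise)
print(best["name"])  # modest_and_fair: 4 + 2 = 6 beats 6 - 3 = 3
```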