Evolutionary Game Theory Modelling of Guilt
Luís Moniz Pereira 1 and Tom Lenaerts 2 and Luis A. Martinez-Vaquero 3 and The Anh Han 4
Abstract. Inspired by psychological and evolutionary studies, we present two theoretical models wherein agents have the potential to express guilt, with the ambition to study the role of this emotion in the promotion of pro-social behaviour. We show that the emotion of guilt, in the sense arising from actual harm done to others through inappropriate action or inaction, is worthwhile to incorporate in evolutionary game theory models of cooperation, for it can increase cooperation by correcting and inhibiting defection. The abstract study thereof transpires profitably to concrete considerations in the design of artificial multi-agent populations. To achieve this goal, analytical and numerical methods from evolutionary game theory have been employed (though not shown in fine detail here) to identify reasonable conditions under which enhanced cooperation emerges within the context of the iterated prisoner's dilemma. Guilt is modelled explicitly as two features: a counter that keeps track of the number of transgressions, and a threshold that dictates when alleviation (through, for instance, apology or self-punishment) is required of an emotional agent. Such alleviation affects the payoff of the agent experiencing guilt. We show that when the system consists of agents that resolve their own guilt without considering the co-player's attitude towards guilt alleviation, cooperation does not emerge. In that case, agents expressing no guilt, or having no incentive to alleviate the guilt they experience, easily dominate the guilt-prone ones. On the other hand, when the guilt-prone focal agent requires that guilt be alleviated only when guilt alleviation is also manifested by a defecting co-player, then cooperation may thrive. This observation proves consistent in a generalised model discussed in this article. In summary, our analysis provides important insights into the design of multi-agent and cognitive agent systems, wherein the inclusion of guilt modelling can improve agents' cooperative behaviour and overall benefit.
1 INTRODUCTION
"...what do you think, if a person does something very bad, do they have to be punished?" ... "You know the reason I think they should be punished?" ... "It's because of how bad they are going to feel, in themselves. Even if nobody did see them and nobody ever knew. If you do something very bad and you are not punished you feel worse, and feel far worse, than if you are." [Page 55 of "The Love of a Good Woman" by Alice Munro (Nobel Prize in Literature 2013), in "Family Furnishings: Selected Stories 1995-2014", Vintage International Edition, 2015]
1 Universidade Nova de Lisboa, Portugal, email: lmp@fct.unl.pt
2 MLG, Université Libre de Bruxelles and AI Lab, Vrije Universiteit Brussel, Belgium, email: tom.lenaerts@ulb.ac.be
3 Institute of Cognitive Sciences and Technologies, Rome, Italy, email: fnxabraxas@gmail.com
4 Teesside University, UK, email: T.Han@tees.ac.uk
Presently there is a general mounting interest in machine ethics [22], and recent research monographs have been addressing its issues [17]. One such issue concerns the computational modelling of human emotions, amongst which we find guilt and its role in minimising social conflicts [14]. Guilt is defined in the online Merriam-Webster dictionary as "The feeling of culpability especially for imagined offences or a sense of inadequacy", which implies that guilt follows from introspection: an individual experiencing guilt will detect this emotional state, and can act upon it. Guilt is an evolved, pervasive feature of human cultures, which can lead to enhanced cooperation via changes in behaviour or upon apology (cf. background references below). Frank argued that guilt may provide a useful mechanism, if operationalised properly, to minimise social conflict and promote cooperation [3]. Notwithstanding the importance of this emotion for the evolution of cooperation, no in-depth numerical or analytical models have been provided to confirm or refute the hypothesis that this emotion has evolved to ensure stable social relationships. Hence, it is natural to enquire how it might enhance cooperation in evolving artificial multi-agent systems, by means of machine-implemented models of guilt. With that in mind, we avail ourselves of Evolutionary Game Theory (EGT) [12, 21] to conclude that, under certain conditions, cooperation can be enhanced by a modicum of guilt in a population of autonomous agents.
A distinct evolutionary and population-sensitive EGT model of guilt has been explored in [20]. Its authors focus on behaviours associated with guilt, such as apology, but do not explicitly represent any self-fitness changes from the experience of the guilt emotion, as we do in our models. Moreover, their guilt-prone agents (GP) do not initiate defection like ours do, but defect only in reaction to another's defection, though they will then feel guilty for having done so. Instead, we crucially associate guilt with self-punishment, and show how this effect on fitness can be conducive to a population-beneficial Evolutionarily Stable Strategy (ESS) state [21]: one towards which the population evolves, and which cannot be invaded by a small number of agents using a different strategy. This is the case in our improved (second) model, where self-punishment is only enacted if the other party is not recognised to be guilty too. In [4], (non-evolutionary) utilitarian game theory is employed to model the behaviour resulting from guilt, not by introducing self-punishment but by introducing a guilt-aversion level term into a player's utility function, which takes into account the agent's history of previous pairwise interactions and individual learning from it. In contrast, our moral stance on guilt is not utilitarian, in the sense that no individual measure of greater good is being explicitly optimised. We rely instead on social learning in a population's emergent evolution, without recourse to individual histories. Hence our approach and results are distinct from previous ones in important ways. Next, we frame our hypotheses on guilt within EGT and define our models and methods. Thence we proceed to the presentation of results, and wrap up with some justified conclusions and future work.
2 EVOLUTIONARY GAME THEORY MODEL
FOR GUILT
Considering the foregoing, an attempt to introduce guilt in EGT models of cooperation seems unavoidable. The issue concerning guilt within such models is whether its presence is more worthwhile than its absence, with respect to a possibly advantageous emergence of cooperation. One can introduce guilt explicitly in models to show that it is worthwhile, in further support of its appearance on the evolutionary scene. Indeed, one may focus on emotions, like guilt, as being strategies in abstract evolutionary population games, sans specific embodiment or subjective quale [18].

We can test this hypothesis via one model spelled out below, whose details can be found in [16]. In it, guilt is tied to intention recognition, since it will have evolved as a fear about the detection of harm done (see above). The prediction is that guilt will facilitate and speed up the emergence of cooperation. In spite of its initial heavier cost, in time that cost will be recuperated within the guilt-ridden population, via inhibition of defection as a result of guilt avoidance. Furthermore, one's timely recognition of another's prior give-away guilt signs, on account of her actual intent to harm, can prevent one's self-punishing guilt in cases where it would be uncalled for. The base hypothesis is thus that when there exists guilt in the starting population, the most frequent stationary distribution includes the incorporation of guilt and enhances overall cooperation. For which parameters of guilt this happens can be determined both analytically and experimentally.
2.1 Models and methods
A behavioural quantification of guilt provides us with a basis to define our evolving agents: guilt is part of an agent's representation or genotype, i.e. all agents are equipped with a guilt threshold G, with G ∈ [0, +∞], and a transient guilt level g (g ≥ 0). Initially g is set to 0 for every agent. If an agent feels guilty after an action that she considers wrong, then the agent's g is increased (by 1). When g reaches the agent's guilt threshold, i.e. g ≥ G, the agent can (or not) act to alleviate her current guilt level. We assume here that guilt alleviation can be achieved through a sincere apology to the co-player or, otherwise, through self-punishment if it is not possible to apologise [1, 6]. Different from prior work [8, 15], we do not assume here that apology leads to a benefit for the co-player, considering it only as an honest signal of the experiencing of guilt. In general, the cost of guilt alleviation is modelled by a so-called guilt cost γ (γ ≥ 0). Whenever the agent punishes herself, by paying γ, g is decreased (by 1). Using this genotype definition, one can imagine different types of agents with different thresholds G, such as those who never feel guilty (the unemotional ones, with G = +∞) or those who are very emotional, feeling guilty immediately after a wrongdoing (with G = 0).
The objective of this work is to show that agents expressing this emotion, despite the disadvantage of the costly guilt-alleviation acts, are evolutionarily viable, can dominate agents not expressing the emotion, and induce sustained social interactions, all of which will be shown in the context of the Iterated Prisoner's Dilemma (IPD). To set the stage for future work we first focus on two extreme behaviours, i.e. G = 0 and G = +∞, as will be explained in more detail later. These results are generalisable to situations where G > 0 yet less than the number of rounds in the IPD, since any larger G would correspond to G = +∞. We use a stochastic evolutionary model incorporating frequency-dependent selection and mutation to identify when agents with guilt are evolutionarily stable [21]. More importantly, we will show that for guilt to be evolutionarily viable, it should be reactive to the guilt-driven behaviour of the co-player: if the other party is not behaving properly and/or does not show guilt-alleviating behaviour, then the focal agent's guilt is alleviated automatically or is even non-existent. Pure self-punishment without social considerations will not allow guilt to evolve at the individual level. In this sense, our work contrasts with, for instance, that of Gaudou et al. [4], which takes a utilitarian perspective to model the behaviour resulting from guilt, not by introducing self-punishment but by introducing a guilt-aversion level term into a player's utility function, which ignores the social role of guilt [3]. From a multi-agent perspective, considering socio-technical systems including autonomous agents, our results confirm that decision-making conflicts can be reduced when including emotions to guide participants towards socially acceptable behaviours.
2.2 Iterated prisoner’s dilemma (IPD)
Social interactions are modelled in this article as symmetric two-player games defined by the payoff matrix

          C       D
    C   R, R    S, T
    D   T, S    P, P

A player who chooses to cooperate (C) with someone who defects (D) receives the sucker's payoff S, whereas the defecting player gains the temptation to defect, T. Mutual cooperation (resp. defection) yields the reward R (resp. punishment P) for both players. Depending on the ordering of these four payoffs, different social dilemmas arise [12, 21]. In this work we are concerned with the PD, where T > R > P > S. In a single round it is always best to defect, because it is less risky, but cooperation may be rewarding if the game is repeated. In the IPD it is also required that mutual cooperation be preferred over an equal probability of unilateral cooperation and defection (2R > T + S); otherwise alternating between cooperation and defection would lead to a higher payoff than mutual cooperation. The PD is repeated for a number of rounds, denoted by Ω.
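Both ordering requirements are straightforward to verify numerically. As an illustration, one payoff set used later in this article satisfies them:

```python
# Sanity check of the PD and IPD conditions for a concrete payoff set
# (the values used later in Figure 1, panels (a) and (c)).
T, R, P, S = 2, 1, 0, -1
assert T > R > P > S      # prisoner's dilemma ordering
assert 2 * R > T + S      # mutual cooperation beats alternating C/D in the IPD
print("valid PD/IPD payoffs")
```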
2.3 Guilt modelling in IPD
Starting from the definition of the agent-based guilt feature given above, we will focus in the current work only on two basic types of (extreme) guilt thresholds:

G = +∞: for this type of agent, the guilt level g will never reach the threshold no matter how many times they defect; hence, they never need to reduce g, and consequently never pay the guilt cost γ. Experiencing no guilt feeling, these agents are dubbed (guilt-)unemotional.

G = 0: whenever this type of agent defects, it becomes true that g > G; hence, the agent needs to act immediately to reduce g, thus paying γ. These agents always feel guilty after a wrongdoing, viz. defection, and are dubbed (guilt-)emotional agents.
Besides the guilt threshold, an agent's strategy is described by what she plays in a PD (C or D) and, when the agent's ongoing guilt level g reaches the threshold G, by whether the agent changes her behaviour from D to C. Hence, there are five possible strategies, labeled thus:
1. Unemotional cooperator (C): always cooperates, unemotional (i.e. G = +∞).
2. Unemotional defector (D): always defects, unemotional (i.e. G = +∞).
3. Emotional cooperator (CGC): always cooperates, emotional (i.e. G = 0).
4. Emotional non-adaptive defector (DGD): always defects, feels guilty after a wrongdoing (i.e. G = 0), but keeps its behaviour.
5. Emotional adaptive defector (DGC): defects initially, feels guilty after a wrongdoing (i.e. G = 0), and changes its behaviour from D to C.
In order to understand when guilt can emerge and promote cooperation, our EGT modelling study below analyses whether and when emotional strategies, i.e. those with G = 0, can actually overcome the disadvantage of the incurred costs or fitness reduction associated with the guilt feeling and its alleviation, and in consequence disseminate throughout the population. Namely, in the following we aim to show that guilt alleviation through self-punishment is evolutionarily viable only when solely the focal agent misbehaves. In other words, an emotional guilt-based response only makes sense when the other is not attempting to harm you too. To that aim, we analyse two different models, which differ in the way guilt influences the preferences of the focal agents, where the preferences are determined by the payoffs in matrices (1) and (2).
In the first model, an agent's ongoing guilt level g increases whenever the agent defects, regardless of what the co-player does. The payoff matrix for the five strategies C, D, CGC, DGD and DGC can be written as follows

          C                D                CGC              DGD              DGC
    C     R                S                R                S                (S + RΘ)/Ω
    D     T                P                T                P                (P + TΘ)/Ω
    CGC   R                S                R                S                (S + RΘ)/Ω
    DGD   T - γ            P - γ            T - γ            P - γ            (P + TΘ)/Ω - γ
    DGC   (T - γ + RΘ)/Ω   (P - γ + SΘ)/Ω   (T - γ + RΘ)/Ω   (P - γ + SΘ)/Ω   (P - γ + RΘ)/Ω     (1)

where we use Θ = Ω - 1 just for the purpose of a neater representation. Note that the actions C and CGC are essentially equivalent; both are considered for the sake of completeness of the strategy set.
The entries in the matrix are derived as follows. For instance, when a C player interacts with another C (resp. D) player, it always obtains payoff R (resp. S) in every round of the IPD, so it obtains that same payoff on average, as indicated in the payoff matrix. When C interacts with DGC, it obtains S in the first round and then R in the remaining Ω - 1 rounds (thus it obtains (S + R(Ω - 1))/Ω on average), as the DGC player feels guilty after defecting in the first round, thereby switching to C. Respectively, DGC obtains T in the first round (before subtracting the guilt cost γ paid there) and then R in the remaining Ω - 1 rounds, i.e. (T - γ + R(Ω - 1))/Ω on average. As in this model DGC does not take into account the co-player's attitude towards guilt alleviation, when interacting with D it defects in the first round and then changes to C, even when the co-player shows no sign of guilt feeling.
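As a concrete check on matrix (1), the following sketch (our own illustrative Python, with hypothetical names; not code from the paper) simulates an Ω-round IPD between any two of the five strategies under the first model's rule, charging γ on every guilt-triggered round:

```python
import math

T, R, P, S = 2, 1, 0, -1        # PD payoffs (Figure 1, panels a and c)
OMEGA, GAMMA = 10, 0.5          # number of rounds and guilt cost
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

# strategy -> (initial action, guilt threshold G, switches D->C on guilt?)
STRATS = {
    'C':   ('C', math.inf, False),
    'D':   ('D', math.inf, False),
    'CGC': ('C', 0, False),
    'DGD': ('D', 0, False),   # alleviates (pays GAMMA) but keeps defecting
    'DGC': ('D', 0, True),    # alleviates (pays GAMMA) and switches to C
}

def avg_payoff(s1, s2):
    """Average per-round payoff of strategy s1 against s2 (first model:
    guilt is triggered by one's own defection, whatever the co-player does)."""
    (a1, G1, sw1), (a2, G2, sw2) = STRATS[s1], STRATS[s2]
    total = 0.0
    for _ in range(OMEGA):
        total += PAYOFF[(a1, a2)]
        if a1 == 'D' and G1 == 0:      # focal agent feels guilt and alleviates
            total -= GAMMA
            if sw1:
                a1 = 'C'
        if a2 == 'D' and G2 == 0 and sw2:  # co-player may switch too
            a2 = 'C'
    return total / OMEGA
```

For instance, with these numbers avg_payoff('DGC', 'C') yields (T - γ + RΘ)/Ω = 1.05, matching the corresponding entry of matrix (1).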
In the second model, an agent feels guilty when defecting only if the co-player acted pro-socially or was observed to feel guilty after defection, viz. through exercising self-punishment or apologising. Thus, in this second model guilt has a particular social aspect that is missing from the first model. In particular, DGC does not change its behaviour to C if the co-player played D and did not try to alleviate her guilt as a result of her bad behaviour. The payoff matrix is now rewritten as:
          C                D      CGC              DGD              DGC
    C     R                S      R                S                (S + RΘ)/Ω
    D     T                P      T                P                P
    CGC   R                S      R                S                (S + RΘ)/Ω
    DGD   T - γ            P      T - γ            P - γ            (P + TΘ)/Ω - γ
    DGC   (T - γ + RΘ)/Ω   P      (T - γ + RΘ)/Ω   (P - γ + SΘ)/Ω   (P - γ + RΘ)/Ω     (2)
The difference can be seen in the new payoff obtained by DGC when playing against D: it no longer changes from D to C after defecting in the first round, and thus obtains P in all rounds. Notice also the differences in the payoffs for the interactions between the emotional strategies that defect, i.e. DGD and DGC, and the unemotional defector D.
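A minimal simulation sketch of the second model's rule (our own illustrative code, not from the paper): each strategy is encoded as its initial action, its threshold G, and whether it switches from D to C upon alleviating guilt. Guilt is now triggered only when the co-player either cooperated or visibly alleviates guilt (G = 0):

```python
import math

T, R, P, S = 2, 1, 0, -1        # PD payoffs (Figure 1, panels a and c)
OMEGA, GAMMA = 10, 0.5          # number of rounds and guilt cost
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
STRATS = {
    'C':   ('C', math.inf, False),
    'D':   ('D', math.inf, False),
    'CGC': ('C', 0, False),
    'DGD': ('D', 0, False),
    'DGC': ('D', 0, True),
}

def avg_payoff_social(s1, s2):
    """Average per-round payoff of s1 against s2 under the second model:
    one's own defection triggers guilt only if the co-player cooperated
    or is itself a guilt-alleviating (G = 0) type."""
    (a1, G1, sw1), (a2, G2, sw2) = STRATS[s1], STRATS[s2]
    total = 0.0
    for _ in range(OMEGA):
        total += PAYOFF[(a1, a2)]
        # evaluate both guilt conditions before either side switches
        guilt1 = a1 == 'D' and G1 == 0 and (a2 == 'C' or G2 == 0)
        guilt2 = a2 == 'D' and G2 == 0 and (a1 == 'C' or G1 == 0)
        if guilt1:
            total -= GAMMA
            if sw1:
                a1 = 'C'
        if guilt2 and sw2:
            a2 = 'C'
    return total / OMEGA
```

Under this rule avg_payoff_social('DGC', 'D') equals P, the changed entry of matrix (2): facing an unrepentant defector, DGC neither pays γ nor switches to C.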
Figure 1. Frequency of each strategy as a function of the guilt cost, γ, for the two models, and for different PD game configurations (see below). In the first model (panels a and b), D always dominates the population. In the second model (panels c and d), for an intermediate value of γ, DGC is the most frequent strategy; but when γ is too small or too large, DGD is dominant. Parameters: β = 1; N = 100; Ω = 10. In panels (a) and (c): T = 2, R = 1, P = 0, S = -1; in panels (b) and (d): T = 4, R = 3, P = 0, S = -1.
2.4 Results
We have elsewhere [16] derived analytical conditions (not proffered here) for when DGC can be a viable strategy, one which is risk-dominant when playing against the defection strategies (i.e. D and DGD). We have shown that, though the DGC strategy is always dominated by defective strategies in the first model, there is a wide range of parameters in which DGC dominates both defection strategies in the second model, thereby resulting in high levels of cooperation. Namely, we have shown that, as long as the guilt cost γ satisfies the condition

    (T + P - R - S)/2 < γ < (Ω - 1)(R - P),     (3)

then the DGC strategy can dominate all the defective strategies. This condition indicates that, on the one hand, the guilt cost should not be too small, in order to ensure guilt has a sufficiently strong effect on emotional players, encouraging guilt alleviation and behavioural change. On the other hand, this cost should not be too large, allowing DGC to compete against unemotional D players, who never pay the guilt cost after defecting.
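As a worked instance of condition (3), with the parameters of Figure 1, panels (a) and (c), the two bounds can be evaluated directly (illustrative snippet, not from the paper):

```python
# Condition (3): (T + P - R - S)/2 < gamma < (Omega - 1)(R - P)
T, R, P, S, OMEGA = 2, 1, 0, -1, 10

lower = (T + P - R - S) / 2      # = 1.0
upper = (OMEGA - 1) * (R - P)    # = 9.0

def dgc_can_dominate(gamma):
    """True when the guilt cost lies in the range where DGC dominates."""
    return lower < gamma < upper

assert dgc_can_dominate(3.0)       # an intermediate guilt cost works
assert not dgc_can_dominate(0.5)   # too small: guilt's effect is too weak
assert not dgc_can_dominate(12.0)  # too large: D outcompetes DGC
```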
To support the analytical results, we have also provided numerical simulation results; see Figure 1 (reproduced from Ref. [16]). Furthermore, those results have been generalised to consider non-extreme or radical guilt modelling (i.e. when 0 < G < +∞), showing that the obtained results are robust beyond the context of radical guilt strategies (for details see [16]).
Guilt, depending on an agent's strategy, may result in self-punishment, with an effect on fitness, and in a change of behaviour. In the first model of guilt, a guilt-prone agent is insensitive to whether the co-player also feels guilt on defection. This model does not afford cooperation enhancement, because guilt-prone agents are then free-ridden by non-guilt-prone ones. In our second model, guilt is not triggered in an agent who is sensitive to the fact that the defecting co-player does not experience guilt too, detectable for instance through telltale signs of eye-contact avoidance or frowning (see [19], page 60). It is this latter model that shows the improvement in cooperation brought about by the existence of guilt in the population, and how it becomes pervasive through the usual EGT phenomenon of social imitation. Another successful variation of this model allows one to stipulate guilt accumulation coupled with a triggering threshold.
3 CONCLUSIONS AND FUTURE WORK
We conclude that evolutionary biology and anthropology, like the cognitive sciences too [2, 5, 7, 11, 23], have much to offer in view of rethinking machine ethics, namely regarding the guilt emotion, evolutionary game theory simulations of computational morality, and functionalism to the rescue [18].

On the basis of psychological and evolutionary understandings of guilt, and inspired by these, this paper proffers and studies two analytical models of guilt, within a multi-agent system adopting a combination of diverse guilty and non-guilty strategies. To do so, it employs the methods and techniques of EGT, in order to identify the conditions under which enhanced cooperation emerges, improving on the case where guilt is absent.
Players evaluate others by their actions of cooperation or defection, whether in the IPD or other models of cooperation. Notwithstanding, they care not simply about whether game partners cooperate, but pay attention to their decision-making process too. More trust is ascribed to cooperators who have not even considered defecting at all. To quote Kant, "In law a man is guilty when he violates the rights of others. In ethics he is guilty if he only thinks of doing so." [13]. Hence, detecting another's proclivity to cheat, albeit checked by guilt, allots intention recognition an important role to play, even when the intention is not carried out [9, 10].
Our results provide important insights for the design of self-organised and distributed MAS: if agents are equipped with the capacity for feeling guilt, even though it might appear to lead to a disadvantage, the system is driven to an overall more cooperative outcome, wherein agents become willing to take reparative actions after wrongdoings.
In future research, the model shall be extended via our existing EGT models comprising apology, revenge, and forgiveness, by piggybacking guilt onto them [8, 15, 18], namely by associating the experiencing of guilt with joint commitment defection ([24], pp. 108-111).

Last but not least: currently we only consider one type of emotional strategy playing against an unemotional strategy. It is possible for strategies with multiple guilt thresholds to be co-present in the population. We envisage that different types might dominate in different game configurations, which we will analyse in future work.
ACKNOWLEDGEMENTS
LMP acknowledges support from FCT/MEC NOVA LINCS PEst UID/CEC/04516/2013. LAMV and TL from Fonds voor Wetenschappelijk Onderzoek (FWO) grant nr. G.0391.13N. TL also from Fondation de la Recherche Scientifique (FNRS) grant FRFC nr. 2.4614.12. TAH from Teesside URF funding (11200174).
REFERENCES
[1] Bert Brown, 'Face saving and face restoration in negotiation', in Negotiations: Social-Psychological Perspectives, ed., D. Druckman, 275-300, SAGE Publications, (1977).
[2] P. Churchland, Braintrust: What Neuroscience Tells Us about Morality, Princeton University Press, Princeton, NJ, 2011.
[3] Robert H. Frank, Passions Within Reason: The Strategic Role of the Emotions, Norton and Company, 1988.
[4] Benoit Gaudou, Emiliano Lorini, and Eunate Mayor, 'Moral guilt: An agent-based model analysis', in Advances in Social Simulation, volume 229 of Advances in Intelligent Systems and Computing, 95-106, Springer, (2014).
[5] M. S. Gazzaniga, The Ethical Brain: The Science of Our Moral Dilemmas, Harper Perennial, New York, 2006.
[6] Erving Goffman, Interaction Ritual: Essays in Face-to-Face Behavior, Random House, 1967.
[7] J. Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, The Penguin Press HC, New York, NY, 2013.
[8] T. A. Han, L. M. Pereira, F. C. Santos, and T. Lenaerts, 'Why Is It So Hard to Say Sorry: The Evolution of Apology with Commitments in the Iterated Prisoner's Dilemma', in Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'2013), AAAI Press, (2013).
[9] T. A. Han, F. C. Santos, T. Lenaerts, and L. M. Pereira, 'Synergy between intention recognition and commitments in cooperation dilemmas', Scientific Reports, 5(9312), (2015).
[10] The Anh Han, Intention Recognition, Commitments and Their Roles in the Evolution of Cooperation: From Artificial Intelligence Techniques to Evolutionary Game Theory Models, volume 9, Springer SAPERE series, 2013.
[11] M. D. Hauser, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, Little Brown, London, UK, 2007.
[12] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, 1998.
[13] Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete, 'Game theory and morality', in The Evolution of Morality, 289-316, Springer, (2016).
[14] Stacy Marsella and Jonathan Gratch, 'Computationally modeling human emotion', Communications of the ACM, 57(12), 56-67, (2014).
[15] Luis A. Martinez-Vaquero, The Anh Han, Luís Moniz Pereira, and Tom Lenaerts, 'Apology and forgiveness evolve to resolve failures in cooperative agreements', Scientific Reports, 5(10639), (2015).
[16] L. M. Pereira, T. Lenaerts, L. A. Martinez-Vaquero, and T. A. Han, 'Social manifestation of guilt leads to stable cooperation in multi-agent systems', in 16th Intl. Conf. on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, (May 2017, accepted).
[17] L. M. Pereira and A. Saptawijaya, Programming Machine Ethics, volume 26 of SAPERE series, Springer, 2016.
[18] Luís Moniz Pereira, 'Software sans emotions but with ethical discernment', in Morality and Emotion: (Un)conscious Journey into Being, ed., Sara Graça Dias Da Silva, 83-98, Routledge, (2016).
[19] Jesse J. Prinz, The Emotional Construction of Morals, Oxford University Press, 2007.
[20] Sarita Rosenstock and Cailin O'Connor, 'When it's good to feel bad: Evolutionary models of guilt and apology', working paper, (2016).
[21] Karl Sigmund, The Calculus of Selfishness, Princeton University Press, 2010.
[22] The Economist, 'March of the machines: a special report on artificial intelligence', June 25, 2016.
[23] M. Tomasello, A Natural History of Human Thinking, Harvard University Press, Cambridge, MA, 2014.
[24] Michael Tomasello, A Natural History of Human Morality, Harvard University Press, 2016.
... Inasmuch as emotions in humans are causally connected to behaviors, however, we can use these models to gain insight into what functional role emotions might play. If we see a behavior X selected for in environment Y, and we know that emotion Z causes that behavior in humans, we 1 For work doing just this see Han et al. (2013), Martinez-Vaquero et al. (2015), O'Connor (2016), Pereira et al. (2016Pereira et al. ( , 2017a, and Lenaerts et al. (2017). 2 Relatedly, Huttegger et al. (2015) show how in the general case of costly signaling, if signals are hard to fake the costs necessary to guarantee them are lower. 3 Games can also represent information that agents have about each other and the structure of the game, but in evolutionary scenarios, this element is less important. ...
... In addition, there is interest in the AI and computation communities in understanding the role something like guilt might play in artificial systems (Pereira et al., 2016(Pereira et al., , 2017a. In this case, the goal of an evolutionary model of guilt is not to accurately represent the historical pathways by which guilt might have evolved, but to show possibilities for how to stabilize guilt and cooperation in an evolving system. ...
Article
Full-text available
We use techniques from evolutionary game theory to analyze the conditions under which guilt can provide individual fitness benefits, and so evolve. In particular, we focus on the benefits of guilty apology. We consider models where actors err in an iterated prisoner’s dilemma and have the option to apologize. Guilt either improves the trustworthiness of apology or imposes a cost on actors who apologize. We analyze the stability and likelihood of evolution of such a “guilt-prone” strategy against cooperators, defectors, grim triggers, and individuals who offer fake apologies, but continue to defect. We find that in evolutionary models guilty apology is more likely to evolve in cases where actors interact repeatedly over long periods of time, where the costs of apology are low or moderate, and where guilt is hard to fake. Researchers interested in naturalized ethics, and emotion researchers, can employ these results to assess the plausibility of fuller accounts of the evolution of guilt.
... In addition, there is interest in the AI and computation communities in understanding the role something like guilt might play in artificial systems (Pereira et al., 2016(Pereira et al., , 2017a. In this case, the goal of an evolutionary model of guilt is not to accurately represent the historical pathways by which guilt might have evolved, but to show possibilities for how to stabilize guilt and cooperation in an evolving system. ...
Article
Full-text available
We use techniques from evolutionary game theory to analyze the conditions under which guilt can provide individual fitness benefits, and so evolve. In particular, we focus on the benefits of guilty apology. We consider models where actors err in an iterated prisoner's dilemma and have the option to apologize. Guilt either improves the trustworthiness of apology, or imposes a cost on actors who apologize. We analyze the stability and likelihood of evolution of such a 'guilt-prone' strategy against cooperators, defectors, grim-triggers, and individuals who offer fake apologies, but continue to defect. We find that in evolutionary models guilty apology is more likely to evolve in cases where actors interact repeatedly over long periods of time, where the costs of apology are low or moderate, and where guilt is hard to fake. Researchers interested in naturalized ethics, and emotion researchers, can employ these results to assess the plausibility of fuller accounts of the evolution of guilt.
Preprint
Why are we good? Why are we bad? Questions regarding the evolution of morality have spurred an astoundingly large interdisciplinary literature. Some significant subset of this body of work addresses questions regarding our moral psychology: how did humans evolve the psychological properties which underpin our systems of ethics and morality? Here I do three things. First, I discuss some methodological issues, and defend particularly effective methods for addressing many research questions in this area. Second, I give an in-depth example, describing how an explanation can be given for the evolution of guilt---one of the core moral emotions---using the methods advocated here. Last, I lay out which sorts of strategic scenarios generally are the ones that our moral psychology evolved to `solve', and thus which models are the most useful in further exploring this evolution.
Article
Full-text available
Here I summarize the main points in my 2016 book, A Natural History of Human Morality. Taking an evolutionary point of view, I characterize human morality as a special form of cooperation. In particular, human morality represents a kind of we > me orientation and valuation that emanates from the logic of social interdependence, both at the level of individual collaboration and at the level of the cultural group. Human morality emanates from psychological processes of shared intentionality evolved to enable individuals to function effectively in ever more cooperative lifeways.
Conference Paper
Full-text available
Inspired by psychological and evolutionary studies, we present here theoretical models wherein agents have the potential to express guilt, with the ambition to study the role of this emotion in the promotion of pro-social behaviour. To achieve this goal, analytical and numerical methods from evolutionary game theory are employed to identify the conditions for which enhanced cooperation emerges within the context of the iterated prisoner's dilemma. Guilt is modelled explicitly as two features, i.e. a counter that keeps track of the number of transgressions and a threshold that dictates when alleviation (through for instance apology and self-punishment) is required for an emotional agent. Such an alleviation introduces an effect on the payoff of the agent experiencing guilt. We show that when the system consists of agents that resolve their guilt without considering the co-player's attitude towards guilt alleviation then cooperation does not emerge. In that case those guilt-prone agents are easily dominated by agents expressing no guilt or having no incentive to alleviate the guilt they experience. When, on the other hand, the guilt-prone focal agent requires that guilt only needs to be alleviated when guilt alleviation is also manifested by a defecting co-player, then cooperation may thrive. This observation remains consistent for a generalised model as is discussed in this article. In summary, our analysis provides important insights into the design of multi-agent and cognitive agent systems where the inclusion of guilt modelling can improve agents' cooperative behaviour and overall benefit.
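The counter-and-threshold mechanism in this abstract can be sketched as follows; the field names, the threshold value, and the alleviation cost are illustrative assumptions, not the paper's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class GuiltState:
    """Guilt as two features: a transgression counter plus a threshold above
    which alleviation (e.g. apology, self-punishment) is required."""
    counter: int = 0
    threshold: int = 1             # illustrative value
    alleviation_cost: float = 0.5  # illustrative payoff cost of alleviating

def after_round(state, my_action, co_alleviates, conditional):
    """Update guilt after one round; return the payoff adjustment (<= 0).

    `conditional=True` is the variant where the focal agent only alleviates
    when a defecting co-player also manifests guilt alleviation -- the
    variant under which cooperation was found to thrive.
    """
    if my_action == "D":
        state.counter += 1
    if state.counter >= state.threshold:
        if conditional and not co_alleviates:
            return 0.0         # guilt stays pending against the guilt-free
        state.counter = 0      # guilt resolved through apology/self-punishment
        return -state.alleviation_cost
    return 0.0
```

Under the conditional variant, guilt stays pending against a co-player who shows no alleviation, so guilt-prone agents avoid the unilateral payoff loss that lets guilt-free defectors dominate unconditional alleviators.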
Book
Full-text available
A précis of Michael Tomasello's A Natural History of Human Morality.
Chapter
Full-text available
In this article we analyze the influence of a concrete moral emotion (i.e. moral guilt) on strategic decision making. We present a normal-form Prisoner's Dilemma with a moral component. We assume that agents evaluate the game's outcomes with respect to their ideality degree (i.e. how much a given outcome conforms to the player's moral values), based on two proposed notions of ethical preferences: Harsanyi's and Rawls'. Based on such a game, we construct an agent-based model of moral guilt, where the intensity of an agent's guilt feeling plays a determining role in her course of action. Results for both constructions of ideality are analyzed.
Article
Full-text available
Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas.
Article
Full-text available
There has been rapid growth of cross-disciplinary research employing computational methods to understand human behavior, as well as facilitate interaction between people and machines, with work on computational models of emotion becoming an important component. Modeling appraisal theory in an agent provides an interesting perspective on the relation between emotion and rationality. Appraisal theory argues that emotion serves to generalize stimulus response by providing more general ways to characterize types of stimuli in terms of classes of viable organism responses. Assessments such as desirability, coping potential, unexpectedness, and causal attribution are clearly relevant to any social agent, whether deemed emotional or not. Having been characterized in a uniform fashion, the appraisal results coordinate system-wide coping responses that serve to guide the agent's specific responses to the eliciting event, essentially helping the agent find its ecological niche. Models designed to inform intelligent systems might avoid some of the seemingly irrational coping responses humans adopt.
Article
Full-text available
Commitments have been shown to promote cooperation if, on the one hand, they can be sufficiently enforced, and on the other hand, the cost of arranging them is justified with respect to the benefits of cooperation. When either of these constraints is not met, it leads to the prevalence of commitment free-riders, such as those who commit only when someone else pays to arrange the commitments. Here, we show how intention recognition may circumvent such weakness of costly commitments. We describe an evolutionary model, in the context of the one-shot Prisoner's Dilemma, showing that if players first predict the intentions of their co-player and propose a commitment only when they are not confident enough about their prediction, the chances of reaching mutual cooperation are largely enhanced. We find that an advantageous synergy between intention recognition and costly commitments depends strongly on the confidence and accuracy of intention recognition. In general, we observe an intermediate level of confidence threshold leading to the highest evolutionary advantage, showing that neither unconditional use of commitment nor intention recognition can perform optimally. Rather, our results show that arranging commitments is not always desirable, but that they may be also unavoidable depending on the strength of the dilemma.
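The decision rule described in this abstract, predict first and propose a commitment only when the prediction is not confident enough, can be sketched as follows; the threshold value and the function name are illustrative assumptions:

```python
def choose_opening(confidence, predicted_cooperation, threshold=0.7):
    """Opening move for the focal player in a one-shot PD with commitments.

    `confidence` is the (externally supplied) accuracy estimate of the
    intention-recognition step; `threshold` is the illustrative confidence
    level below which arranging a costly commitment is preferred.
    """
    if confidence < threshold:
        return "propose_commitment"   # prediction too uncertain to rely on
    return "cooperate" if predicted_cooperation else "defect"
```

This also illustrates why an intermediate threshold can be best: `threshold=0` never commits (pure intention recognition), `threshold=1` always commits (pure costly commitment), and neither extreme performs optimally in the model summarized above.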
Book
What is morality? Where does it come from? And why do most of us heed its call most of the time? In Braintrust, neurophilosophy pioneer Patricia Churchland argues that morality originates in the biology of the brain. She describes the "neurobiological platform of bonding" that, modified by evolutionary pressures and cultural values, has led to human styles of moral behavior. The result is a provocative genealogy of morals that asks us to reevaluate the priority given to religion, absolute rules, and pure reason in accounting for the basis of morality. Moral values, Churchland argues, are rooted in a behavior common to all mammals--the caring for offspring. The evolved structure, processes, and chemistry of the brain incline humans to strive not only for self-preservation but for the well-being of allied selves--first offspring, then mates, kin, and so on, in wider and wider "caring" circles. Separation and exclusion cause pain, and the company of loved ones causes pleasure; responding to feelings of social pain and pleasure, brains adjust their circuitry to local customs. In this way, caring is apportioned, conscience molded, and moral intuitions instilled. A key part of the story is oxytocin, an ancient body-and-brain molecule that, by decreasing the stress response, allows humans to develop the trust in one another necessary for the development of close-knit ties, social institutions, and morality. A major new account of what really makes us moral, Braintrust challenges us to reconsider the origins of some of our most cherished values.
Chapter
Why do we think it's wrong to treat people merely as a means to an end? Why do we consider lies of omission less immoral than lies of commission? Why do we consider it good to give, regardless of whether the gift is effective? We use four simple game theoretic models—the Coordination Game, the Hawk–Dove game, the Repeated Prisoner's Dilemma, and the Envelope Game—to shed light on these and other puzzling aspects of human morality. We also justify the use of game theory for the study of morality and explore implications for group selection and moral realism.