A Survey of Psychological Games: Theoretical Findings and Experimental Evidence
Giuseppe Attanasi† and Rosemarie Nagel‡ §
1. Introduction: the role of psychological games
Most economic models assume that agents maximize their expected material payoff.
However, subjects in the lab exhibit persistent and significant deviations from this
self-interested maximizing behavior. A reasonable explanation for this behavior is that
players can be motivated not only by material (monetary) payoffs but also by what are
sometimes referred to as ‘psychological’ utilities. These are related to preferences that
are in some degree ‘other regarding’: they take others into account. Traditional game
theory does not provide enough tools to adequately describe many of these preferences:
the traditional approach assumes that utilities only depend on the actions that are chosen
by the players. By contrast, when players are emotional or motivated by reciprocity or
social respect, their utilities may also directly depend on the beliefs (about choices,
* Published in “Games, Rationality and Behaviour. Essays on Behavioural Game Theory and Experiments”, A. Innocenti and P. Sbriglia (eds.), Palgrave Macmillan, Houndmills, February 2008 (Ch. 9, pp. 204-232).
† Toulouse School of Economics (Georges Meyer Chair in Mathematical Economics, LERNA) and Bocconi University (Department of Economics).
‡ Universitat Pompeu Fabra (Department of Economics).
§ We are grateful to Pierpaolo Battigalli for the useful suggestions encouraging this survey and to Martin Dufwenberg for the helpful starting points we found in his doctoral dissertation. We are also grateful to Nikolaos Georgantzis and the participants in several seminars for helpful discussions and to Michel Serafinelli for useful research assistance. Of course the authors are responsible for any error in the chapter. Giuseppe Attanasi acknowledges financial support from Bocconi University and thanks Universitat Jaume I of Castelló for its hospitality during part of this project. Rosemarie Nagel acknowledges financial support from the Spanish Ministry of Education and Science under grant SEC2002-03403, and thanks the Barcelona Economics Program of CREA for support. Both authors thank HSS at Caltech for its hospitality while part of the chapter was written.
beliefs, or information) they hold. This is not to say that traditional game theory is
not able to analyze the influence of feelings, emotions and social norms on the players’
behavior. Distribution-dependent preferences à la Fehr and Schmidt (1999), for example,
can be addressed by traditional game theory. But when we deal with intention-based
feelings, emotions and social norms, i.e. belief-dependent motivations, we need to
turn to psychological game theory. This new framework focuses on strategic settings
where at least one player has belief-dependent motivations or believes, with a certain
probability, that one of his opponents has belief-dependent motivations. Nonetheless, it
allows for every other kind of social preference. In that sense, it can be interpreted as
a generalization of traditional game theory.
In games with belief-dependent motivations there are clearly two channels through
which beliefs and information affect behavior: the direct (psychological) impact of beliefs
on preferences over terminal histories, and the (traditional) impact of (updated) beliefs
about the opponents on the preferences over own strategies. Geanakoplos, Pearce and
Stacchetti (1989) is the seminal paper that shows the inadequacy of traditional methods
in representing the involved preferences, and develops extensions of traditional game
theory in order to deal with the matter. Battigalli and Dufwenberg (2005) generalize
and extend Geanakoplos, Pearce and Stacchetti (1989), thus providing the framework
we use to analyze games with belief-dependent motivations both from a theoretical and
from an experimental point of view.
Experimental evidence gives support to the theories of belief-dependent motivations.
Even when not explicitly designed to test such motivations, several experimental works
provide results that are in line with psychological game theoretical predictions. Some of
these experiments can be seen as an indirect proof of the relevance of psychological games
in explaining certain forms of strategic interaction. Some others have been explicitly
designed to test the relevance of psychological game theory.
The plan of the chapter is as follows. In section 2, we provide the main theoretical
insights of the general psychological game framework, through the example of a simple
trust game in which belief-dependent motivations are involved. In section 3, we describe
and discuss the main experimental papers that directly or indirectly refer to psycholog-
ical game-theoretic explanations as being able to rationalize their experimental results.
Throughout the chapter, we try to clarify ideas and to dispel misconceptions that have
emerged among economists about this new field, in order to stress the role and the
importance of psychological games both for the analysis of strategic interaction and for
the related experimental research.
2. Main Theoretical Findings
2.1. Theoretical Literature
Geanakoplos, Pearce and Stacchetti (1989; henceforth GPS) introduce belief-dependent
motivation into strategic decision making. They develop a new analytical framework cen-
tred on the concept of psychological game (or game with belief-dependent motivations),
i.e. a strategic interactive situation in which players’ utilities do not only depend on ter-
minal nodes but also on the beliefs (about choices, beliefs, or information) they hold.1
GPS also present several examples that illustrate the inadequacy of traditional methods
in representing preferences that reflect various forms of belief-dependent motivation.
The GPS framework may be seen as a generalization of a traditional game, able to model
only some of the specific belief-dependent motivations in strategic settings: as stated
by Battigalli and Dufwenberg (2005; henceforth BD), ‘GPS’s toolbox of psychological
games incorporates several restrictions that rule out many plausible forms of belief-
dependent motivation’ (p. 41). In particular, GPS only allow initial beliefs to enter
the domain of a player’s utility; however, many seemingly important forms of belief-
dependent motivations require the introduction of updated beliefs.
BD generalize and extend GPS, by allowing updated higher-order beliefs, beliefs of
others, planned strategies, and incomplete information to influence motivation. Among
other advances, BD address the issue of how beliefs about others’ beliefs are revised as
the play unfolds: they are able to model dynamic psychological effects that are ruled
out when epistemic types are identified with hierarchies of initial beliefs. They also
define a notion of Psychological Sequential Equilibrium, which generalizes the sequential
equilibrium notion for traditional games, for which they prove existence under mild
assumptions. In the next paragraph we will use their notion of psychological sequential
equilibrium to solve a simple two-stage psychological game.
The most well-known example of a psychological-game based application is Rabin’s
(1993) model of intention-based reciprocity, according to which players wish to act kindly
(unkindly) in response to kind (unkind) actions. The key notion of kindness depends on
beliefs in such a way that reciprocal motivation can only be described using psychological
game theory.
However, the range of topics that have been explored in models of belief-dependent
1As Dufwenberg (2006) correctly points out, ‘the term game with belief-dependent motivation would
be more descriptive than the term psychological game, but we stick with the latter which has become
established’ (p. 2).
motivation is still limited. BD suggest that there is a variety of interesting forms of
belief-dependent motivations waiting to be analytically explored. In his survey paper
on ‘Emotions and Economic Theory’, Elster (1998) argues that a key characteristic of
emotions is that ‘they are triggered by beliefs’ (p. 49). He discusses, inter alia, anger,
hatred, guilt, shame, pride, admiration, regret, rejoicing, disappointment, elation, fear,
hope, joy, grief, envy, malice, indignation, jealousy, surprise, boredom, sexual desire,
enjoyment, worry, and frustration.
2.2. An example: trust games as psychological games
Let us show and analyze some of the more important features of BD’s psychological
game-theoretic framework by means of a simple trust game where some belief-dependent
motivations can emerge.
Figure 1 Trust Game with material payoffs
We build on a simple two-stage game representing the following economic situation of
strategic interaction. Player A (the truster, ‘he’) and B (the trustee, ‘she’) are partners
on a project that has thus far yielded total profits of 2€. Player A has to decide whether
to withdraw from the project or not. If player A dissolves the partnership, the contract
dictates that the players split the profits fifty-fifty. If player A leaves his resources in the
project, total profits would be higher (4€); however, according to the contract, in that
case player B has the right to share or not the profits after the project is completed.
So, player A must decide whether to Dissolve or to Continue the partnership, without
knowing if there will be profit sharing in case he continues. After knowing player A’s
choice and only in case player A has chosen to Continue the partnership, player B has
to decide whether to Take or Share the higher profits. The game tree with the material
payoffs is represented in Figure 1. Payoffs are in euros and do not necessarily represent
preferences. For this reason we call them ‘material payoffs’.
We know from traditional game theory that the unique subgame perfect equilibrium
of the two-stage trust game (with material payoffs) in Figure 1 is player A choosing
Dissolve and player B choosing Take if A would Continue.
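The backward-induction argument can be checked mechanically. Below is a minimal sketch (the helper names are ours; the payoffs are taken from Figure 1) that hard-codes the tiny game tree and solves it stage by stage:

```python
# Backward induction on the material-payoff trust game of Figure 1.
# Material payoffs (pi_A, pi_B) at the three terminal histories.
PAYOFFS = {
    "Dissolve": (1, 1),
    ("Continue", "Share"): (2, 2),
    ("Continue", "Take"): (0, 4),
}

def backward_induction():
    # Stage 2: B picks the action maximizing her material payoff after Continue.
    b_choice = max(["Share", "Take"], key=lambda a: PAYOFFS[("Continue", a)][1])
    # Stage 1: A compares Dissolve with Continue, anticipating B's reply.
    a_payoff_continue = PAYOFFS[("Continue", b_choice)][0]
    a_choice = "Continue" if a_payoff_continue > PAYOFFS["Dissolve"][0] else "Dissolve"
    return a_choice, b_choice

print(backward_induction())  # ('Dissolve', 'Take')
```

B takes (4 > 2), so A dissolves (1 > 0), reproducing the unique subgame perfect equilibrium stated in the text.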
Now, suppose that we represent this game in a laboratory and let participants play
it in pairs. According to the experimental literature on this subject,2 one should expect
quite a high percentage of pairs with outcomes (Continue;Share).
A reason for this could be the bounded rationality of some A or B players (or both).
However, it seems very difficult to maintain that in such a simple and clear game a quite
high number of players (or pairs) are not able to understand the rules of the game or
to calculate their profit-maximizing action given their expectations (first-order beliefs)
about their opponent’s choice.3
Another reason could be that player A or player B are motivated not only by self-
interest, but also, at least some of them, by ‘social preferences’.
Let us first concentrate on distribution-dependent preferences à la Fehr and Schmidt
(1999). As stated above, this kind of preferences can be addressed by traditional game
theory. More specifically, let us suppose that B is inequity averse (i.e. motivated by
fairness) and that this is common knowledge among players. Formally, B’s preferences
are represented by the utility function
u_B(s_A, s_B) = π_B(s_A, s_B) − k·max{0, π_A(s_A, s_B) − π_B(s_A, s_B)} − h·max{0, π_B(s_A, s_B) − π_A(s_A, s_B)}

where s_i is the strategy of player i = A, B, π_i(s_A, s_B) is the material payoff of player
i = A, B and k, h are positive parameters such that k ∈ [0, 1] and h ∈ [0, k].
Let us also suppose that A is a self-interested player and knows u_B(s_A, s_B). Given the payoff structure of our
simple trust game, the utility function of B reduces to u_B(s_A, s_B) = π_B(s_A, s_B) −
2Among others, Charness and Dufwenberg (2006). See section 3 for a complete review of the exper-
imental literature on this family of trust games.
3 A reasonable explanation for outcomes that partially differ from the subgame perfect equilibrium
one could be the ‘incorrectness of beliefs’ of some players. For example, suppose that A and B are
both self-interested and rational, i.e. they maximize their (expected) material payoff. Suppose also that
A believes that B is not rational and so he chooses Continue, being quite sure that B will choose Share
after Continue. Being rational and self-interested, B instead chooses Take after Continue and so the
outcome (Continue;Take) takes place.
h·max{0, π_B(s_A, s_B) − π_A(s_A, s_B)}, and so (Continue, Share) is the (unique) equilibrium
outcome of the trust game in case B is highly inequity averse, i.e. k ∈ (1/2, 1]
and h ∈ (1/2, k]. In that case, (Continue, Share) outcomes should emerge experimentally,
independently of B’s expectation of A’s expectation on B choosing Share after Continue.
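Under the assumptions above, B’s choice after Continue reduces to comparing 2 (Share) with 4 − 4h (Take), so she shares exactly when h > 1/2. A minimal numeric sketch (the helper names are ours, not from the chapter):

```python
# Fehr-Schmidt utility of the trustee B in the trust game of Figure 1.
def u_B(pi_A, pi_B, k, h):
    """Inequity-averse utility: own payoff minus envy (k) and guilt (h) terms."""
    return pi_B - k * max(0, pi_A - pi_B) - h * max(0, pi_B - pi_A)

def b_shares(k, h):
    # After Continue, B compares Share, payoffs (2, 2), with Take, payoffs (0, 4).
    return u_B(2, 2, k, h) > u_B(0, 4, k, h)

print(b_shares(k=0.6, h=0.6))  # True:  Take yields 4 - 4*0.6 = 1.6 < 2
print(b_shares(k=0.6, h=0.3))  # False: Take yields 4 - 4*0.3 = 2.8 > 2
```

Note that beliefs play no role here: the threshold h > 1/2 depends only on the payoff structure, which is exactly why purely distributional preferences stay within traditional game theory.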
However, as we shall report below, experimental evidence on trust games shows
that there is a certain correlation between B’s expectation of A’s trust on her and
trust fulfilment; that can be explained by a particular kind of social preferences: those
expressed as belief-dependent motivations. As said above, traditional game theory is
ill-equipped to address such preferences. For that reason, we need to introduce some
psychological game tools.
Consider the game with material payo¤s in Figure 1.
Let us first suppose that A is a self-interested player, while B is (also) a guilt-averse player.
Guilt aversion can be defined as follows: people suffer from guilt if they inflict
harm on others; although guilt could have a variety of sources, one preeminent way
to inflict harm is to let others down (see Tangney, 1995). Battigalli and Dufwenberg
(2007) develop a general theory of guilt aversion and show how to solve for sequential
equilibria. Their approach can be easily applied to our simple trust game, whenever
we assume that B is affected by guilt. Moreover, Charness and Dufwenberg (2006)
suggest that sensitivity to guilt imposes a specific behavior on the trustee (B):
suppose that in an experiment it is possible to truthfully elicit B’s expectation of A’s
expectation that B would Share after Continue (B’s second-order belief of Share). Given
that the (Continue, Share) outcome indicates trust fulfilment, as we shall report below,
experimental evidence on trust games shows a positive correlation between B’s second-order
belief of Share (after Continue) and trust fulfilment, and thus a higher mean guess
among Bs who choose Share after Continue than among Bs who choose Take after Continue.
We first consider the psychological utility of player A. Since A is only motivated by
self-interest, his sensitivity to each possible belief-dependent motivation is equal
to zero. Therefore, A’s total utility function reduces to his material payoff.
Next, let us concentrate on the psychological utility of player B. Since B is motivated
by guilt aversion, she takes into account the disappointment of player A when the
material payoff he expects to receive after Continue does not match the one he actually
receives. The material payoff A expects to receive after Continue depends on his
first-order belief about B’s strategy.
Let us analyze the situation from a formal point of view: define A’s initial first-order
belief that B will Share if A chooses Continue as α_A = Pr_A[Share if Continue].
Hence, α_A measures A’s trust on B before the game starts. Define also B’s conditional
second-order belief that she would Share if A would Continue as β_B = E_B[α_A | Continue].
Hence, β_B measures B’s expectation of A’s trust on her, given that B knows that A has
chosen Continue.
Given the notation we introduced, we can express A’s expected material payoff after
Continue as E_{Cont, α_A}[π_A] = 2·α_A + 0·(1 − α_A) = 2α_A. How much would A feel ‘let
down’ after (Continue, Take)? According to BD, the amount of his disappointment is
exactly −2α_A, i.e. the difference between the payoff he receives after (Continue, Take),
which is zero, and the payoff he would have received after (Continue, Share).
B’s guilt is given by her expectation of A’s disappointment, given that A has chosen
Continue, i.e. B’s expectation of (−2α_A), given Continue. This amount, −2β_B, multiplied
by B’s sensitivity to guilt, θ^g_B ≥ 0, is exactly B’s psychological utility
when A chooses Continue and she chooses Take. B’s total utility after (Continue, Take)
is thus 4 − 2θ^g_B·β_B, i.e. the sum of her material payoff and her psychological utility.
The psychological game representing the trust game with a self-interested truster and
a guilt-averse trustee is depicted in Figure 2. What appears at the terminal histories
should be thought of as utilities, not as material payoffs, although the two notions
coincide for all but one of the terminal histories.
Figure 2 Trust Game with guilt aversion (B’s utility after (Continue, Take) is 4 − 2θ^g_B·β_B; material payoffs elsewhere).
4 In general, we use the Greek letter α to refer to first-order beliefs, and the letter β to refer to second-order beliefs.
Looking at Figure 2, we can conclude that in the simple trust game we are analyzing, B
exhibits guilt aversion if her expected utility from playing Take after Continue depends
negatively on her expectation of α_A, conditional on A choosing Continue.
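In this formulation B prefers Share exactly when 2 > 4 − 2θ^g_B·β_B, i.e. when β_B > 1/θ^g_B. A small sketch of the comparison (variable names are ours; theta_g and beta stand for the text’s θ^g_B and β_B):

```python
# Guilt-averse trustee of Figure 2: total utility and best reply after Continue.
def u_B_take(theta_g, beta):
    # Material payoff 4 minus the guilt term 2 * theta_g * beta.
    return 4 - 2 * theta_g * beta

def b_shares(theta_g, beta):
    # Share yields utility 2 with no guilt; Take yields 4 - 2*theta_g*beta.
    return 2 > u_B_take(theta_g, beta)  # equivalently, beta > 1 / theta_g

print(b_shares(theta_g=2.0, beta=0.8))  # True:  4 - 3.2 = 0.8 < 2
print(b_shares(theta_g=2.0, beta=0.3))  # False: 4 - 1.2 = 2.8 > 2
```

The best reply depends on β_B, B’s belief about A’s belief, which is precisely the belief-dependence that traditional payoff functions cannot encode.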
Let us now suppose that A is again a self-interested player, while B is (also) a
reciprocity-concerned player.
Reciprocity has two sides: positive reciprocity, where a player is kind in return to
another one’s kind choice, and negative reciprocity, where a player is unkind in return
to another one’s unkind choice. Rabin’s (1993) theory of reciprocity, in which players
reciprocate belief dependent (un)kindness with (un)kindness, is probably the most well-
known application of GPS’s psychological game theory. Rabin works with the normal
form version of GPS’s theory. His goal is to highlight certain key qualitative features
of reciprocity, and he does not address issues of dynamic decision making, although he
points out that this is important for applied work (p. 1296). Dufwenberg and Kirchsteiger
(2004), while building on the same framework as Rabin in defining reciprocity,
depart from it, developing a theory of reciprocity for extensive games. Like BD, in dealing
with sequential reciprocity they argue that it is necessary to deviate from GPS’s extensive-form
framework: GPS only allow initial beliefs to enter the domain of a player’s
utility, while the modeling of reciprocal responses at various nodes of a game tree requires
kindness to be re-evaluated using updated beliefs.
We build on the same intuition as Rabin (1993) according to which modeling reci-
procity may require belief-dependent utilities, since kindness and perceived kindness de-
pend on beliefs. And we also take into account the suggestion of Dufwenberg and Kirch-
steiger (2004) that reciprocity in a dynamic setting is a ‘conditional’ belief-dependent
motivation, in the sense that updated beliefs matter when evaluating the kindness of a
player. Nonetheless, following Battigalli (2007), we model reciprocity in an easier and
more direct way compared to the formal definition of Rabin (1993), although we build
on the same form of belief-dependency.
Let us start by defining A’s kindness (and B’s perceived kindness) through a simple
question: if A chooses Continue, can we safely postulate that he has been kind to B?
The answer is ‘possibly not’, because, in case A is quite certain that B would choose
Share after Continue (say, α_A > 1/2), he is just maximizing his material payoff. Thus,
when A chooses Continue, he is (and is perceived to be) more kind, the less he thinks
that B would choose Share after Continue. Obviously, in case A chooses Dissolve, he is
not kind to B and we can intuitively state, without loss of generality, that A’s kindness
to B is negative (see the exact calculation below). Hence, Continue may be deemed
‘kind’ if α_A is low. Continue is perceived as kind by B if β_B is low. If β_B is low and
B is highly motivated by reciprocity considerations, she reciprocates and chooses Share
after Continue.
Formally, we define the ‘kindness’ of player i as a function of his strategy and beliefs:5
i is kind (unkind) to j if he intends to make j get more (less) money than a context-dependent
‘equitable payoff’ π^{e,i}_j, with i, j = A, B, i ≠ j. For example, the equitable
payoff ascribed by B to A, given B’s beliefs, can be expressed with the formula

π^{e,B}_A = (1/2) ( max_{s_B} E_B[π_A; α_B, s_B] + min_{s_B} E_B[π_A; α_B, s_B] )

where α_B = Pr_B[Continue], i.e. it measures B’s initial first-order belief that A would
choose Continue. The equitable payoff is defined in this case as the average between the
highest and the lowest expected payoff that B can allow to A, given her initial belief
on the action chosen by A. Therefore, we can define the ‘kindness’ of B towards A as the
difference between the expected payoff B would allow to A and the equitable payoff, i.e.

K_B(α_B, s_B) = E_B[π_A; α_B, s_B] − π^{e,B}_A
Notice that B’s kindness depends on B’s intentions.
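The equitable payoff and B’s kindness can be computed directly from these definitions. In this game the equitable payoff B ascribes to A turns out to be 1 for any α_B, so K_B(α_B, Share) = α_B and K_B(α_B, Take) = −α_B. A sketch (the helper names are ours):

```python
# Equitable payoff and kindness of B towards A in the trust game of Figure 1.
def E_pi_A(alpha_B, s_B):
    """A's expected material payoff, given B's belief alpha_B = Pr_B[Continue]
    and B's strategy s_B in {'Share', 'Take'} (played after Continue)."""
    return alpha_B * (2 if s_B == "Share" else 0) + (1 - alpha_B) * 1

def equitable_payoff(alpha_B):
    # Average of the highest and lowest expected payoff B can allow to A.
    payoffs = [E_pi_A(alpha_B, s) for s in ("Share", "Take")]
    return (max(payoffs) + min(payoffs)) / 2

def kindness_B(alpha_B, s_B):
    return E_pi_A(alpha_B, s_B) - equitable_payoff(alpha_B)

print(equitable_payoff(0.5))     # 1.0 (and the same for any alpha_B)
print(kindness_B(0.5, "Share"))  # 0.5  (kind)
print(kindness_B(0.5, "Take"))   # -0.5 (unkind)
```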
In the same way, we can define E_B[K_A; β_B], i.e. player B’s perceived kindness of
A. Since A’s kindness depends on the first-order belief of A (his belief about s_B),
player B’s perceived kindness depends on the second-order belief of B (B’s belief about
A’s belief).
Given B’s sensitivity to reciprocity, θ^r_B ≥ 0, we express the psychological part of B’s
(total) utility function when she is sensitive to reciprocity as in Battigalli (2007), i.e. as
the product of B’s sensitivity to reciprocity, her perceived kindness of A, and A’s material
payoff.6 Then, the total expected utility function of B when she is (also) concerned with
5 We point out a subtle issue. Here we follow BD and assume that the kindness of a player depends
on his beliefs and his actual strategy. This means that observing a player’s behavior may force a change
in the perception of his kindness. Battigalli (2007) notes that kindness should be more appropriately
modeled as a function of a player’s beliefs about the other’s behavior and about his own behavior.
When coupled with the trembling-hand logic of sequential equilibrium (according to which deviations
are unintentional and players never change their beliefs about the beliefs of others), this would imply
that the perception of the co-player’s kindness is always the same, on and off the equilibrium path, thus
trivializing the equilibrium analysis of sequential reciprocity.
6Battigalli (2007) himself acknowledges that several functional forms could be able to adequately
represent the psychological preferences of a player concerned with intention-based reciprocity. Among
other reasons, we chose to use his formulation because of its simplicity and tractability.
reciprocity is

u_B((α_B, s_B); β_B) = π_B(α_B, s_B) + θ^r_B · E_B[K_A; β_B] · π_A(α_B, s_B)
Given that we assumed A to be a self-interested player, we have θ^r_A = 0, hence again
A’s total utility function reduces to his material payoff.
Next, let us concentrate on the psychological utility of player B. According to Battigalli’s
(2007) formulation, A’s kindness when he chooses Continue is

K_A(Continue, α_A) = 2α_A + 4(1 − α_A) − (1/2)(2α_A + 4(1 − α_A) + 1) = 3/2 − α_A

and so B’s perceived kindness when A chooses Continue is E_B[K_A; β_B] = 3/2 − β_B.

- B’s total utility after (Continue, Share) is given by

u_B((Continue, Share); β_B) = 2 + θ^r_B · (3/2 − β_B) · 2 = 2 + θ^r_B(3 − 2β_B)

- B’s total utility after (Continue, Take) is given by

u_B((Continue, Take); β_B) = 4 + θ^r_B · (3/2 − β_B) · 0 = 4

Using the same procedure, we can calculate B’s total utility after Dissolve, i.e.

u_B((Dissolve); β_B) = 1 + θ^r_B · (β_B − 3/2) · 1 = 1 + θ^r_B(β_B − 3/2)

which is no greater than 1, given that θ^r_B ≥ 0 and β_B ≤ 1.
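The comparison B faces after Continue, Share iff 2 + θ^r_B(3 − 2β_B) > 4, can be sketched numerically as follows (the names are ours; theta_r and beta stand for θ^r_B and β_B):

```python
# Reciprocity-concerned trustee of Figure 3: total utilities and best reply.
def perceived_kindness(beta):
    # E_B[K_A; beta_B] after Continue: K_A(Continue, alpha_A) = 3/2 - alpha_A.
    return 1.5 - beta

def u_share(theta_r, beta):
    # Material payoff 2 plus theta_r * perceived kindness * A's payoff (2).
    return 2 + theta_r * perceived_kindness(beta) * 2

def u_take(theta_r, beta):
    # A's payoff after Take is 0, so the psychological term vanishes.
    return 4 + theta_r * perceived_kindness(beta) * 0

def b_shares(theta_r, beta):
    return u_share(theta_r, beta) > u_take(theta_r, beta)

print(b_shares(theta_r=2.0, beta=0.5))  # True:  2 + 2*1.0*2 = 6 > 4
print(b_shares(theta_r=0.5, beta=0.5))  # False: 2 + 0.5*1.0*2 = 3 < 4
```

As with guilt aversion, the best reply turns on β_B: the less B thinks A expected Share, the kinder Continue looks, and the stronger her motive to reciprocate.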
The psychological game representing the trust game with a self-interested truster
and a reciprocity-concerned trustee is depicted in Figure 3. Again, what appears at the
terminal histories should be thought of as utilities, not as material payoffs, although the
two notions coincide for one of the terminal histories.
7 From now on, we denote with the ‘bold’ labels Take and Share player B’s strategies ‘Take if
Continue’ and ‘Share if Continue’ respectively.
Figure 3 Trust Game with reciprocity (terminal utilities: 2 + θ^r_B(3 − 2β_B) after (Continue, Share), 4 after (Continue, Take), 1 + θ^r_B(β_B − 3/2) after Dissolve).
2.3. Psychological games: some misconceptions dispelled
In this subsection we want to analyze more deeply some of the features of our psycho-
logical game framework in order to make the main concepts clearer and to dispel some
misconceptions that could emerge while comparing psychological game theoretical tools
to the well-known features of traditional game theory. We try to answer the questions
that could be asked by a reader with a good background in game theory who approaches
the games depicted in Figure 2 and in Figure 3 for the first time. We bear in mind the
main assumptions of the general framework of psychological games as described in BD.
Question 1. In Figure 2, why do we suppose that player B cares about A’s disappointment
even though A is just an expected material payoff maximizer?
Answer. Our analysis is descriptive, not prescriptive. Utility functions are meant
to help analyze players’ behavior. They describe hypothetical preferences (i.e. preferences
conditional on some hypothesis). In that sense, utilities in Figure 2 are only ‘instruments’.
We do not use them to represent players’ happiness at the end of the game. It is possible
that A feels disappointment but his behavior is not affected by the anticipation of
such disappointment. Nonetheless, even though A’s feelings do not influence his own
behavior, it is possible that B cares about them.
Moreover, for the trust game it can be shown:
- that the qualitative analysis does not change if we put disappointment in A’s utility function;
- that the set of pure-strategy equilibria of the psychological game is the same both
in case we add A’s disappointment to his utility function and in case we do not.
Hence, we choose not to consider it, in order to simplify both the equilibrium calculations
and the analysis of the utility functions of the game.
Question 2. Given the high number of unknown parameters in a psychological game,
under which conditions can we say that we are in a situation of complete information?
Answer. If we assume that the material game form in Figure 1 is common knowl-
edge, and that players are expected material payo¤ maximizers and this is also common
knowledge, then the trust game in Figure 1 has complete information. If we assume that
players have psychological (belief-dependent) preferences, then it is more reasonable to
allow for incomplete information, which comes from two sources:
(i) player i does not know which feeling j is sensitive to or, even if i knows what the
feeling is, j does not know that i knows it, and so on. In our trust game, this happens
for example if A does not know whether B is only self-interested, or (also) sensitive
to guilt, or to reciprocity, or to both. This means that A does not know if he is
playing the trust game in Figure 1, the one in Figure 2, the one in Figure 3, or
a trust game in which B’s total utility after (Dissolve) is 1 + θ^r_B(β_B − 3/2), after
(Continue, Take) is 4 − 2θ^g_B·β_B and after (Continue, Share) is 2 + θ^r_B(3 − 2β_B),
i.e. a psychological game which includes both guilt and reciprocity of player B.8
(ii) even if i knows which feeling j is sensitive to, i does not know j’s sensitivity to
that specific feeling. This happens, for example, in case A and B know they are
playing the psychological game in Figure 2, but A does not know the value of θ^g_B.9
In strategic settings in which the two players do not know each other very well, condition
(i) applies. In all these cases, we deal with a Psychological Game with Incomplete
Information.
Now suppose that we are in a setting in which condition (i) does not apply, i.e. the set
of belief-dependent motivations to which each player is sensitive is common knowledge
among players. This is the case, for example, in which the game structure in Figure 2
8Another case would be that A knows with certainty that B is sensitive to, say, guilt aversion, but
B does not know A knows that.
9 Another case would be that in which A and B know they are playing the psychological game in
Figure 2, A knows the value of θ^g_B, but B does not know that A knows it.
is common knowledge. Then, we say that the trust game in Figure 2 is a Psychological
Game with Complete Information when θ^g_B is commonly known among players. In case
θ^g_B is not commonly known among players, we deal again with a Psychological Game
with Incomplete Information.10
In both cases, β_B is unknown to A and is endogenous. Therefore, even though information
is complete, in a psychological game there is still a source of uncertainty that is not
resolved even after the end of the game. This is because A will not know the realized
value of B’s conditional second-order belief in any of the three end nodes of the game of
Figure 2.
Question 3. According to BD, an easier way to represent and analyze the psychological
game in Figure 2 is to write B’s utility after (Continue, Take) as 4 − 2θ^g_B·α_A;
given that B does not know A’s first-order belief about her strategy, suppose that she
maximizes the expected value of the previous expression. In other words, B maximizes
the expected value of a state-dependent utility function. It may be argued that it does
not make any sense to insert a state variable in a utility function if that state is not
revealed ex post or if that utility is not ‘experienced’ ex post. This is because, even after
the end of the game, B would not know the exact value of α_A and so she would not
know her exact total utility after (Continue, Take). Therefore, a reasonable question
could be: does it make sense to express B’s belief-dependent motivations by inserting
α_A into B’s utility function?
Answer. As emphasized above, the utility functions are analytical tools used to
explain players’ behavior, not to measure their happiness: uB does not represent the
utility ‘experienced’ by B. In the simple trust game we analyze, we just want to describe
the behavior of player B, who maximizes the expected value of her total utility (material
payoff and psychological utility). Therefore, it is not necessary for the state variable or
the induced utility to be indeed ‘experienced’ ex post. Their only role is to help us
analyze players’ motivations. Doing that by using (initial) first-order beliefs is much
simpler than using (conditional) second-order ones.
Question 4. What is the difference between psychological games (with complete or
incomplete information) and traditional games with incomplete information?
Answer. The payoffs of a (traditional) game with incomplete information depend on
players’ actions and on an exogenous parameter representing the state of nature, about
which players are asymmetrically informed. In psychological games, players’ beliefs in
10 The same definitions can be extended to the trust game in Figure 3, according to whether θ^r_B is common
knowledge or not, respectively.
the utility functions are not (exogenous) parameters, but rather endogenous variables.
Consider the psychological game in Figure 2: β_B is endogenous because α_A is an
endogenous variable, and any belief about an endogenous variable is itself endogenous. β_B is a
conditional second-order belief (conditional on A choosing Continue) and, at the same
time, a terminal belief. Moreover, what B chooses to do is endogenous, but, by con-
struction, β_B is not influenced by B's choice: B's beliefs about her opponent's beliefs and
actions do not depend on what she decides to do after A has chosen Continue.
2.4. Solving a Psychological Game with Complete Information
In this subsection, we show how to solve a Dynamic Psychological Game with Complete
Information. More specifically, we solve the two psychological games in Figure 2 and
in Figure 3, supposing respectively that θ^g_B and θ^r_B are commonly known among players.
Although it may seem unrealistic to suppose that the players' sensitivities to these feelings are
common knowledge, this assumption is needed as a first step in order to predict players'
behavior when they hold belief-dependent motivations.
In general, the equilibrium concepts we use to solve psychological games
are not 'different' from those used in traditional game theory. Starting from GPS, BD
simply apply the same logic as Nash Equilibrium to solve static psychological games,
and the standard logic of backward induction and of the (traditional) sequential equilibrium
to solve dynamic psychological games. They are somehow 'forced' to generalize these
concepts in order to account for the fact that in psychological games beliefs enter
players' utility functions. This creates two additional problems that need to be taken
into account when dealing with equilibrium behavior:
(a) the condition of correctness of beliefs also requires correctness of stated utilities.
Given that different orders of beliefs are involved in the analysis, we have to ex-
plicitly impose that they are correct in equilibrium (for example, in each of the
games in Figures 1, 2 and 3, it must be that α_A = β_B in equilibrium). Nonetheless, when
checking that in equilibrium players maximize their total utilities at all decision
nodes given their correct beliefs about one another's actions, we must take into
account that some of these beliefs (whatever their order) are part of (some
of) the utilities of (some) players. Hence, in equilibrium, players maximize their
total utilities measured according to the correct beliefs they hold about the actions and
beliefs of their opponents and, at the same time, (correct) beliefs of different orders
match players' best replies calculated according to the 'correctly' stated total util-
ities. This issue is addressed by GPS and generalized by BD, who allow for both
own beliefs and beliefs of others in the (total) utility function; they also provide
a general framework able to account for conditional beliefs of any order in
players' (total) utility functions.
(b) as the play unfolds, beliefs about the beliefs of others are revised; but players'
belief updating also leads to the updating of players' utilities.
Building on the theory of hierarchies of conditional beliefs due to Battigalli and
Siniscalchi (1999), BD model a universal belief space that accounts for updated
beliefs about others' beliefs. The idea is that, in a psychological game, in order to
decide on the best course of action, player i may need to form (conditional) beliefs
about the infinite hierarchies of (conditional) beliefs of other players, because these
enter her psychological utility function. For example, in the trust games in Figure
2 and Figure 3, since β_B enters B's psychological utility function, in order to decide
whether she prefers to Take or to Share after Continue, B may need to form conditional
beliefs about the first-order beliefs of A. But when B updates her second-order
beliefs after A has chosen Continue, her utility from choosing Take after Continue
is also updated.
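The fixed-point logic behind the correctness of beliefs in equilibrium can be illustrated with a small numerical sketch. All numbers are hypothetical (not taken from the figures): writing theta_gB for B's guilt sensitivity, we assume B earns 3 from Share and 4 from Take, with guilt reducing the Take-utility by theta_gB times B's second-order belief. In equilibrium beliefs are correct, so A's first-order belief, B's second-order belief, and the probability q with which B actually shares must all coincide.

```python
# Hypothetical trust-game numbers (assumed, not from the survey's figures):
# B's material payoff is 3 from Share and 4 from Take; guilt reduces the
# Take-utility by theta_gB * beta_B, where beta_B is B's second-order belief.
theta_gB = 2.0
share_payoff, take_payoff = 3.0, 4.0

def share_prob_best_reply(beta_B):
    """B's best-reply probability of sharing, given her second-order belief."""
    take_utility = take_payoff - theta_gB * beta_B
    if share_payoff > take_utility:
        return 1.0
    if share_payoff < take_utility:
        return 0.0
    return beta_B  # indifferent: any mixture is a best reply; pick the consistent one

# A psychological equilibrium requires correct beliefs: the shared value q must
# be a fixed point, q = best_reply(q). Search a grid of candidate beliefs.
equilibria = [q / 100 for q in range(101)
              if abs(share_prob_best_reply(q / 100) - q / 100) < 1e-9]
print(equilibria)
```

With these assumed numbers the sketch finds three equilibria (q = 0, 0.5, and 1): belief-dependent guilt makes B's utility from Take depend on the very belief that must be correct in equilibrium, which is how the multiplicity arises.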
GPS propose a notion of psychological equilibrium that takes (a) into account, but
does not solve (b), since they allow only initial pre-play beliefs to enter players' util-
ity functions. Hence, GPS's notion of equilibrium cannot encompass belief-dependent
motivations such as those considered here.
Building on the previous considerations, BD propose a notion of Psychological Se-
quential Equilibrium (PSE henceforth), which generalizes the sequential equilibrium
concept of Kreps and Wilson (1982). As stated in BD, 'Kreps and Wilson (1982) argue
that an appropriate definition of equilibrium in extensive form games must refer to as-
sessments, that is, profiles of (behavior) strategies and conditional (first-order) beliefs.
They formulate a definition of sequential equilibrium in two steps: first, they put forward
a consistency condition for assessments, and then they stipulate that an assessment is
a sequential equilibrium if it is consistent and satisfies sequential rationality' (p. 18).
BD follow a similar two-step approach, adding to it a third requirement concerning the
higher-order beliefs that need to be specified in psychological games. This third condi-
tion is analogous to a condition used by GPS. Essentially, it requires that players hold
at each node common, correct beliefs about each other's beliefs. This implies that in