Article

Why punish cheaters? Those who withdraw cooperation enjoy better reputations than punishers, but both are viewed as difficult to exploit

Abstract

Negatively sanctioning cheaters promotes cooperation. But do all negative sanctions have the same consequences? In dyadic cooperation, there are two ways that cooperators can sanction failures to reciprocate: by inflicting punishment or withdrawing cooperation. Although punishment can be costly, it has been proposed that this cost can be recouped if punishers acquire better reputations than non-punishers and, therefore, are favored as cooperation partners. But the evidence so far is mixed, and nothing is known about the reputations of those who sanction by withdrawing cooperation. Here, we test two novel hypotheses about how inflicting negative sanctions affects the reputation of the sanctioner: (i) Those who withdraw cooperation are evaluated more favorably than punishers, and (ii) both sanctioners are viewed as less exploitable than non-sanctioners. Observers (US online convenience sample, n = 246) evaluated withdrawers as more cooperative and less vengeful than punishers and preferred withdrawers as a partner. Sanctioners were also viewed as more difficult to exploit than non-sanctioners, with no difference between punishers and withdrawers. The results were the same when punishment was costly (US college sample, n = 203) with one exception: Costly punishers, who lost their payoffs by punishing, were viewed as more exploitable than withdrawers. Our results indicate that withdrawing cooperation has advantages over punishing: Withdrawers are favored as cooperative partners while gaining a reputation as difficult to exploit. The reputational consequences of the three responses to defectors—punishing, withdrawing cooperation, and not sanctioning at all—were opposite to those predicted by group selection models.
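
To make the contrast between the two sanctions concrete, here is a minimal payoff sketch of one follow-up round with a partner who has just defected. The numbers, and the assumption that a punisher keeps cooperating while punishing, are illustrative only; they are not the payoffs used in either study.

```python
# Illustrative payoffs only; not the experimental payoffs from the study.
COOPERATE_BENEFIT = 3.0   # benefit a partner gets from my cooperation
COOPERATE_COST = 1.0      # what cooperating costs me
PUNISH_COST = 1.0         # what punishing costs me
PUNISH_HARM = 3.0         # payoff reduction inflicted on the punished partner

def followup_round(response: str) -> tuple[float, float]:
    """Return (my payoff, defector's payoff) for the round after a defection."""
    if response == "no_sanction":   # keep cooperating as if nothing happened
        return (-COOPERATE_COST, COOPERATE_BENEFIT)
    if response == "withdraw":      # withhold cooperation: no cost to me, no benefit to them
        return (0.0, 0.0)
    if response == "punish":        # keep cooperating but pay to harm the defector
        return (-COOPERATE_COST - PUNISH_COST, COOPERATE_BENEFIT - PUNISH_HARM)
    raise ValueError(f"unknown response: {response}")

for response in ("no_sanction", "withdraw", "punish"):
    print(response, followup_round(response))
```

Under these toy numbers, withdrawing sanctions the defector at no cost to the sanctioner, while punishing hurts the defector more but only at a cost to the punisher; the studies above ask how observers judge each of these choices.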

... Others, for example [30], indicated that violating group norms can lead to indirect or direct forms of punishment, like confrontation. Indirect forms of punishment, like gossip [32], social exclusion [30], and even withdrawing cooperation [33], are more commonly used when the perceived risk of retaliation is high [30]. This underlines the importance of balancing individual and group reputations to maintain intergroup cooperation without group disintegration. ...
... We anticipate that human participants will imitate these acts of outgroup altruism, and will be rewarded via reciprocation. Nudging outgroup altruism can produce intergroup cooperation, but it may have unintended ingroup consequences, undermining the reputation of individuals now viewed as ingroup traitors or norm violators [1,5], and leading to the withdrawal of cooperation by ingroup members in subsequent interactions [33]. ...
Article
Full-text available
Ingroup favoritism and intergroup discrimination can be mutually reinforcing during social interaction, threatening intergroup cooperation and the sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken the ingroup favoritism cycle of influence. Using novel methods of human-agent interaction via a computer-mediated experimental platform, we introduced outgroup altruism by (i) nonadaptive artificial agents with preprogrammed outgroup altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the prediction of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from the participant’s awareness of the artificial agents. In Study 1, nonadaptive agents prompted ingroup members to withhold cooperation from ingroup agents and reinforced ingroup favoritism among humans. In Study 2, adaptive agents were able to weaken ingroup favoritism over time by maintaining a good reputation with both the ingroup and outgroup members, who perceived agents as being fairer than humans and rated agents as more human than humans. We conclude that a good reputation of the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and improve intergroup cooperation. Thus, reputation is important for designing nudge agents.
... Monitoring and punishing selfishness, however, is often highly costly to the policer, who must invest time and energy in monitoring others and can experience retaliation and reputational damage (Arai et al., 2022; Guala, 2012; Raihani & Bshary, 2019; Wiessner, 2005). By contrast, communicating beliefs in supernatural monitoring and punishment is less costly for at least two reasons. ...
... Second, appealing to supernatural punishment allows the policer to shirk responsibility for reproving the cheater's behavior: "it's god who punishes you, not me!". This makes retaliation by the cheater unjustifiable and allows the policer to avoid the reputational damage that punishers sometimes experience (see Arai et al., 2022; Wiessner, 2005). ...
Preprint
Full-text available
What explains the ubiquity and cultural success of prosocial religions? Leading accounts argue that prosocial religions evolved because they help societies grow and promote group cooperation. Yet recent evidence suggests that prosocial religious beliefs are not limited to large societies and might not have strong effects on cooperation. Here, we propose that prosocial religions, including beliefs in moralizing gods, develop because individuals shape supernatural beliefs to achieve their goals in within-group, strategic interactions. People have a fitness interest in controlling others' cooperation-either to extort benefits from others or to gain reputational benefits for protecting the public good. Moreover, they intuitively infer that other people could be deterred from cheating if they feared supernatural punishment. Thus, people endorse supernatural punishment beliefs to manipulate others into cooperating. Prosocial religions emerge from a dynamic of mutual monitoring, in which each individual, lacking confidence in the cooperativeness of conspecifics, attempts to incentivize their cooperation by endorsing beliefs in supernatural punishment. We show how variants of this incentive structure explain the variety of cultural attractors towards which supernatural punishment converges-including extractive religions that extort benefits from exploited individuals, prosocial religions geared toward mutual benefit, and moralized forms of prosocial religion where belief in moralizing gods is itself a moral duty. We review cross-disciplinary evidence for nine predictions of this account and use it to explain the decline of prosocial religions in modern societies. Prosocial religious beliefs seem endorsed as long as people believe them necessary to ensure other people's cooperation, regardless of their objective effectiveness in doing so.
... effective ways to withhold cooperation with non-cooperators (cf. [198]), while attenuating antisocial punishment, and while facilitating forgiveness where cooperatively advantageous. (See above; cf. ...
Preprint
Full-text available
With an evolutionary approach, the basis of morality can be explained as adaptations to problems of cooperation. With ‘evolution’ taken in a broad sense, AIs that satisfy the conditions for evolution to apply will be subject to the same cooperative evolutionary pressure as biological entities. Here the adaptiveness of increased cooperation as material safety and wealth increase is discussed — for humans, for other societies, and for AIs. Diminishing beneficial returns from increased access to material resources also suggest the possibility that, on the whole, there will be no incentive to, for instance, colonize entire galaxies, thus providing a possible explanation of the Fermi paradox (wondering where everybody is). It is further argued that old societies could engender, and give way to, super-AIs, since it is likely that super-AIs are feasible and fitter. Closing is an aside on effective ways for morals and goals to affect life and society, emphasizing environments, cultures, and laws, and exemplified by how to eat. ‘Diminishing returns’ is defined as less than roots, the inverse of infeasibility. It is also noted that there can be no exponential colonization or reproduction, for mathematical reasons, as each entity takes up a certain amount of space. Appended are an algorithm for quickly colonizing, for example, a galaxy, models of the evolution of cooperation and fairness under diminishing returns, and software for simulating signaling development.
Article
Full-text available
Evolutionary models of dyadic cooperation demonstrate that selection favors different strategies for reciprocity depending on opportunities to choose alternative partners. We propose that selection has favored mechanisms that estimate the extent to which others can switch partners and calibrate motivations to reciprocate and punish accordingly. These estimates should reflect default assumptions about relational mobility: the probability that individuals in one’s social world will have the opportunity to form relationships with new partners. This prior probability can be updated by cues present in the immediate situation one is facing. The resulting estimate of a partner’s outside options should serve as input to motivational systems regulating reciprocity: Higher estimates should down-regulate the use of sanctions to prevent defection by a current partner, and up-regulate efforts to attract better cooperative partners by curating one’s own reputation and monitoring that of others. We tested this hypothesis using a Trust Game with Punishment (TGP), which provides continuous measures of reciprocity, defection, and punishment in response to defection. We measured each participant’s perception of relational mobility in their real-world social ecology and experimentally varied a cue to partner switching. Moreover, the study was conducted in the US (n = 519) and Japan (n = 520): societies that are high versus low in relational mobility. Across conditions and societies, higher perceptions of relational mobility were associated with increased reciprocity and decreased punishment: i.e., those who thought that others have many opportunities to find new partners reciprocated more and punished less. The situational cue to partner switching was detected, but relational mobility in one’s real social world regulated motivations to reciprocate and punish, even in the experimental setting. The current research provides evidence that motivational systems are designed to estimate varying degrees of partner choice in one’s social ecology and regulate reciprocal behaviors accordingly.
Article
Full-text available
When one individual helps another, it benefits the recipient and may also gain a reputation for being cooperative. This may induce others to favour the helper in subsequent interactions, so investing in being seen to help others may be adaptive. The best-known mechanism for this is indirect reciprocity (IR), in which the profit comes from an observer who pays a cost to benefit the original helper. IR has attracted considerable theoretical and empirical interest, but it is not the only way in which cooperative reputations can bring benefits. Signalling theory proposes that paying a cost to benefit others is a strategic investment which benefits the signaller through changing receiver behaviour, in particular by being more likely to choose the signaller as a partner. This reputation-based partner choice can result in competitive helping whereby those who help are favoured as partners. These theories have been confused in the literature. We therefore set out the assumptions, the mechanisms and the predictions of each theory for how developing a cooperative reputation can be adaptive. The benefits of being seen to be cooperative may have been a major driver of sociality, especially in humans. This article is part of the theme issue ‘The language of cooperation: reputation and honest signalling’.
Article
Full-text available
Although punishment can promote cooperative behavior, the evolution of punishment requires benefits that override its cost. One possible source of the benefit of punishing uncooperative behavior is obtaining a positive evaluation. This study compares evaluations of punishers and non-punishers. Two hundred and thirty-four undergraduate students participated in two studies. Study 1 revealed that, in the public goods game, punishers were not positively evaluated, while punishers were positively evaluated in the third-party punishment game. In Study 2, where the non-cooperator was a participant in a public goods game, we manipulated the punisher's participation in the game. The results showed that punishers received no positive evaluations, regardless of their participation in the game, indicating that negative evaluation may not be a reaction toward aggression with retaliatory intentions.
Article
Full-text available
Humans are outstanding in their ability to cooperate with unrelated individuals, and punishment – paying a cost to harm others – is thought to be a key supporting mechanism. According to this view, cooperators punish defectors, who respond by behaving more cooperatively in future interactions. However, a synthesis of the evidence from laboratory and real-world settings casts serious doubts on the assumption that the sole function of punishment is to convert cheating individuals into cooperators. Instead, punishment often prompts retaliation and punishment decisions frequently stem from competitive, rather than deterrent motives. Punishment decisions often reflect the desire to equalise or elevate payoffs relative to targets, rather than the desire to enact revenge for harm received or to deter cheats from reoffending in future. We therefore suggest that punishment also serves a competitive function, where what looks like spiteful behaviour actually allows punishers to equalise or elevate their own payoffs and/or status relative to targets independently of any change in the target's behaviour. Institutions that reduce or remove the possibility that punishers are motivated by relative payoff or status concerns might offer a way to harness these competitive motives and render punishment more effective at restoring cooperation.
Article
Full-text available
In many two-player games, players that invest in punishment finish with lower payoffs than those who abstain from punishing. These results question the effectiveness of punishment at promoting cooperation, especially when retaliation is possible. It has been suggested that these findings may stem from the unrealistic assumption that all players are equal in terms of power. However, a previous empirical study which incorporated power asymmetries into an iterated prisoner's dilemma (IPD) game failed to show that power asymmetries stabilize cooperation when punishment is possible. Instead, players cooperated in response to their partner cooperating, and punishment did not yield any additional increase in tendency to cooperate. Nevertheless, this previous study only allowed an all-or-nothing (rather than a variable) cooperation investment. It is possible that power asymmetries increase the effectiveness of punishment from strong players only when players are able to vary their investment in cooperation. We tested this hypothesis using a modified IPD game which allowed players to vary their investment in cooperation in response to being punished. As in the previous study, punishment from strong players did not increase cooperation under any circumstances. Thus, in two-player games with symmetric strategy sets, punishment does not appear to increase cooperation.
Article
Full-text available
Significance: Prominent theories of shame hold that shame is inherently maladaptive. However, direct tests of the fit between shame and its probable target domain have not previously been conducted. Here we test the alternative hypothesis that shame, although unpleasant (like pain), serves the adaptive function of defending against the social devaluation that results when negative information reaches others—by deterring actions that would lead to more devaluation than benefits, for example. If so, the intensity of shame people feel regarding a given item of negative information should track the devaluation that would happen if that item became known. Indeed, the data indicate a close match between shame intensities and audience devaluation, which suggests that shame is an adaptation.
Article
Full-text available
Third-party intervention, such as when a crowd stops a mugger, is common. Yet it seems irrational because it has real costs but may provide no personal benefits. In a laboratory analogue, the third-party-punishment game, third parties ("punishers") will often spend real money to anonymously punish bad behavior directed at other people. A common explanation is that third-party punishment exists to maintain a cooperative society. We tested a different explanation: Third-party punishment results from a deterrence psychology for defending personal interests. Because humans evolved in small-scale, face-to-face social worlds, the mind infers that mistreatment of a third party predicts later mistreatment of oneself. We showed that when punishers do not have information about how they personally will be treated, they infer that mistreatment of other people predicts mistreatment of themselves, and these inferences predict punishment. But when information about personal mistreatment is available, it drives punishment. This suggests that humans' punitive psychology evolved to defend personal interests.
Article
Full-text available
Humans everywhere cooperate in groups to achieve benefits not attainable by individuals. Individual effort is often not automatically tied to a proportionate share of group benefits. This decoupling allows for free-riding, a strategy that (absent countermeasures) outcompetes cooperation. Empirically and formally, punishment potentially solves the evolutionary puzzle of group cooperation. Nevertheless, standard analyses appear to show that punishment alone is insufficient, because second-order free riders (those who cooperate but do not punish) can be shown to outcompete punishers. Consequently, many have concluded that other processes, such as cultural or genetic group selection, are required. Here, we present a series of agent-based simulations that show that group cooperation sustained by punishment easily evolves by individual selection when you introduce into standard models more biologically plausible assumptions about the social ecology and psychology of ancestral humans. We relax three unrealistic assumptions of past models. First, past models assume all punishers must punish every act of free riding in their group. We instead allow punishment to be probabilistic, meaning punishers can evolve to only punish some free riders some of the time. This drastically lowers the cost of punishment as group size increases. Second, most models unrealistically do not allow punishment to recruit labor; punishment merely reduces the punished agent's fitness. We instead realistically allow punished free riders to cooperate in the future to avoid punishment. Third, past models usually restrict agents to interact in a single group their entire lives. We instead introduce realistic social ecologies in which agents participate in multiple, partially overlapping groups. Because of this, punitive tendencies are more expressed and therefore more exposed to natural selection. These three moves toward greater model realism reveal that punishment and cooperation easily evolve by direct selection, even in sizeable groups.
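
As a rough illustration of the modelling moves described in this abstract, the toy agent-based sketch below lets cooperators punish detected free riders only probabilistically and lets punished free riders switch to cooperation. It is an assumption-laden stand-in, not the authors' simulations, and every parameter value is made up.

```python
import random

def simulate(group_size=20, rounds=50, p_punish=0.2,
             benefit=2.0, contribution_cost=1.0, punish_cost=0.5, fine=1.5,
             seed=0):
    """Toy public goods game with probabilistic peer punishment."""
    rng = random.Random(seed)
    cooperates = [i % 2 == 0 for i in range(group_size)]  # start with half cooperators
    payoffs = [0.0] * group_size
    for _ in range(rounds):
        pot = benefit * sum(cooperates)              # public good produced this round
        share = pot / group_size                     # shared equally by everyone
        for i in range(group_size):
            payoffs[i] += share - (contribution_cost if cooperates[i] else 0.0)
        for i in range(group_size):                  # probabilistic punishment of free riders
            if cooperates[i]:
                continue
            for j in range(group_size):
                if cooperates[j] and rng.random() < p_punish:
                    payoffs[j] -= punish_cost        # the punisher pays a cost
                    payoffs[i] -= fine               # the free rider is fined
                    cooperates[i] = True             # and cooperates from then on
                    break
    return payoffs, sum(cooperates) / group_size

_, coop_rate = simulate()
print(f"final cooperation rate: {coop_rate:.2f}")
```
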
Article
Full-text available
Many researchers have suggested that a sanctioning system is necessary to achieve cooperation in a large society. Sanctioning others, however, is costly, raising the question of what exactly is the adaptive advantage of sanctioning. One possible answer is that sanctioners get reputational benefit. While the reputational benefits accruing to punishers and nonpunishers have been compared in previous studies, in the present study we directly compared the reputational benefit of punisher, rewarder, and non-sanctioner. We conducted a scenario experiment in which participants were asked to play several games, such as the Ultimatum Game, Dictator Game, and Chicken Game with punisher, rewarder, and non-sanctioner. While in previous studies, punishers have gotten better reputational benefit as providers of resources than have non-sanctioners, we found that punishers received worse reputations than did rewarders or non-sanctioners in all games used in our experiment. These results suggest that reputational benefits change according to what kind of sanction individuals can exercise.
Article
Full-text available
Significance: Why do humans cooperate in one-time interactions with strangers? The most prominent explanations for this long-standing puzzle rely on punishment of noncooperators, but differ in the form punishment takes. In models of direct punishment, noncooperators are punished directly at personal cost, whereas indirect reciprocity assumes that punishment is indirect by withholding rewards. To resolve the persistent debate on which model better explains cooperation, we conduct the first field experiment, to our knowledge, on direct and indirect punishment among strangers in real-life interactions. We show that many people punish noncooperators directly but prefer punishing indirectly by withholding help when possible. The occurrence of direct and indirect punishment in the field shows that both are key to understanding the evolution of human cooperation.
Article
Full-text available
Organizations are composed of stable, predominantly cooperative interactions or n-person exchanges. Humans have been engaging in n-person exchanges for a great enough period of evolutionary time that we appear to have evolved a distinct constellation of species-typical mechanisms specialized to solve the adaptive problems posed by this form of social interaction. These mechanisms appear to have been evolutionarily elaborated out of the cognitive infrastructure that initially evolved for dyadic exchange. Key adaptive problems that these mechanisms are designed to solve include coordination among individuals, and defense against exploitation by free riders. Multi-individual cooperation could not have been maintained over evolutionary time if free riders reliably benefited more than contributors to collective enterprises, and so outcompeted them. As a result, humans evolved mechanisms that implement an aversion to exploitation by free riding, and a strategy of conditional cooperation, supplemented by punitive sentiment towards free riders. Because of the design of these mechanisms, how free riding is treated is a central determinant of the survival and health of cooperative organizations. The mapping of the evolved psychology of n-party exchange cooperation may contribute to the construction of a principled theoretical foundation for the understanding of human behavior in organizations.
Article
Full-text available
Discusses the limitations of the rational-structural and goal/expectation approaches to the problem of public goods (PGs), presents a new approach—the structural goal/expectation approach—intended to overcome these limitations, and tested 4 predictions derived from the new approach in a study of 48 4-person groups of undergraduates. According to this new approach, members who have realized the undesirable consequence of free riding and the importance of mutual cooperation will cooperate to establish a sanctioning system that assures other members' cooperation instead of trying to induce other members into mutual cooperation directly through cooperative actions. One important condition for their voluntary cooperation in the establishment of a sanctioning system is their realization that voluntarily based cooperation is impossible. In the study, each member of the group was given resource money that they could keep for themselves or contribute to the provision of a PG. The increase in the personal benefit due to one's contribution was reduced to zero, and Ss were not allowed to see each other in person. Some groups were given opportunities to develop a negative sanctioning system that punished the least cooperative group member. The level of punishment depended on the total amount of contribution made by the group members to the sanctioning system, which was separate from the contribution to the original PG. Results support the approach's predictions.
Article
Full-text available
A model is presented to account for the natural selection of what is termed reciprocally altruistic behavior. The model shows how selection can operate against the cheater (non-reciprocator) in the system. Three instances of altruistic behavior are discussed, the evolution of which the model can explain: (1) behavior involved in cleaning symbioses; (2) warning cries in birds; and (3) human reciprocal altruism. Regarding human reciprocal altruism, it is shown that the details of the psychological system that regulates this altruism can be explained by the model. Specifically, friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system. Each individual human is seen as possessing altruistic and cheating tendencies, the expression of which is sensitive to developmental variables that were selected to set the tendencies at a balance appropriate to the local social and ecological environment.
Article
Full-text available
While empirical evidence highlights the importance of punishment for cooperation in collective action, it remains disputed how responsible sanctions targeted predominantly at uncooperative subjects can evolve. Punishment is costly; in order to spread, it typically requires local interactions, voluntary participation, or rewards. Moreover, theory and experiments indicate that some subjects abuse sanctioning opportunities by engaging in antisocial punishment (which harms cooperators), spiteful acts (harming everyone) or revenge (as a response to being punished). These arguments have led to the conclusion that punishment is maladaptive. Here, we use evolutionary game theory to show that this conclusion is premature: If interactions are non-anonymous, cooperation and punishment evolve even if initially rare, and sanctions are directed towards non-cooperators only. Thus, our willingness to punish free riders is ultimately a selfish decision rather than an altruistic act; punishment serves as a warning, showing that one is not willing to accept unfair treatments.
Article
Full-text available
The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses – the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.
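
Because the step-up procedure summarized above is short, a sketch of it follows; the same correction is available as statsmodels.stats.multitest.multipletests with method="fdr_bh", and the example p-values are made up.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Mark which hypotheses are rejected while controlling the FDR at level q
    (assumes independent test statistics, as in the article)."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                       # ranks of the p-values, ascending
    thresholds = q * np.arange(1, m + 1) / m    # BH critical values i/m * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest rank i with p_(i) <= i/m * q
        reject[order[: k + 1]] = True           # reject everything up to that rank
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
# -> rejects the two smallest p-values at q = 0.05
```
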
Article
Full-text available
For collective action to evolve and be maintained by selection, the mind must be equipped with mechanisms designed to identify free riders: individuals who do not contribute to a collective project but still benefit from it. Once identified, free riders must be either punished or excluded from future collective actions. But what criteria does the mind use to categorize someone as a free rider? An evolutionary analysis suggests that failure to contribute is not sufficient. Failure to contribute can occur by intention or accident, but the adaptive threat is posed by those who are motivated to benefit themselves at the expense of cooperators. In 6 experiments, we show that only individuals with exploitive intentions were categorized as free riders, even when holding their actual level of contribution constant (Studies 1 and 2). In contrast to an evolutionary model, rational choice and reinforcement theory suggest that different contribution levels (leading to different payoffs for their cooperative partners) should be key. When intentions were held constant, however, differences in contribution level were not used to categorize individuals as free riders, although some categorization occurred along a competence dimension (Study 3). Free rider categorization was not due to general tendencies to categorize (Study 4) or to mechanisms that track a broader class of intentional moral violations (Studies 5A and 5B). The results reveal the operation of an evolved concept with features tailored for solving the collective action problems faced by ancestral hunter-gatherers.
Article
Full-text available
Corruption in the public sector erodes tax compliance and leads to higher tax evasion. Moreover, corrupt public officials abuse their public power to extort bribes from the private agents. In both types of interaction with the public sector, the private agents are bound to face uncertainty with respect to their disposable incomes. To analyse effects of this uncertainty, a stochastic dynamic growth model with the public sector is examined. It is shown that deterministic excessive red tape and corruption deteriorate the growth potential through income redistribution and public sector inefficiencies. Most importantly, it is demonstrated that the increase in corruption via higher uncertainty exerts adverse effects on capital accumulation, thus leading to lower growth rates.
Article
Full-text available
Are humans too generous? The discovery that subjects choose to incur costs to allocate benefits to others in anonymous, one-shot economic games has posed an unsolved challenge to models of economic and evolutionary rationality. Using agent-based simulations, we show that such generosity is the necessary byproduct of selection on decision systems for regulating dyadic reciprocity under conditions of uncertainty. In deciding whether to engage in dyadic reciprocity, these systems must balance (i) the costs of mistaking a one-shot interaction for a repeated interaction (hence, risking a single chance of being exploited) with (ii) the far greater costs of mistaking a repeated interaction for a one-shot interaction (thereby precluding benefits from multiple future cooperative interactions). This asymmetry builds organisms naturally selected to cooperate even when exposed to cues that they are in one-shot interactions.
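
The asymmetry this abstract turns on can be shown with back-of-the-envelope arithmetic; all numbers below are illustrative assumptions, not values from the paper.

```python
# Expected cost of each type of mistake when deciding whether to reciprocate.
exploitation_loss = 1.0      # cost of cooperating once with a one-shot defector
per_round_gain = 2.0         # net gain per round of sustained mutual cooperation
expected_future_rounds = 10  # assumed length of a repeated relationship
p_repeated = 0.3             # believed probability the interaction is repeated

cost_of_mistake_1 = (1 - p_repeated) * exploitation_loss                  # treat one-shot as repeated
cost_of_mistake_2 = p_repeated * per_round_gain * expected_future_rounds  # treat repeated as one-shot

print(f"expected cost of cooperating when it is really one-shot: {cost_of_mistake_1:.1f}")
print(f"expected cost of defecting when it is really repeated:  {cost_of_mistake_2:.1f}")
# The second mistake dominates even at modest p_repeated, so a cost-minimizing
# decision rule cooperates despite cues suggesting a one-shot interaction.
```
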
Article
Full-text available
The issue of evolution of punitive behavior has been a focus of recent studies of human cooperation. One of the topics for discussion in this literature is whether punishers receive benefits, on which no clear conclusion has been reached yet. We conducted a scenario experiment in which we manipulated game types and reward types, and found that punishers were chosen more frequently than non-punishers as providers of rewards, and yet, they were chosen less frequently than non-punishers as recipients of rewards. Adaptive advantages of punishers are suggested to be in their likelihood of being chosen as providers of resources, rather than as recipients of reward.
Article
Full-text available
Because mutually beneficial cooperation may unravel unless most members of a group contribute, people often gang up on free-riders, punishing them when this is cost-effective in sustaining cooperation. In contrast, current models of the evolution of cooperation assume that punishment is uncoordinated and unconditional. These models have difficulty explaining the evolutionary emergence of punishment because rare unconditional punishers bear substantial costs and hence are eliminated. Moreover, in human behavioral experiments in which punishment is uncoordinated, the sum of costs to punishers and their targets often exceeds the benefits of the increased cooperation that results from the punishment of free-riders. As a result, cooperation sustained by punishment may actually reduce the average payoffs of group members in comparison with groups in which punishment of free-riders is not an option. Here, we present a model of coordinated punishment that is calibrated for ancestral human conditions and captures a further aspect of reality missing from both models and experiments: The total cost of punishing a free-rider declines as the number of punishers increases. We show that punishment can proliferate when rare, and when it does, it enhances group-average payoffs.
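
A minimal sketch of the added cost structure: the functional form below is my own toy assumption, not the authors' calibrated model, but it captures the claim that the total cost of punishing a free rider shrinks as punishers coordinate, so each punisher's share shrinks even faster.

```python
def total_punishment_cost(n_punishers: int, solo_cost: float = 4.0,
                          decay: float = 0.5) -> float:
    """Assumed declining total cost of punishing one free rider: solo_cost / n**decay."""
    return solo_cost / n_punishers ** decay

def per_punisher_cost(n_punishers: int) -> float:
    """Cost borne by each punisher when the total is shared equally."""
    return total_punishment_cost(n_punishers) / n_punishers

for n in (1, 2, 4, 8):
    print(f"{n} punisher(s): total cost {total_punishment_cost(n):.2f}, "
          f"each pays {per_punisher_cost(n):.2f}")
```
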
Article
Full-text available
In a series of experiments, we demonstrate that certain players of an economic game reject unfair offers even when this behavior increases rather than decreases inequity. A substantial proportion (30-40%, compared with 60-70% in the standard ultimatum game) of those who responded rejected unfair offers even when rejection reduced only their own earnings to 0, while not affecting the earnings of the person who proposed the unfair split (in an impunity game). Furthermore, even when the responders were not able to communicate their anger to the proposers by rejecting unfair offers in a private impunity game, a similar rate of rejection was observed. The rejection of unfair offers that increases inequity cannot be explained by the social preference for inequity aversion or reciprocity; however, it does provide support for the model of emotion as a commitment device. In this view, emotions such as anger or moral disgust lead people to disregard the immediate consequences of their behavior, committing them to behave consistently to preserve integrity and maintain a reputation over time as someone who is reliably committed to this behavior.
Article
Full-text available
This paper provides strong evidence challenging the self-interest assumption that dominates the behavioral sciences and much evolutionary thinking. The evidence indicates that many people have a tendency to voluntarily cooperate, if treated fairly, and to punish non-cooperators. We call this behavioral propensity ‘strong reciprocity’ and show empirically that it can lead to almost universal cooperation in circumstances in which purely self-interested behavior would cause a complete breakdown of cooperation. In addition, we show that people are willing to punish those who behaved unfairly towards a third person or who defected in a Prisoner’s Dilemma game with a third person. This suggests that strong reciprocity is a powerful device for the enforcement of social norms such as food-sharing norms or collective action norms. Strong reciprocity cannot be rationalized as an adaptive trait by the leading evolutionary theories of human cooperation, i.e., by kin selection theory, reciprocal altruism theory, indirect reciprocity theory and costly signaling theory. However, multi-level selection theories and theories of cultural evolution are consistent with strong reciprocity.
Article
Full-text available
Cooperation among nonrelatives can be puzzling because cooperation often involves incurring costs to confer benefits on unrelated others. Punishment of noncooperators can sustain otherwise fragile cooperation, but the provision of punishment suffers from a "second-order" free-riding problem because nonpunishers can free ride on the benefits from costly punishment provided by others. One suggested solution to this problem is second-order punishment of nonpunishers; more generally, the threat or promise of higher order sanctions might maintain the lower order sanctions that enforce cooperation in collective action problems. Here the authors report on 3 experiments testing people's willingness to provide second-order sanctions by having participants play a cooperative game with opportunities to punish and reward each other. The authors found that people supported those who rewarded cooperators either by rewarding them or by punishing nonrewarders, but people did not support those who punished noncooperators: they did not reward punishers or punish nonpunishers. Furthermore, people did not approve of punishers more than they did nonpunishers, even when nonpunishers were clearly unwilling to use sanctions to support cooperation. The results suggest that people will much more readily support positive sanctions than they will support negative sanctions.
Article
Full-text available
Although positive reciprocity (reciprocal altruism) has been a focus of interest in evolutionary biology, negative reciprocity (retaliatory infliction of fitness reduction) has been largely ignored. In social animals, retaliatory aggression is common, individuals often punish other group members that infringe their interests, and punishment can cause subordinates to desist from behaviour likely to reduce the fitness of dominant animals. Punishing strategies are used to establish and maintain dominance relationships, to discourage parasites and cheats, to discipline offspring or prospective sexual partners and to maintain cooperative behaviour.
Article
Full-text available
The present study concerns connections between personality traits, the behaviors by which they are manifest, and the behaviors by which they are judged. One hundred forty undergraduate Ss were videotaped in 2 social interactions, and 62 behaviors were coded from each tape. Separately, personality descriptions were obtained from knowledgeable informants. A pair of "strangers" viewed each videotape then also provided personality descriptions. Other Ss rated the diagnosticity of the 62 behaviors for each of the Big Five personality traits. The diagnosticity ratings predicted how behavioral cues would be used by strangers and were closely related to their actual relevance as indexed by their correlations with informants' judgments. These findings speak to the general accuracy of personality judgments, the development of methods to improve accuracy, and the value of reintegrating traditionally separate concerns of personality and social psychology.
Article
Full-text available
Human cooperation is an evolutionary puzzle. Unlike other creatures, people frequently cooperate with genetically unrelated strangers, often in large groups, with people they will never meet again, and when reputation gains are small or absent. These patterns of cooperation cannot be explained by the nepotistic motives associated with the evolutionary theory of kin selection and the selfish motives associated with signalling theory or the theory of reciprocal altruism. Here we show experimentally that the altruistic punishment of defectors is a key motive for the explanation of cooperation. Altruistic punishment means that individuals punish, although the punishment is costly for them and yields no material gain. We show that cooperation flourishes if altruistic punishment is possible, and breaks down if it is ruled out. The evidence indicates that negative emotions towards defectors are the proximate mechanism behind altruistic punishment. These results suggest that future study of the evolution of human cooperation should include a strong focus on explaining altruistic punishment.
Article
Full-text available
Memory evolved to supply useful, timely information to the organism's decision-making systems. Therefore, decision rules, multiple memory systems, and the search engines that link them should have coevolved to mesh in a coadapted, functionally interlocking way. This adaptationist perspective suggested the scope hypothesis: When a generalization is retrieved from semantic memory, episodic memories that are inconsistent with it should be retrieved in tandem to place boundary conditions on the scope of the generalization. Using a priming paradigm and a decision task involving person memory, the authors tested and confirmed this hypothesis. The results support the view that priming is an evolved adaptation. They further show that dissociations between memory systems are not, and should not be, absolute: Independence exists for some tasks but not others.
Article
Full-text available
The existence of cooperation and social order among genetically unrelated individuals is a fundamental problem in the behavioural sciences. The prevailing approaches in biology and economics view cooperation exclusively as self-interested behaviour: unrelated individuals cooperate only if they face economic rewards or sanctions rendering cooperation a self-interested choice. Whether economic incentives are perceived as just or legitimate does not matter in these theories. Fairness-based altruism is, however, a powerful source of human cooperation. Here we show experimentally that the prevailing self-interest approach has serious shortcomings because it overlooks negative effects of sanctions on human altruism. Sanctions revealing selfish or greedy intentions destroy altruistic cooperation almost completely, whereas sanctions perceived as fair leave altruism intact. These findings challenge proximate and ultimate theories of human cooperation that neglect the distinction between fair and unfair sanctions, and they are probably relevant in all domains in which voluntary compliance matters: in relations between spouses, in the education of children, in business relations and organizations as well as in markets.
Article
Full-text available
Models of large-scale human cooperation take two forms. 'Indirect reciprocity' occurs when individuals help others in order to uphold a reputation and so be included in future cooperation. In 'collective action', individuals engage in costly behaviour that benefits the group as a whole. Although the evolution of indirect reciprocity is theoretically plausible, there is no consensus about how collective action evolves. Evidence suggests that punishing free riders can maintain cooperation, but why individuals should engage in costly punishment is unclear. Solutions to this 'second-order free rider problem' include meta-punishment, mutation, conformism, signalling and group-selection. The threat of exclusion from indirect reciprocity can sustain collective action in the laboratory. Here, we show that such exclusion is evolutionarily stable, providing an incentive to engage in costly cooperation, while avoiding the second-order free rider problem because punishers can withhold help from free riders without damaging their reputations. However, we also show that such a strategy cannot invade a population in which indirect reciprocity is not linked to collective action, thus leaving unexplained how collective action arises.
Article
Humans regularly intervene in others' conflicts as third-parties. This has been studied using the third-party punishment game: A third-party can pay a cost to punish another player (the “dictator”) who treated someone else poorly. Because the game is anonymous and one-shot, punishers are thought to have no strategic reasons to intervene. Nonetheless, punishers often punish dictators who treat others poorly. This result is central to a controversy over human social evolution: Did third-party punishment evolve to maintain group norms or to deter others from acting against one's interests? This paper provides a critical test. We manipulate the ingroup/outgroup composition of the players while simultaneously measuring the inferences punishers make about how the dictator would treat them personally. The group norm predictions were falsified, as outgroup defectors were punished most harshly, not ingroup defectors (predicted by ingroup fairness norms) and not outgroup members generally (predicted by norms of parochialism). The deterrence predictions were validated: Punishers punished the most when they inferred that they would be treated the worst by dictators, especially when better treatment would be expected given ingroup/outgroup composition.
Article
Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal (the opportunity to help directly) attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one's trustworthiness.
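
The signalling logic summarized above follows the standard separating condition for a costly signal. The inequality below is that generic condition, not the paper's exact formalism: let c_T and c_U be the net costs of punishing for trustworthy and untrustworthy types, and b the reputational benefit of being trusted.

```latex
% Generic costly-signalling separating condition (illustrative notation,
% not the authors' model): trustworthy types punish and untrustworthy
% types do not whenever
\[
  c_T \;<\; b \;<\; c_U ,
\]
% so observers can treat third-party punishment as an honest signal of
% trustworthiness, because only types for whom trustworthiness already
% pays find the signal worth sending.
```
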
Article
Two factors that promote cooperation are partner choice and punishment of defectors, but which option do people actually prefer to use? Punishment is predicted to be more common when organisms cannot escape bad partners, whereas partner choice is useful when one can switch to a better partner. Here we use a modified iterated Prisoner’s Dilemma to examine people’s cooperation and punishment when partner choice was possible and when it was not. The results show that cooperation was higher when people could leave bad partners versus when they could not. When they could not switch partners, people preferred to actively punish defectors rather than withdraw. When they could switch, punishment and switching were equally preferred. Contrary to our predictions, punishment was higher when switching was possible, possibly because cooperators could then desert the defector they had just punished. Punishment did not increase defectors’ subsequent cooperation. Our results support the importance of partner choice in promoting human cooperation and in changing the prevalence of punishment.
Article
Peer-punishment is an important determinant of cooperation in human groups. It has been suggested that, at the proximate level of analysis, punitive preferences can explain why humans incur costs to punish their deviant peers. How punitive preferences could have evolved in humans is still not entirely understood. A possible explanation at the ultimate level of analysis comes from signaling theory. It has been argued that the punishment of defectors can be a type-separating signal of the punisher’s cooperative intent. As a result, punishers are selected more often as interaction partners in social exchange and are partly compensated for the costs they incur when punishing defectors. A similar argument has been made with regard to acts of generosity. In a laboratory experiment, we investigate whether the punishment of a selfish division of money in a dictator game is a sign of trustworthiness and whether punishers are more trustworthy interaction partners in a trust game than non-punishers. We distinguish between second-party and third-party punishment and compare punitive acts with acts of generosity as signs of trustworthiness. We find that punishers are not more trustworthy than non-punishers and that punishers are not trusted more than non-punishers, both in the second-party and in the third-party punishment condition. To the contrary, second-party punishers are trusted less than their non-punishing counterparts. However, participants who choose a generous division of money are more trustworthy and are trusted more than participants who choose a selfish division or participants about whom no information is available. Our results suggest that, unlike for punitive acts, the signaling benefits of generosity are to be gained in social exchange.
Article
Punishers can benefit from a tough reputation, where future partners cooperate because they fear repercussions. Alternatively, punishers might receive help from bystanders if their act is perceived as just and other-regarding. Third-party punishment of selfish individuals arguably fits these conditions but it is not known whether third-party punishers are rewarded for their investments. Here, we show that third-party punishers are indeed rewarded by uninvolved bystanders. Third-parties were presented with the outcome of a dictator game where the dictator was either selfish or fair and were allocated to one of three treatments where they could choose to do nothing or (i) punish the dictator, (ii) help the receiver or (iii) choose between punishment and helping, respectively. A fourth player ('bystander') then saw the third-party's decision and could choose to reward the third-party or not. Third-parties that punished selfish dictators were more likely to be rewarded by bystanders than third-parties who took no action in response to a selfish dictator. However, helpful third-parties were rewarded even more than third-party punishers. These results suggest that punishment could in principle evolve via indirect reciprocity but also provide insights into why individuals typically prefer to invest in positive actions.
Article
Punishment is a potential mechanism to stabilise cooperation between self-regarding agents. Theoretical and empirical studies on the importance of a punitive reputation have yielded conflicting results. Here, we propose that a variety of factors interact to explain why a punitive reputation is sometimes beneficial and sometimes harmful. We predict that benefits are most likely to occur in forced play scenarios and in situations where punishment is the only means to convey an individual's cooperative intent and willingness to uphold fairness norms. By contrast, if partner choice is possible and an individual's cooperative intent can be inferred directly, then individuals with a nonpunishing cooperative reputation should typically be preferred over punishing cooperators.
Article
G*Power (Erdfelder, Faul, & Buchner, 1996) was designed as a general stand-alone power analysis program for statistical tests commonly used in social and behavioral research. G*Power 3 is a major extension of, and improvement over, the previous versions. It runs on widely used computer platforms (i.e., Windows XP, Windows Vista, and Mac OS X 10.4) and covers many different statistical tests of the t, F, and χ² test families. In addition, it includes power analyses for z tests and some exact tests. G*Power 3 provides improved effect size calculators and graphic options, supports both distribution-based and design-based input modes, and offers all types of power analyses in which users might be interested. Like its predecessors, G*Power 3 is free.
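
For readers without G*Power, the same kind of a-priori calculation can be sketched with statsmodels, shown below as an alternative tool; the effect size, alpha, and power values are arbitrary examples rather than numbers from this article.

```python
from statsmodels.stats.power import TTestIndPower

# Required sample size per group for an independent-samples t test with a
# medium effect (d = 0.5), alpha = .05, and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"n per group: {n_per_group:.1f}")   # about 64 per group
```
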
Article
When organisms can choose whom to interact with, it can create a biological market where individuals need to outbid their rivals for access to cooperative relationships. Each individual's market value is determined by the benefits it can confer (and is willing to confer) upon others, which selects for tendencies to actively confer benefits on others. In this article, I introduce the basics of biological markets and how they relate to traditional models of cooperation, and then elucidate their impact on human cooperation, especially in the tasks of choosing partners, competing over partners, and keeping partners. Since “generosity” is necessarily rated relative to one's rivals, this can result in tendencies to compete over relative generosity, commit to partners, help when help is unnecessary, give strategically, and attack or suppress others' helpfulness. Biological markets explain and make novel predictions about why we desire to associate with particular individuals and how we attract them, and are therefore a useful incorporation into models of cooperation.
Article
The threat of punishment usually promotes cooperation. However, punishing itself is costly, rare in nonhuman animals, and humans who punish often finish with low payoffs in economic experiments. The evolution of punishment has therefore been unclear. Recent theoretical developments suggest that punishment has evolved in the context of reputation games. We tested this idea in a simple helping game with observers and with punishment and punishment reputation (experimentally controlling for other possible reputational effects). We show that punishers fully compensate their costs as they receive help more often. The more likely defection is punished within a group, the higher the level of within-group cooperation. These beneficial effects perish if the punishment reputation is removed. We conclude that reputation is key to the evolution of punishment.
Article
Two studies were conducted to test reputation-based accounts of altruism which predict that the more people sacrifice to help others, the greater their ensuing benefits. We tested this prediction by varying the cost invested in altruistic behavior, here modeled as costly sanctioning of unfair behavior. Confirming this prediction, it was found that only altruists who invested most in the punishment of unfairness were preferred as partners and were transferred more money in a subsequent trust game. This implies that the benefits of behaving altruistically depend upon how much one is willing to pay. It is discussed that these results fit both an indirect reciprocity and a costly signaling framework.
Article
Many studies show that people act cooperatively and are willing to punish free riders (i.e., people who are less cooperative than others). However, nonpunishers benefit when free riders are punished, making punishment a group-beneficial act. Presented here are four studies investigating whether punishers gain social benefits from punishing. Undergraduate participants played public goods games (PGGs) (cooperative group games involving money) in which there were free riders, and in which they were given the opportunity to impose monetary penalties on free riders. Participants rated punishers as being more trustworthy, group focused, and worthy of respect than nonpunishers. In dyadic trust games following PGGs, punishers did not receive monetary benefits from punishing free riders in a single-round PGG, but did benefit monetarily from punishing free riders in iterated PGGs. Punishment that was not directed at free riders brought no monetary benefits, suggesting that people distinguish between justified and unjustified punishment and only respond to punishment with enhanced trust when the punishment is justified.
Article
Over the past two decades, an abundance of evidence has shown that individuals typically rely on semantic summary knowledge when making trait judgments about self and others (for reviews, see Klein, 2004; Klein, Robertson, Gangi, & Loftus, 2008). But why form trait summaries if one can consult the original episodes on which the summary was based? Conversely, why retain episodes after having abstracted a summary representation from them? Are there functional reasons to have trait information represented in two different, independently retrievable databases? Evolution does not produce new phenotypic systems that are complex and functionally organized by chance. Such systems acquire their functional organization because they solved some evolutionarily recurrent problems for the organism. In this article we explore some of the functional properties of episodic memory. Specifically, in a series of studies we demonstrate that maintaining a database of episodic memories enables its owner to reevaluate an individual's past behavior in light of new information, sometimes drastically changing one's impression in the process. We conclude that some of the most important functions of episodic memory have to do with its role in human social interaction.
Article
The paper, which ends with an informal discussion, provides a game-theoretic analysis of the asymmetric “war of attrition” with incomplete information. This is a contest in which animals adopt different roles, such as “owner” and “intruder” in a territorial conflict, and in which the winner is the individual prepared to persist longer. Incomplete information here refers to mistakes in the identification of roles. The idea of Parker & Rubenstein (1981) is worked out mathematically, confirming that there exists only a single evolutionarily stable strategy (ESS) for the model with a continuum of possible levels of persistence and no discontinuities in the accumulation of cost during attrition. The ESS prescribes settling the conflict according to “who has more to gain or less to pay for persistence”. The only evolutionarily stable convention is thus to grant access to the resource to the player whose role is favoured with respect to payoffs. By contrast, it was shown earlier (Hammerstein, 1981) for various asymmetric versions of the “Hawks–Doves” model that an ESS can exist which appears paradoxical with respect to payoffs. The nature of this contrast is analyzed further by introducing elements of discreteness into the asymmetric war of attrition. It turns out that several conditions must be satisfied for an alternative ESS, one not of the simple commonsense type above, to be possible. First, a decision to persist (or escalate) further in a contest must typically commit a contestant to fighting on for a full “round” before it can give up without danger. Second, such a “discontinuity” must occur at a level of persistence where the contest is still cheap, and, finally, errors in the identification of roles must be rare.
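The “commonsense” ESS described above can be stated compactly in generic textbook notation (resource values V_A, V_B and cost rates of persistence c_A, c_B for the two roles; these symbols are illustrative, not the paper's own):

```latex
% Payoff-relevant asymmetry in the asymmetric war of attrition (generic notation):
% the role with the larger value-to-cost ratio persists; the other quits at once.
\[
\frac{V_A}{c_A} > \frac{V_B}{c_B}
\quad\Longrightarrow\quad
\text{the contestant in role } A \text{ persists and wins; role } B \text{ retreats immediately.}
\]
```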
Article
Humans actively share resources with one another to a much greater degree than do other great apes, and much human sharing is governed by social norms of fairness and equity. When in receipt of a windfall of resources, human children begin showing tendencies towards equitable distribution with others at five to seven years of age. Arguably, however, the primordial situation for human sharing of resources is that which follows cooperative activities such as collaborative foraging, when several individuals must share the spoils of their joint efforts. Here we show that children of around three years of age share with others much more equitably in collaborative activities than they do in either windfall or parallel-work situations. By contrast, one of humans' two nearest primate relatives, chimpanzees (Pan troglodytes), 'share' (make food available to another individual) just as often whether they have collaborated with them or not. This species difference raises the possibility that humans' tendency to distribute resources equitably may have its evolutionary roots in the sharing of spoils after collaborative efforts.
Article
Egalitarian behavior is considered to be a species-typical component of human cooperation. Human adults tend to share resources equally, even if they have the opportunity to keep a larger portion for themselves. Recent experiments have suggested that this tendency emerges fairly late in human ontogeny, not before 6 or 7 years of age. Here we show that 3-year-old children share mostly equally with a peer after they have worked together actively to obtain rewards in a collaboration task, even when those rewards could easily be monopolized. These findings contrast with previous findings from a similar experiment with chimpanzees, who tended to monopolize resources whenever they could. The potentially species-unique tendency of humans to share equally emerges early in ontogeny, perhaps originating in collaborative interactions among peers.
Article
Simes (1986) proposed a modified Bonferroni procedure for testing an overall hypothesis that is the combination of n individual hypotheses. In contrast to the classical Bonferroni procedure, it is not obvious how statements about the individual hypotheses are to be made under this procedure. In the present paper, a multiple test procedure allowing statements on individual hypotheses is proposed. It is based on the principle of closed test procedures (Marcus, Peritz & Gabriel, 1976) and controls the multiple level α.
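As a concrete illustration of the building block involved, here is a minimal Python sketch of the Simes (1986) test of a global null hypothesis; it is not the stagewise multiple test procedure proposed in the paper itself, only the modified Bonferroni test it builds on.

```python
def simes_global_test(pvalues, alpha=0.05):
    """Simes (1986) test of the global (intersection) null hypothesis.

    Rejects H_1 ∩ ... ∩ H_n at level alpha if at least one ordered
    p-value satisfies p_(i) <= i * alpha / n.
    """
    p_sorted = sorted(pvalues)
    n = len(p_sorted)
    return any(p <= (i + 1) * alpha / n for i, p in enumerate(p_sorted))

# Example: the smallest of three p-values meets its threshold
# (0.01 <= 1 * 0.05 / 3), so the global null is rejected.
print(simes_global_test([0.01, 0.04, 0.30]))  # True
```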
Article
The dictator game represents a workhorse within experimental economics, frequently used to test theory and to provide insights into the prevalence of social preferences. This study explores more closely the dictator game and the literature's preferred interpretation of its meaning by collecting data from nearly 200 dictators across treatments that varied the action set and the origin of the endowment. The action set variation includes choices in which the dictator can “take” money from the other player. Empirical results question the received interpretation of dictator game giving: many fewer agents are willing to transfer money when the action set includes taking. Yet a result that holds regardless of action set composition is that agents do not ubiquitously choose the most selfish outcome. The results have implications for theoretical models of social preferences, highlight that “institutions” matter a great deal, and point to useful avenues for future research using simple dictator games and relevant manipulations.
Article
Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease.
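A minimal sketch of the kind of model described, assuming the conventional tournament payoffs T=5, R=3, P=1, S=0 (these specific numbers and the function names are illustrative, not taken from the article):

```python
# Iterated Prisoner's Dilemma sketch with conventional payoffs
# (T=5, R=3, P=1, S=0); 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs for two strategies over repeated play."""
    history_a, history_b = [], []  # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (9, 14)
```

Reciprocal cooperators do well against each other while limiting their losses against defectors, which is the sense in which cooperation based on reciprocity can thrive and resist invasion once established.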
Article
We document the widespread existence of antisocial punishment, that is, the sanctioning of people who behave prosocially. Our evidence comes from public goods experiments that we conducted in 16 comparable participant pools around the world. However, there is a huge cross-societal variation. Some participant pools punished the high contributors as much as they punished the low contributors, whereas in others people only punished low contributors. In some participant pools, antisocial punishment was strong enough to remove the cooperation-enhancing effect of punishment. We also show that weak norms of civic cooperation and the weakness of the rule of law in a country are significant predictors of antisocial punishment. Our results show that punishment opportunities are socially beneficial only if complemented by strong social norms of cooperation.
Article
In constructing improved models of human behavior, both experimental and behavioral economists have increasingly turned to evolutionary theory for insights into human psychology and preferences. Unfortunately, existing genetic evolutionary approaches can explain neither the degree of prosociality (altruism and altruistic punishment) observed in humans nor the patterns of variation in these behaviors across behavioral domains and social groups. Ongoing misunderstandings about why certain models work, what they predict, and what place “group selection” has in evolutionary theory have hampered the use of insights from biology and anthropology. This paper clarifies some of these issues and proposes an approach to the evolution of prosociality rooted in the interaction between cultural and genetic transmission. I explain how, in contrast to non-cultural species, the details of our evolved cultural learning capacities (e.g., imitative abilities) create the conditions for the cultural evolution of prosociality. By producing multiple behavioral equilibria, including group-beneficial equilibria, cultural evolution endogenously generates a mechanism of equilibrium selection that can favor prosociality. Finally, in the novel social environments left in the wake of these cultural evolutionary processes, natural selection is likely to favor prosocial genes that would not be expected under a purely genetic approach.