Figure - available from: Royal Society Open Science
Results of study 3. (a) Frequency of stealing, by whether the player's energy-point level was currently above 200 (light) or below 200 (dark), and by condition (threshold versus no threshold). Error bars represent one standard error of the proportion. Analysis collapses across levels of punishment severity. (b) Distribution of trust ratings prior to choosing to cooperate or to work alone. Analysis collapses across levels of punishment severity and threshold condition. (c) Probability of stealing for players with fewer than 200 points, by condition and punishment severity (hatched: harsh; open: lenient). Error bars represent one standard error of the proportion.
Source publication
People facing material deprivation are more likely to turn to acquisitive crime. It is not clear why it makes sense for them to do so, given that apprehension and punishment may make their situation even worse. Recent theory suggests that people should be more willing to steal if they are on the wrong side of a ‘desperation threshold’; that is, a l...
Citations
... In Pietras et al. (2006), the less risky option was to share resources with another participant. In Radkani et al. (2023), the riskiest option was to steal points from other participants, with the possibility of being caught and fined. Thus, the results were a test of the DTM's ability to explain desperation-driven crime, or the breakdown of cooperation. ...
... The DTM thus predicts an increased probability of turning to acquisitive crime when resources are extremely low (desperation prediction). There is some criminological evidence compatible with this prediction (which was tested experimentally by Radkani et al. (2023), see 3.1). McCarthy & Hagan (1992) found that the best predictor of theft among homeless Canadian youth was hunger. ...
The impacts of poverty and material scarcity on human decision making appear paradoxical. One set of findings associates poverty with risk aversion, whilst another set associates it with risk taking. We present an idealized general model, the ‘desperation threshold model’ (DTM), that explains how both these accounts can be correct. The DTM assumes a utility function with two features: a threshold or ‘cliff’, a point where utility declines steeply with a small loss of resources because basic needs can no longer be met; and a ‘rock bottom’, a point where utility is not made any worse by further loss of resources because basic needs are not being met anyway. Just above the threshold, people’s main concern is not falling below, and they are predicted to avoid risk. Below the threshold, they have little left to lose, their most important concern is jumping above, and they are predicted to take risks that would otherwise be avoided. Versions of the DTM have been proposed under various names across biology, anthropology, economics and psychology. We review a broad range of relevant empirical evidence from a variety of societal contexts. Though the model primarily concerns individual decision making, it connects to a range of population-scale and societal issues such as: the consequences of economic inequality; the deterrence of crime; and the optimal design and behavioural consequences of the welfare state. We discuss a number of interpretative issues and offer an agenda for future DTM research that bridges disciplines.
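To make the model's two features concrete, below is a minimal sketch, assuming an illustrative piecewise utility function: a gently rising segment above the threshold, a steep 'cliff' at the threshold itself, and a flat 'rock bottom' below which further losses no longer matter. The threshold value, payoffs, and functional form are assumptions chosen for illustration, not the specification used in any of the studies discussed here.

```python
THRESHOLD = 200    # resources needed to meet basic needs (illustrative value)
ROCK_BOTTOM = 50   # below this, further losses no longer reduce utility (illustrative)

def utility(resources):
    """Piecewise utility with a 'cliff' at THRESHOLD and a flat 'rock bottom'."""
    if resources >= THRESHOLD:
        return 100 + 0.1 * (resources - THRESHOLD)   # needs met: high and gently rising
    if resources >= ROCK_BOTTOM:
        return 0.1 * (resources - ROCK_BOTTOM)       # needs unmet: far below the cliff
    return 0.0                                       # rock bottom: flat, nothing left to lose

def expected_utility(current, payoffs):
    """Average utility over an option's equally likely payoffs."""
    return sum(utility(current + delta) for delta in payoffs) / len(payoffs)

SAFE = [0, 0]        # keep current resources for certain
RISKY = [+60, -60]   # same expected payoff as SAFE, but higher variance

for current in (230, 170):  # just above versus just below the threshold
    prefers_risk = expected_utility(current, RISKY) > expected_utility(current, SAFE)
    print(f"resources={current}: prefers the risky option -> {prefers_risk}")
```

Run as written, the comparison comes out risk averse just above the threshold (the risky option could drop the decision maker over the cliff) and risk seeking just below it (only the risky option can clear it), which is the qualitative pattern the DTM predicts.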
... The desperation threshold model has been tested in lab experiments [33,39–44]. Participants (students or online participants from North America or the United Kingdom) typically play a game that includes an artificial threshold, such as a minimum number of points needed to obtain a monetary payoff at the end of the game. ...
... We also proposed an explanation for why poverty could lead to either vulnerability or desperation: the 'desperation threshold', a hypothesis that is analogous to other theories in the social sciences [32–35,37,38,53]. Our study provides a new source of evidence for the desperation threshold model. Until now, tests of the model have mainly been conducted either (i) in a lab, where poverty (or more precisely, 'need') is artificially induced [33,39–44], or (ii) in populations where starvation is a realistic possibility [26–28,47,48]. Our study suggests that a formally equivalent mechanism can apply in the real world to more affluent populations, and that 'desperate' risk taking can happen when starvation is unlikely. ...
In situations of poverty, do people take more or less risk? One hypothesis states that poverty makes people avoid risk, because they cannot buffer against losses, while another states that poverty makes people take risks, because they have little to lose. Each hypothesis has some previous empirical support. Here, we test the ‘desperation threshold’ model, which integrates both hypotheses. We assume that people attempt to stay above a critical level of resources, representing their ‘basic needs’. Just above this threshold, people have much to lose and should avoid risk. Below, they have little to lose and should take risks. We conducted preregistered tests of the model using survey data from 472 adults in France and the UK. The predictor variables were subjective and objective measures of current resources. The outcome, risk taking, was measured using a series of hypothetical gambles. Risk taking followed a V-shape against subjective resources, first decreasing and then increasing again as resources decreased. This pattern was not observed for the objective resource measure. We also found that risk taking was more variable among people with fewer resources. Our findings synthesize the split literature on poverty and risk taking, with implications for policy and interventions.
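To make 'V-shaped' concrete in statistical terms, here is a minimal sketch of a two-segment ('broken-stick') regression fitted by grid-searching the breakpoint, run on synthetic data. The data-generating process, functional form, and breakpoint grid are assumptions made for illustration; they are not the study's preregistered analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: risk taking dips near a subjective-resource
# threshold (placed at 0 on a standardized scale) and rises on either side.
resources = rng.uniform(-3, 3, size=500)
risk = 0.6 * np.abs(resources) + rng.normal(0, 0.3, size=500)

def fit_v(x, y, breakpoints):
    """Grid-search a two-segment linear fit: y ~ 1 + x + max(x - b, 0).

    Returns (residual sum of squares, breakpoint, coefficients) for the
    candidate breakpoint with the best fit.
    """
    best = None
    for b in breakpoints:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - b, 0)])
        coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, b, coef)
    return best

rss, b_hat, (intercept, slope_below, slope_change) = fit_v(
    resources, risk, np.linspace(-2, 2, 81)
)
print(f"estimated breakpoint: {b_hat:.2f}")
print(f"slope below breakpoint: {slope_below:.2f}, above: {slope_below + slope_change:.2f}")
```

A clearly negative slope below the estimated breakpoint together with a clearly positive slope above it is the signature of the V-shape described above; for the objective resource measure, the paper reports that this pattern did not appear.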
... Existing research on the relationship between subjective experiences of economic scarcity and human morality appears to be split between two theoretical paradigms, one predicting mainly negative effects on moral judgment and decision-making and the other largely arguing for the reverse. Concerning research suggesting negative effects, a selection of studies has found that resource-deprived individuals act more greedily [18,26], are more inclined to engage in dishonest behaviors to obtain resources [27–30], exhibit less prosocial intentions [31,32], and tend to donate less of their personal income to charitable giving [33,34]. These findings may reinforce destructive but prevalent stereotypes and folk beliefs depicting individuals with low SES as irresponsible, dishonest, and "milking the system" (see ref. 35). ...
Individuals can experience a lack of economic resources compared to others, which we refer to as subjective experiences of economic scarcity. While such experiences have been shown to shift cognitive focus, attention, and decision-making, their association with human morality remains debated. We conduct a comprehensive investigation of the relationship between subjective experiences of economic scarcity, as indexed by low subjective socioeconomic status at the individual level, and income inequality at the national level, and various self-reported measures linked to morality. In a pre-registered study, we analyze data from a large, cross-national survey (N = 50,396 across 67 countries), allowing us to address limitations related to cross-cultural generalizability and measurement validity in prior research. Our findings demonstrate that low subjective socioeconomic status at the individual level, and income inequality at the national level, are associated with higher levels of moral identity, higher morality-as-cooperation, a larger moral circle, and increased prosocial intentions. These results appear robust to several advanced control analyses. Finally, exploratory analyses indicate that observed income inequality at the national level is not a statistically significant moderator of the associations between subjective socioeconomic status and the included measures of morality. These findings have theoretical and practical implications for understanding human morality under experiences of resource scarcity.
... The mutual policing theory explains the decline of the moralistic aspect of religion. People in rich, modern environments exhibit especially high levels of social trust (De Courson & Nettle, 2021; Nettle, 2015; Petersen & Aarøe, 2015; Ortiz-Ospina, 2017), spontaneous prosociality towards strangers (Holland et al., 2012; Nettle, 2015; Silva & Mace, 2014; Zwirner & Raihani, 2020), and low rates of crime, violence, and homicide (De Courson & Nettle, 2021; Radkani et al., 2023). In this context, we argue that people are less inclined to believe that the prospect of supernatural punishment is necessary to ensure other people's cooperation. ...
What explains the ubiquity and cultural success of prosocial religions? Leading accounts argue that prosocial religions evolved because they help societies grow and promote group cooperation. Yet recent evidence suggests that prosocial religious beliefs are not limited to large societies and might not have strong effects on cooperation. Here, we propose that prosocial religions, including beliefs in moralizing gods, develop because individuals shape supernatural beliefs to achieve their goals in within-group, strategic interactions. People have a fitness interest in controlling others' cooperation: either to extort benefits from others or to gain reputational benefits for protecting the public good. Moreover, they intuitively infer that other people could be deterred from cheating if they feared supernatural punishment. Thus, people endorse supernatural punishment beliefs to manipulate others into cooperating. Prosocial religions emerge from a dynamic of mutual monitoring, in which each individual, lacking confidence in the cooperativeness of conspecifics, attempts to incentivize their cooperation by endorsing beliefs in supernatural punishment. We show how variants of this incentive structure explain the variety of cultural attractors towards which supernatural punishment converges, including extractive religions that extort benefits from exploited individuals, prosocial religions geared toward mutual benefit, and moralized forms of prosocial religion where belief in moralizing gods is itself a moral duty. We review cross-disciplinary evidence for nine predictions of this account and use it to explain the decline of prosocial religions in modern societies. Prosocial religious beliefs seem endorsed as long as people believe them necessary to ensure other people's cooperation, regardless of their objective effectiveness in doing so.
In situations of poverty, do people take more or less risk? Some theories state that poverty makes people 'vulnerable': they cannot buffer against losses, and therefore avoid risk. Yet other theories state the opposite: poverty makes people 'desperate': they have little left to lose, and therefore take risks. Each theory has some support: most studies find a negative association between resources and risk taking, but risky behaviors such as crime are more common in deprived populations. Here, we test the 'desperation threshold' model, which integrates both hypotheses. The model assumes that people attempt to stay above a critical level of resources, representing their 'basic needs'. Just above the threshold, people have too much to lose, and should avoid risk. Below it, they have little to lose, and should take risks. We conducted preregistered tests of this prediction using longitudinal data from 472 adults over the age of 25 in France and the UK, who completed a survey once a month for 12 months. We examined whether risk taking first increased and then decreased as a function of objective and subjective financial resources. Results supported this prediction for subjective resources, but not for objective resources. Next, we tested whether risk taking varies more among people who have fewer resources. We find strong evidence for both more extreme risk avoidance and more extreme risk taking in this group. We rule out alternative explanations related to question comprehension and measurement error, and discuss implications of our findings for welfare states, poverty, and crime.
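The 'more variable' part of this finding can be illustrated with a variance-equality test. As a minimal sketch, and not a reproduction of the paper's analysis, the snippet below applies Levene's test (via scipy.stats.levene) to synthetic data in which a low-resource group has the same mean risk taking as a high-resource group but a wider spread.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)

# Synthetic illustration only: equal mean risk taking in both groups, but the
# low-resource group shows both more extreme risk avoidance and more extreme
# risk taking, i.e. greater dispersion.
low_resources = rng.normal(loc=5.0, scale=2.5, size=300)
high_resources = rng.normal(loc=5.0, scale=1.0, size=300)

# Levene's test compares dispersion, not means: a small p-value indicates
# that the two groups differ in how variable their risk taking is.
stat, p_value = levene(low_resources, high_resources, center="median")
print(f"Levene W = {stat:.2f}, p = {p_value:.3g}")
print(f"variance (low resources):  {low_resources.var(ddof=1):.2f}")
print(f"variance (high resources): {high_resources.var(ddof=1):.2f}")
```

Any dispersion test would serve here; Levene's test is used only because it is robust to non-normality when centered on the median.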