Preprint

A Meta-Analytic Review of the Validity of the Tangram Help/Hurt Task (THHT)


Abstract

The Tangram Help/Hurt Task (THHT) allows participants to help another participant win a prize (by assigning them easy tangrams), to hurt another participant by preventing them from winning the prize (by assigning them difficult tangrams), or to do neither (by assigning them medium tangrams) in offline or online studies. Consistent with calls for continued evidence supporting psychological measurement, we conducted a meta-analytic review of the THHT that included 52 independent studies involving 11,060 participants. THHT scores were associated with helping and hurting outcomes in theoretically appropriate ways: they were associated not only with short-term (experimental manipulations, state measures) and long-term (trait measures) helping and hurting outcomes, but also with helping and harming intentions. We discuss the strengths and limitations of the THHT relative to other laboratory measures of helping and hurting, identify unanswered questions about the task, and offer suggestions for its best use.
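For readers unfamiliar with how the task is typically quantified, the sketch below illustrates one common scoring convention reported in the THHT literature: the participant assigns 11 tangrams to a partner from pools of easy, medium, and hard puzzles, the number of easy puzzles assigned serves as a help score, and the number of hard puzzles serves as a hurt score. This is a minimal Python illustration; the function name and the composite (net) score are assumptions added here, and individual studies in the review may score the task differently.

```python
from collections import Counter
from typing import Iterable


def score_thht(assignments: Iterable[str]) -> dict:
    """Score one participant's Tangram Help/Hurt Task assignments.

    `assignments` holds the difficulty labels ('easy', 'medium', 'hard')
    of the tangrams the participant gave to their partner (typically 11).
    Returns help, hurt, and an illustrative net score (help - hurt).
    """
    counts = Counter(assignments)
    help_score = counts.get("easy", 0)   # easy puzzles make winning the prize easier
    hurt_score = counts.get("hard", 0)   # hard puzzles make winning the prize harder
    return {
        "help": help_score,
        "hurt": hurt_score,
        "net": help_score - hurt_score,  # one possible composite; not used in every study
    }


# Example: a mostly helpful participant
print(score_thht(["easy"] * 7 + ["medium"] * 3 + ["hard"]))
# {'help': 7, 'hurt': 1, 'net': 6}
```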

References
Article
Full-text available
The field of prosociality is flourishing, yet researchers disagree about how to define prosocial behavior and often neglect defining it altogether. In this review, we provide an overview of the breadth of definitions of prosocial behavior and the related concept of altruism. Common to almost all definitions is an emphasis on the promotion of welfare in agents other than the actor. However, definitions of the two concepts differ in terms of whether they emphasize intentions and motives, costs and benefits, and the societal context. To reduce the conceptual ambiguity surrounding the study of prosociality, we urge researchers to provide definitions, to use operationalizations that match their definitions, and to acknowledge the diversity of prosocial behavior.
Article
Full-text available
The psychological mechanisms that lead terrorists to make costly sacrifices for their ideological convictions are of great theoretical and practical importance. We investigate two key components of this process: (1) the feeling of admiration toward ingroup members making costly self-sacrifices for their ideological group, and (2) identity fusion with religion. Data collected in 27 Spanish prisons reveal that jihadists’ admiration toward members of radical Islamist groups amplifies their willingness to engage in costly sacrifices for religion in prison. This effect is produced because admiration toward radical Islamist groups has a binding effect, increasing identity fusion with religion. Five additional experiments provide causal and behavioural evidence for this model. By showing that admiration for ingroup members increases identity fusion, which in turn makes individuals prone to engage in costly pro-group behaviours, we provide insights into the emotional machineries of radicalization and open new avenues for prevention strategies to strengthen public safety.
Article
Full-text available
Researchers run experiments to test theories, search for and document phenomena, develop theories, or advise policymakers. When testing theories, experiments must be internally valid but do not have to be externally valid. However, when experiments are used to search for and document phenomena, develop theories, or advise policymakers, external validity matters. Conflating these goals and failing to recognize their tensions with validity concerns can lead to problems with theorizing. Psychological scientists should be aware of the mutual-internal-validity problem, long recognized by experimental economists. When phenomena elicited by experiments are used to develop theories that, in turn, influence the design of theory-testing experiments, experiments and theories can become wedded to each other and lose touch with reality. They capture and explain phenomena within but not beyond the laboratory. We highlight how triangulation can address validity problems by helping experiments and theories make contact with ideas from other disciplines and the real world.
Article
Full-text available
A purpose-made video game was used to measure the response time and moral alignment of in-game moral decisions made by 115 undergraduate students. Overall, moral decisions took between 4 and 6 seconds and were mostly pro-social. Previous gameplay, in-game, and post-game experiences predicted in-game moral alignment. Real-life moral salience was not related to in-game decision-making. The implications of these results are discussed in the context of the demands of video games and in-game moral decision-making models.
Article
Full-text available
Aggressive behaviour serves many useful social functions, yet can also have damaging consequences. In line with evidence showing adolescent development in social cognitive abilities, we hypothesised that the use of aggression would become more sophisticated with age. We investigated adolescent aggression toward peers using an experimental, hypothetical aggression paradigm, the Hot Sauce Paradigm, in a school‐based social network setting. Participants (N = 162 aged 11–17, 98 male) indicated which strength of imaginary hot sauce they would allocate to each of their classmates. A Social Network Questionnaire quantified participants’ perceived dyadic social tie strength with each classmate, and the incidence of mutual or unilateral dyadic real‐world aggression (e.g. teasing). Participants allocated weaker hot sauce to peers with whom they reported strong, positive social ties and an absence of self‐reported unilateral real‐world aggression. With increasing cross‐sectional age, there was a decrease in the impact of social tie strength and an increase in the extent to which hot sauce allocation was predicted by self‐reported mutual real‐world aggression. This pattern of findings is consistent with young (vs. late) adolescent use of experimental, hypothetical Hot Sauce aggression to reflect real‐world animosity, while late adolescents’ behaviour is more subtle. These findings extend our understanding of the dyadic social context of adolescent aggressive behaviour using a novel experimental aggression paradigm.
Article
Full-text available
When two or more individuals with different values, interests, and experiences work together, interpersonal conflicts are inevitable. Conflicts, in turn, can hinder or delay successful task completion. However, certain types of conflicts may also have beneficial effects. The literature differentiates between task conflicts (TCs) and relationship conflicts (RCs). Whether TCs are detrimental or beneficial for performance largely depends on the simultaneous occurrence of RCs. However, the reasons for the differential effects of TCs with and without RCs remain largely unknown. Therefore, we explored the underlying fine-grained mechanisms of the conflict-performance relationship in two studies. We used event-sampling methodology to track employees’ conflicts in the field (study 1) and we examined conflicts in a controlled laboratory setting (study 2). We found that RCs during TCs made participants feel disrespected and thereby increased negative affect. Further, RCs during TCs impaired knowledge gain, which decreased positive affect. In turn, low positive affect explained why TCs with RCs led to poorer performance than TCs without RCs. However, neither of the two studies supported the assumption that high negative affect from RCs during TCs—by itself—had adverse effects on performance. Our results confirm previous findings of the destructive character of RCs during TCs and additionally provide new insights into the nature and complexity of workplace conflicts by introducing positive affect as a missing piece of the puzzle.
Article
Full-text available
There are active movements to connect children with nature to improve their well-being. However, most of the research on children and nature has focused on cognitive benefits or used non-experimental designs. In a preliminary study, we examined the potential benefits of a 4-hour nature experience on children's mood, pro-sociality, and attitudes toward nature. Eighty students from an urban Canadian elementary school were recruited to participate in field trips to a nature school and an aviation/space museum. Children reported more positive and negative emotions, a closer connection to nature, and a greater willingness to protect nature when at the nature school. We also found indications that children were more pro-social at the nature school. Although further research is needed to replicate these findings with additional populations/environments, this study suggests that children largely benefit from spending time in nature.
Article
Full-text available
Studies show that rejection increases negative affect and aggression and decreases helping behavior toward the excluder. Less is known about emotions and behavior after being rejected by a friend in favor of someone else. In two experimental studies (N = 101 and N = 169), we tested the predictions that rejection would feel worse in a close relationship but would result in less aggression and more reconnecting behavior, especially when the reasons for rejection were unknown. The results of Study 1 showed that, as expected, among acquaintances, more aggression was observed only after comparative rejection, whereas among strangers, aggression was also observed after rejection with no stated reason. Negative feelings toward a new acquaintance were only marginally stronger than those toward a stranger in Study 1, but Study 2 confirmed that rejection by a best friend, and especially comparative rejection by a friend, felt worse than the other conditions. Study 2 also showed that reconnecting behavior was more likely to dominate over aggressive behavior between people in close relationships than between strangers. The results are discussed mostly in light of the multimotive model of rejection.
Article
Full-text available
The “transportability” of laboratory findings to other instances than the original implementation entails the robustness of rates of observed behaviors and estimated treatment effects to changes in the specific research setting and in the sample under study. In four studies based on incentivized games of fairness, trust, and reciprocity, we evaluate (1) the sensitivity of laboratory results to locally recruited student-subject pools, (2) the comparability of behavioral data collected online and, under varying anonymity conditions, in the laboratory, (3) the generalizability of student-based results to the broader population, and (4) with a replication at Amazon Mechanical Turk, the stability of laboratory results across research contexts. For the class of laboratory designs using incentivized games as measurement instruments of prosocial behavior, we find that rates of behavior and the exact behavioral differences between decision situations do not transport beyond specific implementations. Most clearly, data obtained from standard participant pools differ significantly from those from the broader population. This undermines the use of empirically motivated laboratory studies to establish descriptive parameters of human behavior. Directions of the behavioral differences between games, in contrast, are remarkably robust to changes in samples and settings. Moreover, we find no evidence of either anonymity effects or mode effects biasing laboratory measurement. These results underscore the capacity of laboratory experiments to establish generalizable causal effects in theory-driven designs.
Article
Full-text available
Research shows that interpersonal rejection increases aggression and decreases helping toward the rejecter. Based on the assumptions of the evolutionary approach, it was hypothesized that aggression would be higher and helping would be lower after rejection by a same-sex rather than an opposite-sex other. Moreover, it was predicted that the effect for aggression would be stronger in men, and the effect for helping would be stronger in women. Participants (N = 100) were rejected or accepted by a same- or opposite-sex person, and aggression and helping were later measured using the Tangram Help/Hurt Task. The major finding was that same-sex rejection resulted in more aggression and less helping than opposite-sex rejection, but the rejectee’s sex did not moderate the effect. Instead, men were more aggressive and less helpful regardless of condition. Consistent with sexual exchange theory, more negative behavior after same-sex rejection could be interpreted as reflecting heightened intrasexual competitive tendencies, whereas less negative behavior after opposite-sex rejection could result from the motivation to exchange resources between men and women.
Article
Full-text available
Aggression is often defined as behavior that is carried out with the intent to harm an individual who is believed to want to avoid being harmed (e.g., Baron & Richardson, 1994). Accordingly, social scientists have developed several tasks to study aggression in laboratory settings; tasks that we refer to as “lab-based aggression paradigms.” However, because of the legal, ethical, and practical issues inherent in provoking aggression within the confines of a laboratory setting, it is feasible to study only very mildly harmful aggression. The current conceptual review examines the criteria that are necessary to study aggression in a laboratory setting, discusses the strengths and weaknesses of several new and/or commonly used lab-based aggression paradigms, and offers recommendations for the future of lab-based aggression research. Collectively, we hope the current discussion helps researchers describe the contributions and limitations of lab-based aggression research and, ultimately, helps improve its informativeness.
Article
Full-text available
A large meta-analysis by Anderson et al. (2010) found that violent video games increased aggressive thoughts, angry feelings, physiological arousal, and aggressive behavior and decreased empathic feelings and helping behavior. Hilgard, Engelhardt, and Rouder (2017) reanalyzed the data of Anderson et al. (2010) using newer publication bias methods (i.e., precision-effect test, precision-effect estimate with standard error, p-uniform, p-curve). Based on their reanalysis, Hilgard, Engelhardt, and Rouder concluded that experimental studies examining the effect of violent video games on aggressive affect and aggressive behavior may be contaminated by publication bias, and that these effects are very small when corrected for publication bias. However, the newer methods Hilgard, Engelhardt, and Rouder used may not be the most appropriate. Because publication bias is a potential problem in any scientific domain, we used a comprehensive sensitivity analysis battery to examine the influence of publication bias and outliers on the experimental effects reported by Anderson et al. We used best meta-analytic practices and a triangulation approach to locate the likely position of the true mean effect size estimates. Using this methodological approach, we found that the combined adverse effects of outliers and publication bias were less severe than what Hilgard, Engelhardt, and Rouder found for publication bias alone. Moreover, the obtained mean effects using recommended methods and practices were not very small in size. The results of the methods used by Hilgard, Engelhardt, and Rouder tended not to converge well with the results of the methods we used, indicating potentially poor performance. We therefore conclude that violent video game effects should remain a societal concern.
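To make the publication-bias adjustments named above more concrete, the sketch below implements a generic version of the conditional PET-PEESE procedure (precision-effect test followed, if warranted, by the precision-effect estimate with standard error). It assumes only that per-study effect sizes and sampling variances are available; it is not the analysis code of either Anderson et al. or Hilgard, Engelhardt, and Rouder, and the alpha threshold for switching estimators is a conventional choice, not one taken from either paper.

```python
import numpy as np
import statsmodels.api as sm


def pet_peese(yi, vi, alpha=0.10):
    """Generic conditional PET-PEESE sketch.

    yi: per-study effect size estimates; vi: their sampling variances.
    Returns the bias-adjusted mean effect (the regression intercept).
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    se = np.sqrt(vi)
    # PET: weighted regression of effects on standard errors
    pet = sm.WLS(yi, sm.add_constant(se), weights=1.0 / vi).fit()
    if pet.pvalues[0] < alpha:
        # Conventional rule: if the PET intercept differs from zero,
        # switch to PEESE, which regresses effects on variances instead.
        peese = sm.WLS(yi, sm.add_constant(vi), weights=1.0 / vi).fit()
        return {"estimator": "PEESE", "adjusted_mean": float(peese.params[0])}
    return {"estimator": "PET", "adjusted_mean": float(pet.params[0])}


# Example with made-up standardized mean differences and variances
print(pet_peese(yi=[0.52, 0.41, 0.30, 0.25, 0.18, 0.12, 0.10, 0.05],
                vi=[0.20, 0.15, 0.10, 0.08, 0.05, 0.04, 0.03, 0.02]))
```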
Article
Full-text available
Awe is a feeling of wonder and amazement in response to experiencing something so vast that it transcends one’s current frames of reference. Across three experiments (N = 557), we tested the inhibitory effect of awe on aggression. We used a narrative recall task paradigm (Studies 1 and 2) and a video (Study 3) to induce awe. After inducing awe, we first examined participants’ emotions and sense of “small self,” and then the manifestation of aggressiveness in a Shooting Game (Study 1), the Tangram Help/Hurt Task (Studies 2 and 3), and an Aggression-IAT (Study 3), respectively. Results indicated that awe reduced aggression and increased prosociality and a sense of small self relative to neutral affect and the positive emotions of happiness and amusement. Mediation analyses provided mixed support for a sense of small self mediating the effect of awe on aggression and prosociality.
Article
Full-text available
The positive role of secure attachment in reducing intergroup biases has been suggested in prior studies. We extend this work by testing the effects of secure attachment primes on negative emotions and aggressive behaviors toward outgroup members across four experiments. Results from Studies 1A and 1B reveal that secure attachment prime, relative to neutral, can reduce negative outgroup emotions. In addition, Studies 1B and 3 results rule out positive mood increase as an alternative explanation for the observed effects. Results from Studies 2 and 3 reveal that secure attachment primes can reduce aggressive behavior toward an outgroup member. The effect of secure attachment primes on outgroup harm was found to be fully mediated by negative emotions in Studies 2 and 3. An interaction between secure attachment primes and ingroup identification in Study 2 indicated that the positive effects of secure attachment in reducing outgroup harm may be especially beneficial for highly identified ingroup members. © 2015 by the Society for Personality and Social Psychology, Inc.
Article
Full-text available
The external validity of artificial "trivial" laboratory settings is examined. Past views emphasizing the generalizability of relations among conceptual variables are reviewed and affirmed. One major implication of typical challenges to the external validity of laboratory research is tested with aggression research: if laboratory research is low in external validity, then laboratory studies should fail to detect relations among variables that are correlated with aggression in "real-world" studies. Meta-analysis was used to examine 5 situational variables (provocation, violent media, alcohol, anonymity, hot temperature) and 3 individual difference variables (sex, Type A personality, trait aggressiveness) in real-world and laboratory aggression studies. Results strongly supported the external validity of trivial laboratory studies. Advice is given on how scholars might handle occasional discrepancies between laboratory and real-world findings.
Article
Full-text available
The Competitive Reaction Time Task (CRTT) is the measure of aggressive behavior most commonly used in laboratory research. However, the test has been criticized for issues in standardization because there are many different test procedures and at least 13 variants to calculate a score for aggressive behavior. We compared the different published analyses of the CRTT using data from 3 different studies to scrutinize whether it would yield the same results. The comparisons revealed large differences in significance levels and effect sizes between analysis procedures, suggesting that the unstandardized use and analysis of the CRTT have substantial impacts on the results obtained, as well as their interpretations. Based on the outcome of our comparisons, we provide suggestions on how to address some of the issues associated with the CRTT, as well as a guideline for researchers studying aggressive behavior in the laboratory. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
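As a concrete illustration of why unstandardized quantification matters, the sketch below computes several plausible aggression scores from the same simulated CRTT data (noise intensity and duration settings across 25 trials). The variants shown are hypothetical simplifications for illustration only; they are not the 13 published scoring strategies catalogued in the article. Even on identical raw data, such variants differ in scale and can rank-order participants differently, which is the standardization problem the article documents.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 25
intensity = rng.integers(1, 11, size=n_trials)  # simulated noise intensity settings, 1-10
duration = rng.integers(1, 11, size=n_trials)   # simulated noise duration settings, 1-10

# A few illustrative quantification strategies (hypothetical simplifications):
scores = {
    "mean_intensity_all_trials": float(intensity.mean()),
    "first_trial_intensity": float(intensity[0]),          # unprovoked trial only
    "mean_intensity_x_duration": float((intensity * duration).mean()),
    "count_high_settings": int((intensity >= 8).sum()),    # count of extreme settings
}
print(scores)
```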
Article
Full-text available
Anderson, Lindsay, and Bushman (1999) compared effect sizes from laboratory and field studies of 38 research topics compiled in 21 meta-analyses and concluded that psychological laboratories produced externally valid results. A replication and extension of Anderson et al. (1999) using 217 lab-field comparisons from 82 meta-analyses found that the external validity of laboratory research differed considerably by psychological subfield, research topic, and effect size. Laboratory results from industrial-organizational psychology most reliably predicted field results, effects found in social psychology laboratories most frequently changed signs in the field (from positive to negative or vice versa), and large laboratory effects were more reliably replicated in the field than medium and small laboratory effects. © The Author(s) 2012.
Article
Full-text available
Funnel plots, and tests for funnel plot asymmetry, have been widely used to examine bias in the results of meta-analyses. Funnel plot asymmetry should not be equated with publication bias, because it has a number of other possible causes: reporting biases, poor methodological quality in smaller studies, true heterogeneity, artefacts of the statistics being plotted, and chance can all produce asymmetry. This article describes how to interpret funnel plot asymmetry, recommends appropriate tests, and explains the implications for the choice of meta-analysis model. Key recommendations include the following: tests for funnel plot asymmetry should not be used when there are fewer than 10 studies or when all studies are of similar size, because test power is too low to distinguish chance from real asymmetry; test results should be interpreted alongside visual inspection of the (ideally contour-enhanced) funnel plot and in the context of the field's susceptibility to reporting biases; the testing strategy should be specified in advance; for continuous outcomes the Egger regression test may be used, whereas for dichotomous outcomes expressed as odds ratios the tests proposed by Harbord et al., Peters et al., or Rücker et al. are preferred, depending on the degree of between-study heterogeneity. When funnel plot asymmetry is present together with between-study heterogeneity, fixed and random effects estimates of the intervention effect should be compared, because random effects estimates are pulled toward the findings of smaller studies and are not always conservative; the trials of intravenous magnesium after myocardial infarction provide an extreme example. Extrapolating a funnel plot regression line toward minimum bias can yield bias-adjusted estimates, but because asymmetry due to bias is difficult to distinguish from asymmetry due to heterogeneity or chance, the broad applicability of such approaches is uncertain. These recommendations apply to meta-analyses of randomised trials; their applicability to meta-analyses of epidemiological or diagnostic test studies is unclear.
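As a concrete illustration of the regression-based asymmetry test recommended above for continuous outcomes, the sketch below implements an Egger-style test in Python: each study's standardized effect is regressed on its precision, and an intercept that departs from zero signals small-study effects. The effect estimates in the example are made up, and the sketch is not a substitute for the full set of recommendations (for example, visual inspection of the plot and the minimum of roughly 10 studies).

```python
import numpy as np
import statsmodels.api as sm


def egger_test(effects, std_errors):
    """Egger-style regression test for funnel plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    an intercept that differs from zero suggests small-study effects.
    """
    y = np.asarray(effects, float)
    se = np.asarray(std_errors, float)
    z = y / se                 # standard normal deviates
    precision = 1.0 / se
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return {"intercept": float(fit.params[0]),
            "se": float(fit.bse[0]),
            "p_value": float(fit.pvalues[0])}  # two-sided test of the intercept


# Example with made-up effect estimates (log odds ratios) and standard errors
effects = [0.61, 0.48, 0.35, 0.30, 0.22, 0.15, 0.12, 0.10, 0.08, 0.05]
ses = [0.45, 0.40, 0.32, 0.28, 0.22, 0.18, 0.15, 0.12, 0.10, 0.08]
print(egger_test(effects, ses))
```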
Article
Full-text available
Much work has focused on how reappraisal is related to emotions, but not behaviors. Two experiments advanced aggression theory by (a) testing how cognitive and attributional forms of reappraisal are related to aggressive affect and behavior, (b) testing variables that theoretically mediate the relation between attributional reappraisal and aggressive behavior, (c) testing the moderating influences of cognitive and attributional reappraisal on aggressive behavior, and (d) developing and testing an intervention aimed at reducing vengeance through reappraisal training. Study 1 used an essay writing task in a 3 (feedback: provocation, no feedback, praise) × 2 (mitigating information: present, absent) experimental design. Provoked participants who did not receive mitigating information were significantly more aggressive than provoked participants who received mitigating information. State vengeance was a significant mediator. Study 2 examined an experimental intervention on vengeance over a 16-week semester. Intervention participants who had the largest increase in reappraisal displayed the greatest decrease in vengeance. Overall, these findings suggest that reappraisal reduces vengeance and aggressive behavior.
Article
Full-text available
The metafor package provides functions for conducting meta-analyses in R. The package includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models. Meta-regression analyses with continuous and categorical moderators can be conducted in this way. Functions for the Mantel-Haenszel and Peto's one-step methods for meta-analyses of 2 x 2 table data are also available. Finally, the package provides various plot functions (for example, for forest, funnel, and radial plots) and functions for assessing model fit, obtaining case diagnostics, and testing for publication bias.
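metafor itself is an R package, so no R code is shown here; as a language-neutral illustration of the random-effects model that its rma() function fits, the sketch below implements the DerSimonian-Laird estimator (metafor's method = "DL") in Python, along with Cochran's Q and I². This is a minimal sketch with made-up inputs, not a replacement for the package.

```python
import numpy as np


def dersimonian_laird(yi, vi):
    """Fixed- and random-effects pooled estimates via DerSimonian-Laird.

    yi: per-study effect size estimates; vi: their sampling variances.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                  # inverse-variance (fixed-effect) weights
    mu_fe = np.sum(w * yi) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (yi - mu_fe) ** 2)             # Cochran's Q
    df = len(yi) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # method-of-moments between-study variance
    w_re = 1.0 / (vi + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * yi) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {"mu_fixed": mu_fe, "mu_random": mu_re, "se_random": se_re,
            "tau2": tau2, "I2_percent": i2}


# Example with made-up effect sizes and variances
print(dersimonian_laird(yi=[0.30, 0.10, 0.45, 0.05, 0.25],
                        vi=[0.02, 0.01, 0.05, 0.01, 0.03]))
```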
Article
Full-text available
Although dozens of studies have documented a relationship between violent video games and aggressive behaviors, very little attention has been paid to potential effects of prosocial games. Theoretically, games in which game characters help and support each other in nonviolent ways should increase both short-term and long-term prosocial behaviors. We report three studies conducted in three countries with three age groups to test this hypothesis. In the correlational study, Singaporean middle-school students who played more prosocial games behaved more prosocially. In the two longitudinal samples of Japanese children and adolescents, prosocial game play predicted later increases in prosocial behavior. In the experimental study, U.S. undergraduates randomly assigned to play prosocial games behaved more prosocially toward another student. These similar results across different methodologies, ages, and cultures provide robust evidence of a prosocial game content effect, and they provide support for the General Learning Model.
Article
Authoritarianism has been the subject of scientific inquiry for nearly a century, yet the vast majority of authoritarianism research has focused on right-wing authoritarianism. In the present studies, we investigate the nature, structure, and nomological network of left-wing authoritarianism (LWA), a construct famously known as “the Loch Ness Monster” of political psychology. We iteratively construct a measure and data-driven conceptualization of LWA across six samples (N = 7,258) and conduct quantitative tests of LWA’s relations with more than 60 authoritarianism-related variables. We find that LWA, right-wing authoritarianism, and social dominance orientation reflect a shared constellation of personality traits, cognitive features, beliefs, and motivational values that might be considered the “heart” of authoritarianism. Relative to right-wing authoritarians, left-wing authoritarians were lower in dogmatism and cognitive rigidity, higher in negative emotionality, and expressed stronger support for a political system with substantial centralized state control. Our results also indicate that LWA powerfully predicts behavioral aggression and is strongly correlated with participation in political violence. We conclude that a movement away from exclusively right-wing conceptualizations of authoritarianism may be required to illuminate authoritarianism’s central features, conceptual breadth, and psychological appeal.
Article
Our research examines the association between perceived physical vulnerability and prosocial behavior. Studies 1 to 4 establish a positive association between individuals’ vulnerability and their prosociality. To increase generality, these studies looked at different behaviors (volunteering vs. monetary donations), various physical harms (e.g., war vs. illness), and different samples (students vs. MTurk workers). Study 4 also provides initial evidence of a partial mediating effect of closeness on the observed association. In Study 5, perceived vulnerability is experimentally manipulated, demonstrating a causal link between vulnerability and willingness to donate. Study 6 further demonstrates that closeness partially mediates the association between vulnerability and donation, while ruling out an alternative explanation of the effect—such as that vulnerable people donate in expectation of future reciprocity. Together, our research demonstrates a consistent positive association between perceived physical vulnerability and prosociality. This effect appears small when considering daily threats and stronger when vulnerability becomes more salient.
Article
The results of prior research investigating whether the violence in violent video games leads to increased subsequent aggression are mixed. Some observers question whether the difficulty and/or the competitive aspects of these games are important, but overlooked, factors that also affect aggression. In the present study, participants (N = 408) played a violent or nonviolent video game that was either difficult or easy and in which they competed and won, competed and lost, or did not compete against another player. Results revealed that participants became more aggressive only after playing a competitive, as opposed to a noncompetitive, game. Level of violence, winning or losing, and game difficulty did not have any significant effect. These results support the assertion that competition in video games has an independent and significant effect on subsequent aggression beyond violent content and game difficulty.
Article
The effect of exposure to violent video game content on aggression is intensely debated. Meta-analyses have produced widely varying estimates as to the effect (or non-effect) of violent video games on subsequent aggressive thoughts and behaviors. Recent work suggests that interactivity and player skill may play key roles in moderating the effects of violent content in video games on aggression. This study investigates the effects of violence, interactivity, and player skill on mild aggressive behavior using a custom-developed first-person shooter game allowing for high levels of experimental control. We conduct effect and equivalence tests with effect size assumptions drawn from prominent meta-analyses in the video game violence literature, finding that aggressive behavior following violent video game play is statistically equivalent to that observed following non-violent game play. We also observe an interaction between violent game content, player skill, and interactivity. When player skill matched the interactivity of the game, violent content led to an increase in aggressive behavior, whereas when player skill did not match the interactivity of the game, violent content decreased aggressive behavior. This interaction is probed using a multiverse analysis incorporating both classical significance testing and Bayesian analyses.
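As a hedged illustration of the two one-sided tests (TOST) logic behind equivalence testing of this kind, the sketch below declares an effect statistically equivalent to zero when it is significantly above the lower equivalence bound and significantly below the upper bound. The effect estimate, standard error, degrees of freedom, and bound are hypothetical placeholders, not values from this study.

```python
from scipy import stats

def tost_equivalence(d_obs, se, df, bound, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of an effect to zero.

    d_obs : observed effect estimate (e.g., a standardized mean difference)
    se    : its standard error
    df    : degrees of freedom for the t reference distribution
    bound : smallest effect size of interest; equivalence bounds are +/- bound
    """
    t_lower = (d_obs + bound) / se        # test against H0: effect <= -bound
    t_upper = (d_obs - bound) / se        # test against H0: effect >= +bound
    p_lower = stats.t.sf(t_lower, df)     # upper-tail p for the lower-bound test
    p_upper = stats.t.cdf(t_upper, df)    # lower-tail p for the upper-bound test
    p_tost = max(p_lower, p_upper)        # overall TOST p-value
    return p_tost, p_tost < alpha

# Hypothetical numbers for illustration only
p, equivalent = tost_equivalence(d_obs=0.05, se=0.08, df=400, bound=0.20)
print(f"TOST p = {p:.3f}; statistically equivalent to zero: {equivalent}")
```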
Article
The current studies examined the relationship between the penchant to daydream about helping others and prosocial traits and behaviour. We reasoned that fantasising about prosocial acts should be positively associated with a more prosocial disposition and real behaviour. Across both studies, the findings suggest that people who exhibit prosocial characteristics (e.g., empathic concern, fantasy/fictional empathy, moral reasoning) are more likely to fantasise about prosocial behaviour, and these characteristics are reliably associated with increased helping behaviours. From Study 1, the correlational results showed that people higher in agreeableness exhibited a stronger tendency to engage in prosocial fantasising, and empathy, in part, mediated the relationship. The experimental results from Study 2 conceptually support those from Study 1; when prompted to fantasise about prosocial behaviour, those higher in agreeableness and openness to experience engaged in more helping behaviour, whereas in a control condition, no helping differences emerged. The finding that empathic concern was most consistently related to daydreaming is consistent with theory, in that people are more intrinsically motivated to promote others' welfare at a personal cost when they feel empathy. Engaging in prosocial fantasising may increase empathy, which in turn may enhance one's prosocial disposition and increase one's helping behaviour.
Article
Crowdsourcing platforms provide an affordable approach for recruiting large and diverse samples in a short time. Past research has shown that researchers can obtain reliable data from these sources, at least in domains of research that are not affectively involving. The goal of the present study was to test whether crowdsourcing platforms can also be used to conduct experiments that incorporate the induction of aversive affective states. First, a laboratory experiment with German university students was conducted in which a frustrating task induced anger and aggressive behavior. This experiment was then replicated online using five crowdsourcing samples. The results suggest that participants in the online samples reacted to the anger manipulation much as participants in the laboratory experiment did. However, effect sizes were smaller in crowdsourcing samples with non-German participants, whereas a crowdsourcing sample with exclusively German participants yielded virtually the same effect size as the laboratory experiment.
Article
Background: The prefrontal cortex is crucial for top-down regulation of aggression, but the neural underpinnings of aggression are still poorly understood. Past research showed that transcranial direct current stimulation (tDCS) over the ventrolateral prefrontal cortex (VLPFC) modulates aggression following exposure to risk factors for aggression (e.g., social exclusion, violent media). Although frustration is a key risk factor for aggression, no study to date has examined the modulatory role of tDCS on frustration-induced aggression. Objectives: By exploring VLPFC involvement in the frustration-aggression link, we tested the hypothesis that anodal tDCS over the right and left VLPFC modulates frustration-induced aggression. We also explored whether tDCS interacts with gender to influence frustration-induced aggression. Methods: 90 healthy participants (45 men) were randomly assigned to receive anodal or sham tDCS over the right or left VLPFC before being frustrated by an accomplice. To increase reliability, several tasks were used to measure aggression. Results: We found that anodal tDCS over the left VLPFC, compared to sham stimulation, increased aggression. Unexpectedly, no main effect was found following tDCS of the right VLPFC. However, we also found a significant interaction between gender and tDCS, showing that males were more aggressive than females following sham stimulation, but females became as aggressive as males following active tDCS. Conclusion: Overall, these results shed light on the neural basis of frustration-induced aggression (providing further evidence for the involvement of the VLPFC in modulating aggressive responses) and on gender differences in aggression. Future research should further investigate the role of stimulating the VLPFC in frustration-induced aggression.
Article
Laboratory measures play an important role in the study of aggression because they allow researchers to make causal inferences. However, these measures have also been criticized. In particular, the competitive reaction time task (CRTT) has been criticized for allowing aggression to be operationalized in multiple ways, leaving it susceptible to “p‐hacking.” This article describes the development of the CRTT and the ways in which its paradigm flexibility and analytic flexibility allows it to test a wide range of hypotheses and research questions. This flexibility gives the CRTT significant scientific utility, but as with any research paradigm, comes with the obligation that it has to be used with integrity. Although safeguards exist and there is little evidence of misuse, study preregistration can increase confidence in CRTT findings. The importance of findings such as those of Hyatt et al. (in press), which provide further evidence for the validity of the CRTT, are also noted.
Article
Competitive reaction time tasks (CRTTs) have been used widely in social science research, but recent criticism has been directed at the flexible quantification strategies used with this methodology. A recent review suggests that over 150 different quantification strategies have been used in this literature, and there is evidence to suggest that different operationalizations can affect the results and interpretations of experiments using CRTTs. In the current investigation, we reanalyze data from four existing samples from two different sites (total N = 600) to examine how the relations between a range of personality traits and aggression vary based on how aggression is operationalized. Our results suggest that there is a modest degree of heterogeneity in effect size and direction for these relations, and that effect size and direction were most consistent for traits more generally related to lab aggression (e.g., psychopathy, low Five‐Factor Model agreeableness). In addition, profile matching analyses suggest that different operationalizations yield empirical correlates that are quite similar to one another, even when quantifying absolute rather than relative similarity. These results were consistent across site, methodology, and type of sample, suggesting that these issues are likely generalizable across most labs using CRTTs. We conclude with suggestions for future directions, particularly emphasizing the need for adequately‐powered samples, and for researchers to preregister a plan for how CRTT data will be analyzed.
Article
Fear of retaliation plays an integral role in the evolution of provocation into conflict. While previous research has only studied fear of retaliation in the context of a laboratory manipulation or a justification for not detailing transgressions to others, the current research utilized experimental (Study 1) and correlational (Study 2) studies to test the role of dispositional fear of retaliation on aggression. In Study 1, participants were provoked by receiving either positive or negative feedback on an essay they wrote from an ostensible partner and were later given the opportunity to aggress against the partner. Results demonstrated that dispositional fear of retaliation moderated the relationship between provocation and aggression, such that participants who were provoked were more aggressive than non-provoked participants; however, this relationship was nullified for those high on fear of retaliation. In Study 2, participants completed measures to assess the following traits: fear of retaliation, aggression, behavioral inhibition, anxiety, and the Big Five. Results showed that fear of retaliation correlated with physical and verbal aggression, which remained significant while controlling for other trait variables – ruling out alternative hypotheses.
Article
We present a lab-field experiment designed to systematically assess the external validity of social preferences elicited in a variety of experimental games. We do this by comparing behavior in the different games with several behaviors elicited in the field and with self-reported behaviors exhibited in the past, using the same sample of participants. Our results show that the experimental social preference games do a poor job explaining both social behaviors in the field and social behaviors from the past. We also include a systematic review and meta-analysis of previous literature on the external validity of social preference games.
Article
We tested the prediction, derived from the hubris hypothesis, that bragging might serve as a verbal provocation and thus enhance aggression. Experiments 1 and 2 were vignette studies where participants could express hypothetical aggression; Experiment 3 was an actual decision task where participants could make aggressive and/or prosocial choices. Observers disliked an explicit braggart (who claimed to be “better than others”) or a competence braggart as compared with an implicit braggart (who claimed to be “good”) or a warmth braggart, respectively. Showing that explicit and competence bragging function as verbal provocations, observers responded more aggressively to the explicit and competence braggart than to the implicit and warmth braggart, respectively. They did so because they inferred that an explicit and a competence braggart viewed other people and them negatively, and therefore disliked the braggart. Rather than praising the self, braggarts are sometimes viewed as insulting others.
Article
Results from several studies show that early aggression predicts later aggression; however, few studies have examined the mediating mechanisms in these relations. The little research that has tested mediation found that aggressive motives and hostile attributions are important causal processes. This past work is limited by not measuring aggression multiple times throughout the study to test change in aggression over time and the variables that mediate such change. The current study had participants (N = 90) interact with a same-sex confederate on a modified version of the Tangram Task (our measure of aggressive behavior) for three trials. At each trial, participants completed a measure of aggressive motivations, assigned tangram puzzles for their partner to solve, were provoked (or not) by their ostensible partner, and then completed an assessment of aggressive attributions regarding their partner's behavior. Results showed that, for provoked participants, Time 1 aggressive attributions predicted Time 3 aggressive behavior through the following temporally ordered mediation pathway: Time 2 aggressive attributions, Time 2 aggressive behavior, and Time 3 aggressive motivations.
Article
This paper presents a selective review of decades of empirical research on behavioral games, with a particular focus on experimental games. We suggest that games effectively (but imperfectly) model many human social interactions, and we present important findings from six popular experimental games – Prisoner’s and Social Dilemmas, and the Trust, Ultimatum, Dictator, and Deception games – to discuss their theoretical and empirical implications as well as their various insights into human nature. We close by asking several fundamental questions about games and suggesting several directions and ideas for future research.
Article
The Tangram Help/Hurt Task is a laboratory-based measure designed to simultaneously assess helpful and hurtful behavior. Across five studies we provide evidence that further establishes the convergent and discriminant validity of the Tangram Help/Hurt Task. Cross-sectional and meta-analytic evidence finds consistently significant associations between helpful and hurtful scores on the Tangram Task and prosocial and aggressive personality traits. Experimental evidence reveals that situational primes known to induce aggressive and prosocial behavior significantly influence helpful and hurtful scores on the Tangram Help/Hurt Task. Additionally, motivation items in all studies indicate that tangram choices are indeed associated with intent to help and hurt. We discuss the advantages and limitations of the Tangram Help/Hurt Task relative to established measures of helpful and hurtful behavior.
Article
The construct validity of four laboratory paradigms used in studying aggression (the teacher/learner, essay evaluation, competitive reaction time game, and Bobo modeling paradigms) is examined. It is argued that the first three paradigms under-represent the construct of aggression because they deal only with situations of retaliation that have been sanctioned by a third-party legitimate authority (the experimenter) and because research participants are given no choice other than physical forms of harm-doing as a means of responding to attacks. Additionally, the teacher/learner and essay evaluation paradigms employ cover stories which make the research participants' intentions and motivations unclear or even counter to the proposed theory. The Bobo modeling paradigm may not examine aggression at all, but rather imitative "rough and tumble play" in which there is no intent to harm. It is proposed that the focus of research on aggression should be the intentions and motivations of the actor rather than simple attack-retaliation situations. Future research needs to examine the motivations of subjects in the traditional paradigms to determine if they are situations in which participants intend to cause harm. Additionally, in order to examine the full range of phenomena that aggression theorists wish to explain, a multimethod approach combining both laboratory and non-laboratory studies must be utilized.
Article
Two studies investigated retaliatory responses to actual honor threats among members of an honor culture (Turkey) and a dignity culture (northern United States). The honor threat in these studies was based on previous research showing that honesty is a key element of the conception of honor and that accusations of dishonesty are threatening to one's honor. In both studies, participants wrote an essay describing the role of honesty in their lives and received feedback on their essay accusing them of being dishonest (vs. neutral feedback). Turkish participants retaliated more strongly than did northern U.S. participants against the person who challenged their honesty, by assigning him/her more difficult tangrams rather than easy ones to solve (Study 1) and by choosing sensory tasks of a higher level of intensity to complete (Study 2). Study 2 added a relational honor condition, in which participants wrote about honesty in their parents' lives, and examined the role of individual differences in honor values in retaliation. Endorsement of honor values significantly predicted retaliation among Turkish participants in the relational honor attack condition, but not among northern U.S. participants.
Article
Dependent effect size estimates are a common problem in meta-analysis. Recently, a robust variance estimation method was introduced that can be used whenever effect sizes in a meta-analysis are not independent. This problem arises, for example, when effect sizes are nested or when multiple measures are collected on the same individuals. In this paper, we investigate the robustness of this method in small samples when the effect size of interest is the risk difference, log risk ratio, or log odds ratio. This simulation study examines the accuracy of 95% confidence intervals constructed using the robust variance estimator across a large variety of parameter values. We report results for estimation of both the mean effect (intercept) and a slope. The results indicate that the robust variance estimator performs well even when the number of studies is as small as 10, although coverage is generally less than nominal in the slope estimation case. Throughout, an example based on a meta-analysis of cognitive behavior therapy is used for motivation.
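For context, a 95% confidence interval of the kind whose coverage is evaluated in such simulations is typically formed from the robust standard error and a t critical value whose degrees of freedom depend on the number of studies. A schematic form, assuming m studies and p meta-regression coefficients (small-sample corrections adjust these degrees of freedom), is
\[
b_j \;\pm\; t_{0.975,\; m-p}\,\sqrt{V^{R}_{jj}},
\]
where b_j is the j-th estimated meta-regression coefficient and V^R_{jj} is the corresponding diagonal element of the robust variance estimate.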
Article
Three analyses of published research were undertaken to assess whether diverse laboratory response measures that are intended to measure aggression reflect a common underlying construct. It was found that (a) alternative measures of physical aggression directed by the same subjects against the same target tend to intercorrelate positively within studies, (b) across studies, the correlations between effect-size estimates of physical and written aggression emitted by the same subjects are positive, and (c) physical and written aggressive responses are similarly influenced by theoretically relevant antecedent factors (e.g., personal attack and frustration). The consistent overall pattern of results supports the notion that aggression, defined as intent to harm, is a viable construct that possesses some degree of generality.
Article
The authors explicate four kinds of validity (statistical conclusion validity, internal validity, construct validity, and external validity) and describe and critically examine several quasi-experimental designs from the perspective of these four kinds of validity, especially internal validity. They argue that the quality of causal inference depends on the structural attributes of a quasi-experimental design, the local particulars of each research project, and the quality of the substantive theory available to aid interpretation. Special emphasis is placed on quasi-experimental designs that allow multiple empirical probes of the causal hypothesis under scrutiny, on the assumption that this usually rules out more threats to internal validity.
Article
Conventional meta-analytic techniques rely on the assumption that effect size estimates from different studies are independent and have sampling distributions with known conditional variances. The independence assumption is violated when studies produce several estimates based on the same individuals or when there are clusters of studies that are not independent (such as those carried out by the same investigator or laboratory). This paper provides an estimator of the covariance matrix of meta-regression coefficients that is applicable when there are clusters of internally correlated estimates. It makes no assumptions about the specific form of the sampling distributions of the effect sizes, nor does it require knowledge of the covariance structure of the dependent estimates. Moreover, this paper demonstrates that the meta-regression coefficients are consistent and asymptotically normally distributed and that the robust variance estimator is valid even when the covariates are random. The theory is asymptotic in the number of studies, but simulations suggest that it may yield accurate results with as few as 20-40 studies.
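A schematic form of this kind of cluster-robust ("sandwich") variance estimator for weighted meta-regression, written in notation of our own rather than the paper's, is
\[
V^{R} \;=\; \Bigl(\sum_{j=1}^{m} X_j^{\top} W_j X_j\Bigr)^{-1}
\Bigl(\sum_{j=1}^{m} X_j^{\top} W_j\, e_j e_j^{\top}\, W_j X_j\Bigr)
\Bigl(\sum_{j=1}^{m} X_j^{\top} W_j X_j\Bigr)^{-1},
\]
where, for each of the m clusters (e.g., studies contributing multiple effect sizes), X_j is the covariate matrix, W_j the weight matrix, and e_j = y_j - X_j b the residual vector of that cluster's effect sizes; the middle term is built from observed residuals, which is why no assumptions about the true covariance structure of the dependent estimates are needed.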
Article
The possibility exists that during the past decade some important social psychological theory and research and some salient social forces may have interacted to produce a subtle shift in the Zeitgeist from studies concerned with more negative to studies of more prosocial or positive forms of social behavior. The history of the term “prosocial” is first explored. Some different forms of prosocial behavior are indicated, along with the different research strategies and operational definitions involved, and a working definition of prosocial behavior is suggested. Finally, each chapter is overviewed and some indications are made of various intersections of agreement as well as countervailing arguments.
Article
Aggressive behavior is a highly complex construct that is very challenging to measure. While advancements in the assessment of aggression have been made, some fundamental problems persist. First, the operational definition of aggressive behavior and its various subtypes are frequently misinterpreted and lack sufficient conceptual clarity. Second, due to these definitional problems, assessment instruments frequently correspond to different conceptualizations of aggression. In the present review, we attempt to resolve these limitations by proposing a new taxonomic system of aggressive acts that (a) corresponds to a hybrid definition of aggressive behavior, and (b) increases conceptual clarity between subtypes of aggressive behavior. It is argued that this classification system will permit greater precision in the assessment of aggression and lead to the improvement of theories, diagnostic systems, and clinical interventions.