Preprint (PDF available)

Action-Inaction Asymmetries in Emotions and Counterfactual Thoughts: Meta-Analysis of the Action Effect [Registered Report Stage 1]


Abstract

The action effect refers to the phenomenon in which people experience, associate, or attribute stronger emotions to action compared to inaction. In this registered report, we conducted a meta-analysis of the action-effect literature (k = [enter number of studies by Stage 2], N = [enter no. of participants by Stage 2], 1982-2021). We found support/no support/mixed support for the action effect in [positive emotions, g = X.XX, 95% CI [X.XX, X.XX]], support/no support/mixed support for the action effect in [negative emotions, g = X.XX, 95% CI [X.XX, X.XX]], and support/no support/mixed support for the action effect in [counterfactual thought, g = X.XX, 95% CI [X.XX, X.XX]]. Study heterogeneity was [low / low to medium / medium / medium to high / high], Q(XX) = XXX.XX, p = .XXX / < .001, I² = XX.XX%. [Summarize results of publication bias tests; to be completed by Stage 2]. The action effect was stronger [list of conditions in which the effects were stronger, if any; to be entered by Stage 2]. We pre-registered our meta-analysis, with the search protocol, datasets, code, and supplementary materials made available on the OSF: https://osf.io/acm24/
... We believe that more replications with extensions are needed to better understand the robustness of the findings in this literature and to examine new directions, together with meta-analyses of the action-inaction literature (e.g., action effect: Yeung & Feldman, 2022; omission bias: Yeung et al., 2022), to examine possible moderating factors such as temporal distance, scenarios versus experience, between-subject versus within-subject designs, and the meanings of action versus inaction employed. More comprehensive, systematic aggregation of findings and insights is needed, along with identification of boundary conditions. ...
Preprint
Full-text available
The temporal pattern of regret is the phenomenon that people perceive or experience stronger regret over action compared to inaction in the short term, yet stronger regret over inaction compared to action in the long term. Following mixed and null findings in the literature, we conducted a replication and extension of Studies 1, 3, 4, and 5 of the classic Gilovich and Medvec (1994) article, which first demonstrated this phenomenon, in a single combined data collection with randomized display order using an online sample of Americans on MTurk (N = 988). We found support for the original findings using different designs in Studies 1, 3, and 4, yet with weaker effects. We failed to find support for such a pattern in Study 5. We discuss possible interpretations of these differences: a change in the meaning of action and inaction, or a difference between hypothetical and real-life personal experiences. Extending the replications, we found support for stronger responsibility for action compared to inaction in both the short term and the long term. We conclude that there is overall support for the effects, though follow-up work is necessary to resolve the inconsistencies in findings. Pre-registration, materials, data, and code were made available on: https://osf.io/7m3q2/
Article
Full-text available
Omission bias is people's tendency to evaluate harm done through omission as less morally wrong and less blameworthy than harm done through commission. However, findings are inconsistent. We conducted a pre-registered meta-analysis with 21 samples (13 articles, 49 effects) on omission-commission asymmetries in judgments and decisions. We found an overall effect of g = 0.45, 95% CI [0.14, 0.77], with stronger effects for morality and blame than for decisions. Publication bias tests produced mixed results with some indication of publication bias, though effects persisted even after most publication-bias adjustments. The small sample of studies limited our ability to draw definite conclusions regarding moderators, with inconclusive findings when applying different models. After compensating for low power, we found indications of moderation by role responsibility, perspective (self versus other), outcome type, and study design. We hope this meta-analysis will inspire research on this phenomenon and applications to real life, especially given the raging pandemic. Materials, data, and code are available on https://osf.io/9fcqm/
Article
Full-text available
The exceptionality effect is the phenomenon that people associate stronger negative affect with a negative outcome when it is a result of an exception (abnormal behavior) compared to when it is a result of routine (normal behavior). In this pre-registered meta-analysis, we examined the exceptionality effect in 48 studies (N = 4212). An analysis of 35 experimental studies (n = 3332) showed a medium to strong effect (g = 0.60, 95% confidence interval (CI) [0.41, 0.79]) for past behavior across several measures (regret/affect: g = 0.66, counterfactual thought: g = 0.40, self-blame: g = 0.44, victim compensation: g = 0.44, offender punishment: g = 0.50). An analysis of 13 one-sample studies presenting a comparison of exceptional and routine behaviors simultaneously (n = 1217) revealed a very strong exceptionality effect (converted g = 2.07, CI [1.59, 2.43]). We tested several theoretical moderators: norm strength, event controllability, outcome rarity, action versus inaction, and status quo. We found that the exceptionality effect was stronger when the routine was aligned with the status quo option and when it involved action rather than inaction. All materials are available on: https://osf.io/542c7/
Article
Full-text available
Research on action and inaction in judgment and decision making now spans over 35 years, with ever-growing interest. Accumulating evidence suggests that action and inaction are perceived and evaluated differently, affecting a wide array of psychological factors from emotions to morality. These asymmetries have been shown to have real impact on choice behavior in both personal and interpersonal contexts, with implications for individuals and society. We review impactful action-inaction related phenomena, with a summary and comparison of key findings and insights, reinterpreting these effects and mapping links between effects using norm theory's (Kahneman & Miller, 1986) concept of normality. Together, these aim to contribute towards an integrated understanding of the human psyche regarding action and inaction.
Article
Full-text available
Ongoing technological developments have made it easier than ever before for scientists to share their data, materials, and analysis code. Sharing data and analysis code makes it easier for other researchers to reuse or check published research. However, these benefits will emerge only if researchers can reproduce the analyses reported in published articles and if data are annotated well enough so that it is clear what all variable and value labels mean. Because most researchers are not trained in computational reproducibility, it is important to evaluate current practices to identify those that can be improved. We examined data and code sharing for Registered Reports published in the psychological literature from 2014 to 2018 and attempted to independently computationally reproduce the main results in each article. Of the 62 articles that met our inclusion criteria, 41 had data available, and 37 had analysis scripts available. Both data and code for 36 of the articles were shared. We could run the scripts for 31 analyses, and we reproduced the main results for 21 articles. Although the percentage of articles for which both data and code were shared (36 out of 62, or 58%) and the percentage of articles for which main results could be computationally reproduced (21 out of 36, or 58%) were relatively high compared with the percentages found in other studies, there is clear room for improvement. We provide practical recommendations based on our observations and cite examples of good research practices in the studies whose main results we reproduced.
Article
Full-text available
Conventional meta-analytic procedures assume that effect sizes are independent. When effect sizes are not independent, conclusions based on these conventional procedures can be misleading or even wrong. Traditional approaches, such as averaging the effect sizes and selecting one effect size per study, are usually used to avoid the dependence of the effect sizes. These ad-hoc approaches, however, may lead to missed opportunities to utilize all available data to address the relevant research questions. Both multivariate meta-analysis and three-level meta-analysis have been proposed to handle non-independent effect sizes. This paper gives a brief introduction to these new techniques for applied researchers. The first objective is to highlight the benefits of using these methods to address non-independent effect sizes. The second objective is to illustrate how to apply these techniques with real data in R and Mplus. Researchers may modify the sample R and Mplus code to fit their data.
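As a concrete illustration of the three-level approach described above, the following sketch fits a three-level random-effects model in R with the metafor package. The data frame name and column names (yi, vi, study_id, effect_id) are placeholders for illustration, not code from the article.

library(metafor)

# Three-level model: effect sizes nested within studies, assuming `dat` has
# columns yi (effect size), vi (sampling variance), study_id, and effect_id.
res <- rma.mv(yi, vi,
              random = ~ 1 | study_id/effect_id,
              data = dat,
              method = "REML")
summary(res)

res$sigma2  # variance components: between-study and within-study heterogeneity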
Article
Full-text available
Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
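In that spirit, a sensitivity analysis might report several bias-correction methods side by side. The sketch below shows one possible workflow in R with metafor (trim-and-fill, an Egger-type regression test, and a step-function selection model); the data frame `dat` with columns yi and vi is a placeholder, and this particular set of methods is an illustration rather than a prescribed protocol.

library(metafor)

# Naive random-effects estimate on independent effect sizes.
res <- rma(yi, vi, data = dat, method = "REML")

trimfill(res)                      # trim-and-fill adjusted estimate
regtest(res)                       # Egger-type regression test for funnel asymmetry
selmodel(res, type = "stepfun",    # three-parameter selection model
         steps = c(0.025))         # single cutpoint at one-sided p = .025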
Article
Full-text available
Data documentation in psychology lags behind not only many other disciplines, but also basic standards of usefulness. Psychological scientists often prefer to invest the time and effort that would be necessary to document existing data well in other duties, such as writing and collecting more data. Codebooks therefore tend to be unstandardized and stored in proprietary formats, and they are rarely properly indexed in search engines. This means that rich data sets are sometimes used only once—by their creators—and left to disappear into oblivion. Even if they can find an existing data set, researchers are unlikely to publish analyses based on it if they cannot be confident that they understand it well enough. My codebook package makes it easier to generate rich metadata in human- and machine-readable codebooks. It uses metadata from existing sources and automates some tedious tasks, such as documenting psychological scales and reliabilities, summarizing descriptive statistics, and identifying patterns of missingness. The codebook R package and Web app make it possible to generate a rich codebook in a few minutes and just three clicks. Over time, its use could lead to psychological data becoming findable, accessible, interoperable, and reusable, thereby reducing research waste and benefiting both its users and the scientific community as a whole.
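For orientation, a minimal use of the codebook package might look like the sketch below, assuming the package's documented R Markdown workflow; the data file name is a placeholder.

library(codebook)

new_codebook_rmd()   # interactively creates a template .Rmd for the codebook

# Inside that .Rmd, after importing the data (placeholder file name):
codebook_data <- rio::import("my_study_data.sav")
codebook(codebook_data)   # renders metadata, scale reliabilities, descriptives, and missingness patterns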
Preprint
This Systematic Review Registration Form is intended as a general-purpose registration form. The form is designed to be applicable to reviews across disciplines (i.e., psychology, economics, law, physics, or any other field) and across review types (i.e., scoping review, review of qualitative studies, meta-analysis, or any other type of review). That means that the reviewed records may include research reports as well as archive documents, case law, books, poems, etc. This form, therefore, is a fall-back for more specialized forms and can be used if no specialized form or registration platform is available.
Article
Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
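To make the RVE and MLMA variants concrete, the sketch below shows one way such an Egger-type test can be set up in R with metafor, assuming a hypothetical data frame of dependent effects nested within studies; it illustrates the general approach rather than reproducing the authors' simulation code.

library(metafor)

# Assumes `dat` has columns yi, vi, study_id, effect_id (placeholder names).
dat$sei <- sqrt(dat$vi)                 # standard error as the small-study predictor

fit <- rma.mv(yi, vi, mods = ~ sei,     # Egger-type moderator
              random = ~ 1 | study_id/effect_id,
              data = dat)

robust(fit, cluster = dat$study_id)     # cluster-robust (RVE) test of the sei coefficient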
Article
Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and as more likely being affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display to critically examine and to present study-level power in the context of meta-analysis.
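The tailored R function mentioned above is, to our understanding, implemented in the metaviz package; the sketch below assumes that package and uses small placeholder vectors of effect sizes and standard errors.

library(metaviz)

es <- c(0.31, 0.18, 0.52, 0.09, 0.44)   # placeholder study effect sizes
se <- c(0.12, 0.20, 0.09, 0.25, 0.15)   # placeholder standard errors

viz_sunset(data.frame(es, se))   # funnel plot with color-coded power regions and a second power axis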