Royzman and Baron (2002) demonstrated that people prefer indirect harm to direct harm: they judge actions that produce harm as a by-product to be more moral than actions that produce harm directly. In two preregistered studies, we successfully replicated Study 2 of Royzman and Baron (2002) with a Hong Kong student sample (N = 45) and an online American Mechanical Turk sample (N = 314). We found consistent evidential support for the preference for indirect harm phenomenon (d = 0.46 [0.26, 0.65] to 0.47 [0.18, 0.75]), weaker than effects reported in the original findings of the target article (d = 0.70, CI [0.40, 1.00]). We also successfully replicated findings regarding reasons underlying a preference for indirect harm (directness, intent, omission, probability of harm, and appearance of harm).
Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects.
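The TOST logic described in this abstract can be sketched in a few lines. The code below is an illustrative example, not code from the article: it assumes two independent groups, a SESOI expressed in raw score units, and uses a large-sample z approximation so that only the Python standard library is needed.

```python
# Minimal TOST equivalence sketch for two independent means (illustrative only;
# large-sample z approximation, standard library only).
import math
import random
from statistics import NormalDist

def tost_two_sample(x, y, low, high):
    """Two one-sided tests against the equivalence bounds [low, high].
    Equivalence is declared only if BOTH one-sided tests are significant,
    so the reported p is the larger of the two."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    diff = mx - my
    se = math.sqrt(vx / nx + vy / ny)
    z_low = (diff - low) / se    # H0: true difference <= low
    z_high = (diff - high) / se  # H0: true difference >= high
    p_low = 1 - NormalDist().cdf(z_low)
    p_high = NormalDist().cdf(z_high)
    return diff, max(p_low, p_high)

# Simulated data with a tiny true effect, well inside the equivalence bounds.
random.seed(1)
x = [random.gauss(0.00, 1.0) for _ in range(200)]
y = [random.gauss(0.05, 1.0) for _ in range(200)]
diff, p_eq = tost_two_sample(x, y, low=-0.5, high=0.5)
print(f"observed difference = {diff:.3f}, TOST p = {p_eq:.4f}")
```

A small equivalence p indicates the observed effect is surprisingly small if a true effect at least as large as the SESOI existed, which is exactly the inference the abstract describes.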
Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or “too good to be true”) that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or “too true to be bad.” As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type 1 error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature.
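The likelihood-ratio reasoning summarized above can be illustrated with a small calculation (the numbers are hypothetical, not taken from the article): given 80% power and a 5% alpha, how much more likely is "2 of 3 studies significant" under the alternative than under the null?

```python
# Likelihood-ratio sketch for "mixed results": probability of observing
# k significant results out of n studies under H1 (power) vs. under H0 (alpha).
from math import comb

def binom_pmf(k, n, p):
    # binomial probability of exactly k successes in n independent studies
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 3, 2          # a mixed set: 2 of 3 studies significant
power, alpha = 0.8, 0.05

p_h1 = binom_pmf(k, n, power)   # probability of this pattern under H1
p_h0 = binom_pmf(k, n, alpha)   # probability of this pattern under H0
lr = p_h1 / p_h0
print(f"P(k|H1)={p_h1:.3f}, P(k|H0)={p_h0:.5f}, likelihood ratio={lr:.1f}")
```

Even with one nonsignificant study, the mixed pattern is far more probable under the alternative than under the null, which is the article's point about mixed results being "too true to be bad."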
Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree that multiple scientific goals are essential, but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions involved in FER2016's modifications of our original research-economic model, and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., >80%) maximizes both research efficiency and confidence in the literature (research quality). Given that we are in agreement with FER2016 on all key open science points, we are eager to start seeing the accelerated rate of cumulative knowledge development of social psychological phenomena such a sufficiently transparent, powered, and falsifiable approach will generate.
In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.
Mechanical Turk (MTurk), an online labor market created by Amazon, has recently become popular among social scientists as a source of survey and experimental data. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. We discuss the characteristics of MTurk as a participant pool for psychology and other social sciences, highlighting the traits of the MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.
Political conservatives and liberals were interviewed about 3 kinds of sexual acts: homosexual sex, unusual forms of masturbation, and consensual incest between an adult brother and sister. Conservatives were more likely to moralize and to condemn these acts, but the differences were concentrated in the homosexual scenarios and were minimal in the incest scenarios. Content analyses reveal that liberals had a narrow moral domain, largely limited to the “ethics of autonomy” (Shweder, Much, Mahapatra, & Park, 1997) while conservatives had a broader and more multifaceted moral domain. Regression analyses show that, for both groups, moral judgments were best predicted by affective reactions, and were not predicted by perceptions of harmfulness. Suggestions for calming the culture wars over homosexuality are discussed.
We presented subjects with pairs of hypothetical scenarios. The action in each scenario harmed some people in order to aid others. In one member of the pair, the harm was a direct result of the action. In the other member, it was an indirect byproduct. Subjects preferred the indirect harm to the direct harm. This result could not be fully explained in terms of differences in judgments about which option was more active, more intentional, more likely to cause harm, or more subject to the disapproval of others. Taken together, these findings provide evidence for a new bias in judgment, a tendency to favor indirectly harmful options over directly harmful alternatives, irrespective of the associated outcomes, intentions, or self-presentational concerns. We speculate that this bias could originate from the use of a typical but somewhat unreliable property of harmful acts, their directness, as a cue to moral evaluation. We discuss the implications of the bias for a range of social issues, including the distinction between passive and active euthanasia, legal deterrence, and the rhetoric of affirmative action.
We outline the need to, and provide a guide on how to, conduct a meta-analysis on one's own studies within a manuscript. Although conducting a “mini meta” within one's manuscript has been argued for in the past, this practice is still relatively rare and adoption is slow. We believe two deterrents are responsible. First, researchers may not think that it is legitimate to do a meta-analysis on a small number of studies. Second, researchers may think a meta-analysis is too complicated to do without expert knowledge or guidance. We dispel these two misconceptions by (1) offering arguments on why researchers should be encouraged to do mini metas, (2) citing previous articles that have conducted such analyses to good effect, and (3) providing a user-friendly guide on calculating some meta-analytic procedures that are appropriate when there are only a few studies. We provide formulas for calculating effect sizes and converting effect sizes from one metric to another (e.g., from Cohen's d to r), as well as annotated Excel spreadsheets and a step-by-step guide on how to conduct a simple meta-analysis. A series of related studies can be strengthened and better understood if accompanied by a mini meta-analysis.
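As a rough illustration of the kind of calculation such a guide covers, the sketch below pools Cohen's d across a few hypothetical studies with standard inverse-variance weights and converts the pooled d to r. The numbers are invented and the formulas are the textbook ones, not necessarily those in the article's spreadsheets.

```python
# Fixed-effect mini meta-analysis of Cohen's d (hypothetical studies),
# using standard inverse-variance weighting; standard library only.
import math

# (d, n1, n2) for each hypothetical study
studies = [(0.45, 40, 40), (0.30, 60, 60), (0.55, 25, 25)]

weights, weighted_ds = [], []
for d, n1, n2 in studies:
    # approximate sampling variance of d for two independent groups
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    w = 1 / var_d
    weights.append(w)
    weighted_ds.append(w * d)

d_meta = sum(weighted_ds) / sum(weights)   # pooled effect size
se_meta = math.sqrt(1 / sum(weights))      # standard error of the pooled d
ci = (d_meta - 1.96 * se_meta, d_meta + 1.96 * se_meta)

# converting the pooled d to r (equal-group-size approximation)
r_meta = d_meta / math.sqrt(d_meta**2 + 4)
print(f"pooled d = {d_meta:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], r = {r_meta:.3f}")
```

The pooled estimate lands between the individual study estimates, weighted toward the larger (more precise) studies, which is the basic logic a mini meta-analysis adds to a multi-study paper.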
To what extent do moral judgments depend on conscious reasoning from explicitly understood principles? We address this question by investigating one particular moral principle, the principle of the double effect. Using web-based technology, we collected a large data set on individuals' responses to a series of moral dilemmas, asking when harm to innocent others is permissible. Each moral dilemma presented a choice between action and inaction, both resulting in lives saved and lives lost. Results showed that: (1) patterns of moral judgments were consistent with the principle of double effect and showed little variation across differences in gender, age, educational level, ethnicity, religion or national affiliation (within the limited range of our sample population) and (2) a majority of subjects failed to provide justifications that could account for their judgments. These results indicate that the principle of the double effect may be operative in our moral judgments but not open to conscious introspection. We discuss these results in light of current psychological theories of moral cognition, emphasizing the need to consider the unconscious appraisal system that mentally represents the causal and intentional properties of human action.
Subjects read scenarios concerning pairs of options. One option was an omission, the other, a commission. Intentions, motives, and consequences were held constant. Subjects either judged the morality of actors by their choices or rated the goodness of decision options. Subjects often rated harmful omissions as less immoral, or less bad as decisions, than harmful commissions. Such ratings were associated with judgments that omissions do not cause outcomes. The effect of commission is not simply an exaggerated response to commissions: a reverse effect for good outcomes was not found, and a few subjects were even willing to accept greater harm in order to avoid action. The “omission bias” revealed in these experiments can be described as an overgeneralization of a useful heuristic to cases in which it is not justified. Additional experiments indicated that subjects' judgments about the immorality of omissions and commissions are dependent on several factors that ordinarily distinguish omissions and commissions: physical movement in commissions, the presence of salient alternative causes in omissions, and the fact that the consequences of omissions would occur if the actor were absent or ignorant of the effects of not acting.
When powerful people cause harm, they often do so indirectly through other people. Are harmful actions carried out through others evaluated less negatively than harmful actions carried out directly? Four experiments examine the moral psychology of indirect agency. Experiments 1A, 1B, and 1C reveal effects of indirect agency under conditions favoring intuitive judgment, but not reflective judgment, using a joint/separate evaluation paradigm. Experiment 2A demonstrates that effects of indirect agency cannot be fully explained by perceived lack of foreknowledge or control on the part of the primary agent. Experiment 2B indicates that reflective moral judgment is sensitive to indirect agency, but only to the extent that indirectness signals reduced foreknowledge and/or control. Experiment 3 indicates that effects of indirect agency result from a failure to automatically consider the potentially dubious motives of agents who cause harm indirectly. Experiment 4 demonstrates an effect of indirect agency on purchase intentions.
Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) Harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. This finding has significance for methods and theories of moral psychology: The moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.
In what setting was the original study conducted?
Online, on the World Wide Web.
7. What country/region was the original study conducted in?
In the United States of America.
How do I convert a t-statistic (and odds ratio) into an effect size?
Watson, P. (2017). How do I convert a t-statistic (and odds ratio) into an effect size? Retrieved 20 February 2018, from http://imaging.mrc-cbu.cam.ac.uk/statswiki/FAQ/td
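The standard textbook conversions are easy to compute directly. These are common formulas (independent-samples t to d, t to point-biserial r, and log-odds-ratio to d via the Chinn, 2000 approximation), not code taken from the cited FAQ page.

```python
# Common effect-size conversions; standard library only.
import math

def t_to_d(t, n1, n2):
    # independent-samples t-statistic to Cohen's d
    return t * math.sqrt(1 / n1 + 1 / n2)

def t_to_r(t, df):
    # t-statistic to point-biserial correlation r
    return t / math.sqrt(t**2 + df)

def or_to_d(odds_ratio):
    # log-odds-ratio to Cohen's d (logistic approximation)
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

d = t_to_d(2.5, 50, 50)    # t = 2.5 with two groups of 50
r = t_to_r(2.5, 98)        # same t with df = 98
d_or = or_to_d(2.0)        # an odds ratio of 2
print(f"d = {d:.3f}, r = {r:.3f}, d(from OR) = {d_or:.3f}")
```

For example, t = 2.5 with two groups of 50 corresponds to d = 0.5, a medium-sized effect.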
LeBel, E. P., Vanpaemel, W., Cheung, I., & Campbell, L. (2019). A Brief Guide to Evaluate Replications. Meta-Psychology, 541, 1-17. https://doi.org/10.31219/osf.io/paxyn
It is important to replicate this effect because:
The target Royzman and Baron (2002) paper has been cited 149 times, and some of the papers citing it are themselves influential, such as Cushman, Young, and Hauser (2006), which was published in Psychological Science and has been cited 907 times. The Royzman and Baron (2002) paper can therefore be regarded as influential, and its effect is worth confirming. Also, because the paper was published in 2002, the social environment has not changed much since then. The target effect is highly likely to be cited in future publications.
1. Intention, appearance, and/or probability account for the morality judgement (instead of, or in addition to, indirectness)