Preprint (PDF available)

Abstract

Kruger, Wirtz, Van Boven, and Altermatt (2004) described the effort heuristic as the tendency to evaluate the quality and the monetary value of an object more highly when its production is perceived as having involved more effort. We attempted two preregistered replications (total N = 1405; U.S. American participants from MTurk and Prolific) of their Experiments 1 and 2. Our first replication, using an MTurk sample, found support for the original findings of Experiment 2 but failed to find support for those of Experiment 1. Our second, revised attempt at Experiment 1, run on Prolific, yielded more nuanced, mixed findings: support for an effort heuristic effect on liking/quality, but no support for an effect on monetary value.
Preprint
Full-text available
Eight studies (total N = 1962) show that consumers believe that cultural products that took more effort to produce will make them feel emotionally worse off after consumption. The effect is especially strong when consumers consider the author's mood at the time the work was composed, and it counteracts the effort heuristic (Kruger, Wirtz, Van Boven, & Altermatt, 2004; the belief that higher effort implies higher quality) by affecting product choice, quality inferences, and willingness to buy. Theoretical implications regarding the connections between effort, mood, and quality, and regarding mood transmission, are discussed. Practical implications for the positioning and advertising of cultural products are presented.
Article
Full-text available
Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one factor that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult, and observing effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy, and effort is therefore uncorrelated with the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.
Article
Full-text available
Inaction inertia is the phenomenon that forgoing an initial attractive opportunity decreases the likelihood of taking a subsequent, less attractive opportunity, even when the subsequent opportunity still offers positive value. We conducted three preregistered replications of the four scenarios from Tykocinski, Pittman, and Tuttle's (1995) Experiments 1 and 2 in four samples (N = 1555). We found consistent findings across samples, with the inaction inertia effect dependent on the scenario used. Support was strongest for the car scenario (d = -0.57 to -0.68) and the ski scenario (d = -0.18 to -0.67), with mixed findings for the fitness scenario (large-small: d = -0.62; control contrasts: opposite to predictions) and weak to no effects for the flyer scenario (d = -0.14 to 0.02). We conclude that context is important in studying inaction inertia, recommend the car and ski scenarios for follow-up research on inaction inertia, and discuss implications for future research.
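The d values above are standardized mean differences (Cohen's d). As a quick reference, here is a minimal Python sketch of that metric; the data are illustrative placeholders, not values from the replication materials:

```python
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference between two independent groups (Cohen's d),
    using the pooled standard deviation."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Illustrative ratings only; d is negative when the first group's mean is lower,
# matching the sign convention of the effect sizes reported above.
print(cohens_d([3.1, 2.8, 3.4, 2.9], [4.0, 3.8, 4.2, 3.9]))
```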
Article
Full-text available
We measure how accurately replication of experimental results can be predicted by black-box statistical models. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train predictive models and study which variables drive predictable replication. The models predict binary replication with a cross-validated accuracy rate of 70% (AUC of 0.77) and estimates of relative effect sizes with a Spearman ρ of 0.38. The accuracy level is similar to market-aggregated beliefs of peer scientists [1, 2]. The predictive power is validated in a preregistered out-of-sample test of the outcome of [3], where 71% (AUC of 0.73) of replications are predicted correctly and effect size correlations amount to ρ = 0.25. Basic features such as the sample and effect sizes in original papers, and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools for producing cheap, prognostic replicability metrics. These models could be useful in institutionalizing the evaluation of new findings and in guiding resources to the direct replications that are likely to be most informative.
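To make the kind of pipeline described above more concrete, here is a rough Python sketch of cross-validated replication prediction. The random-forest classifier, the feature set, and the simulated data are assumptions chosen for illustration; they are not the authors' actual models, features, or data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_studies = 200

# Hypothetical study-level features: original sample size, original effect size,
# and whether the reported effect is a two-variable interaction (0/1).
X = np.column_stack([
    rng.integers(20, 400, size=n_studies),        # original sample size
    rng.uniform(0.05, 0.9, size=n_studies),       # original effect size
    rng.integers(0, 2, size=n_studies),           # interaction-effect indicator
])
replicated = rng.integers(0, 2, size=n_studies)      # binary replication outcome (toy labels)
relative_es = rng.uniform(0.0, 1.2, size=n_studies)  # relative effect size in the replication

# Cross-validated probability that each study replicates.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
pred_prob = cross_val_predict(clf, X, replicated, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(replicated, pred_prob), 2))

# Rank correlation between model output and the replication effect sizes.
rho, _ = spearmanr(pred_prob, relative_es)
print("Spearman rho:", round(rho, 2))
```

With these random toy labels the AUC hovers around 0.5; the point is only to show how a cross-validated accuracy metric and a Spearman ρ for effect sizes fit together.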
Article
Full-text available
The importance of replication is becoming increasingly appreciated; however, considerably less consensus exists about how to evaluate the design and results of replications. We make concrete recommendations on how to evaluate replications with more nuance than is typical in the current literature. We highlight six study characteristics that are crucial for evaluating replications: replication method similarity, replication differences, investigator independence, method/data transparency, analytic result reproducibility, and auxiliary hypotheses' plausibility evidence. We also recommend a more nuanced approach to statistically interpreting replication results at the individual-study and meta-analytic levels, and propose clearer language for communicating replication results.
Article
Full-text available
Consumers typically infer greater quantity from larger numbers. For instance, a 500-gram box of chocolates appears heavier than a .5-kilogram box. By expressing quantities in alternative units or attribute dimensions, one can represent an otherwise identical quantity in a more versus less discretized manner (e.g., a box containing 25 chocolates vs. a box weighing 500 grams). Seven experimental studies show that a difference between more discretized quantities (e.g., 25 vs. 50 chocolates) appears larger than a difference between less discretized quantities (e.g., 500 grams vs. 1,000 grams), above and beyond effects of number magnitude. More discretized quantity expressions enhance consumers' ability to discriminate between choice options and can also nudge consumers toward more favorable choices. Because more discretized quantities are more evaluable, expressing a quantity as a collection of elements particularly helps individuals who are less numerically proficient. By identifying discretization as a novel antecedent of evaluability and by distinguishing two conceptualizations of numerosity (symbolic and perceptual numerosity), this article draws important connections between the numerical cognition literature and General Evaluability Theory.
Article
People often make judgments about their own and others' valuations and preferences. Across 12 studies (N = 17,594), we find a robust bias in these judgments such that people overestimate the valuations and preferences of others. This overestimation arises because, when making predictions about others, people rely on their intuitive core representation of the experience (e.g., is the experience generally positive?) in lieu of a more complex representation that might also include countervailing aspects (e.g., is any of the experience negative?). We first demonstrate that the overestimation bias is pervasive for a wide range of positive (Studies 1-5) and negative experiences (Study 6). Furthermore, the bias is not merely an artifact of how preferences are measured (Study 7). Consistent with judgments based on core representations, the bias significantly reduces when the core representation is uniformly positive (Studies 8A-8B). Such judgments lead to a paradox in how people see others trade off between valuation and utility (Studies 9A-9B). Specifically, relative to themselves, people believe that an identically paying other will get more enjoyment from the same experience, but paradoxically, that an identically enjoying other will pay more for the same experience. Finally, consistent with a core representation explanation, explicitly prompting people to consider the entire distribution of others' preferences significantly reduced or eliminated the bias (Study 10). These findings suggest that social judgments of others' preferences are not only largely biased, but they also ignore how others make trade-offs between evaluative metrics.
Article
Over the past four decades, psychometric meta-analysis (PMA) has emerged as a key way that psychological disciplines build cumulative scientific knowledge. Despite the importance and popularity of PMA, software implementing the method has tended to be closed-source, inflexible, limited in terms of the psychometric corrections available, cumbersome to use for complex analyses, and/or costly. To overcome these limitations, we created the psychmeta R package: a free, open-source, comprehensive program for PMA.
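For readers unfamiliar with what such software automates, here is a minimal hand-rolled Python sketch of two standard psychometric-meta-analytic steps (the classical correction for attenuation and a bare-bones, sample-size-weighted summary). It is not the psychmeta API, and the input values are made up:

```python
import numpy as np

# Hypothetical per-study inputs: observed correlation, sample size,
# and reliabilities of the predictor (rxx) and criterion (ryy).
r_obs = np.array([0.22, 0.31, 0.18, 0.27])
n     = np.array([120, 85, 200, 150])
rxx   = np.array([0.80, 0.75, 0.85, 0.78])
ryy   = np.array([0.70, 0.72, 0.68, 0.74])

# Classical correction for attenuation: divide each observed correlation
# by the square root of the product of the two reliabilities.
r_corrected = r_obs / np.sqrt(rxx * ryy)

# Bare-bones summary: sample-size-weighted mean of the observed correlations.
r_bar = np.sum(n * r_obs) / np.sum(n)

# Observed variance across studies vs. the variance expected from sampling error alone.
var_obs = np.sum(n * (r_obs - r_bar) ** 2) / np.sum(n)
var_err = (1 - r_bar ** 2) ** 2 / (n.mean() - 1)
var_residual = max(var_obs - var_err, 0.0)

print("corrected correlations:", np.round(r_corrected, 3))
print("weighted mean r:", round(r_bar, 3))
print("residual variance after sampling error:", round(var_residual, 4))
```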
Preprint
Zwaan, Etz, Lucas, and Donnellan (2017) address commonly voiced concerns about replication studies and conclude that no obstacles exist to making replication a routine aspect of psychological science. We extend Zwaan et al.'s discussion by making concrete recommendations on how to (1) evaluate the epistemological soundness of replication studies (transparency, methodological similarity to the original study, and auxiliary hypotheses' plausibility evidence) and (2) statistically interpret replication evidence using nuanced and clear language.
Article
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort; sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models but also provide clues about how to promote sustained mental effort over time.