Taisuke Imai’s research while affiliated with Japan Economic Research Institute and other places


Publications (28)


Pre-Registration and Pre-Analysis Plans in Experimental Economics
  • Preprint

April 2025 · 31 Reads

Taisuke Imai · Severine Toussaert · Aurelien Baillon · [...] · Marie Claire Villeval
The open science movement has gained significant momentum over the past decade, with pre-registration and the use of pre-analysis plans being central to ongoing debates. Combining observational evidence on trends in adoption with survey data from 519 researchers, this study examines the adoption of pre-registration (potentially but not necessarily including pre-analysis plans) in experimental economics. Pooling statistics on papers published in 19 leading journals between 2017 and 2023, we observe that the number of papers containing a pre-registration grew from seven per year to 190 per year. Our findings indicate that pre-registration has now become mainstream in experimental economics, with two-thirds of respondents expressing favorable views and 86% having pre-registered at least one study. However, opinions are divided on the scope and comprehensiveness of pre-registration, highlighting the need for clearer guidelines. Researchers assign a credibility premium to pre-registered tests, although the exact channels remain to be understood. Our results suggest growing support for open science practices among experimental economists, with demand for professional associations to guide researchers and reviewers on best practices for pre-registration and other open science initiatives.





Decision market prices for the 41 included studies
Plotted are the decision market prices for the 41 MTurk social science experiments published in PNAS between 2015 and 2018. The small grey dots indicate the market prices after each market transaction; the larger dots indicate the final market price. The studies are ordered based on the final decision market prices, which can be interpreted as the market’s probability forecast of successful replication. The 12 studies with the highest decision market prices and the 12 studies with the lowest decision market prices were selected for replication; in addition, 2 of the remaining 17 studies were selected for replication at random to ensure that the decision market is incentive compatible. The replication outcomes for the statistical significance indicator are also illustrated for the 26 replicated studies. The point-biserial correlation between the decision market prices and the replication outcomes in primary hypothesis 1 is r = 0.505 (95% CI (0.146, 0.712), t(24) = 2.867, P = 0.008; n = 26, two-sided test).
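
The point-biserial correlation reported above is simply a Pearson correlation between a continuous variable (the final market price) and a binary one (the replication outcome). A minimal sketch of that computation; the prices and outcomes below are hypothetical, not the study's data:

```python
# Hypothetical final market prices and binary replication outcomes; the paper's
# own data (41 markets, 26 replications) are not reproduced here.
import numpy as np
from scipy.stats import pointbiserialr

final_prices = np.array([0.85, 0.72, 0.64, 0.41, 0.33, 0.21])  # market forecasts
replicated = np.array([1, 1, 1, 0, 1, 0])  # 1 = significant effect, original direction

r, p = pointbiserialr(replicated, final_prices)
print(f"point-biserial r = {r:.3f}, two-sided p = {p:.3f}")
```
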
Replication results
Plotted are the point estimates and the 95% CIs (standardized to Cohen’s d units) of the 26 replications (dR) and original studies. Studies within each of the three panels (top-12, random, bottom-12) are sorted based on the decision market prices as in Fig. 1. There is a statistically significant effect (P < 0.05) in the same direction as the original study for 14 out of 26 replications (53.8%; 95% CI (33.4%, 73.4%)). For the 12 studies with the highest decision market prices, there is a statistically significant effect (P < 0.05) in the same direction as the original study for 10 out of 12 replications (83.3%; 95% CI (51.6%, 97.9%)). For the 12 studies with the lowest decision market prices, there is a statistically significant effect (P < 0.05) in the same direction as the original study for 4 out of 12 replications (33.3%; 95% CI (9.9%, 65.1%)). Our secondary hypothesis test provides suggestive evidence that the difference in replication rates between the top-12 and the bottom-12 group is different from zero (Fisher’s exact test; χ²(1) = 6.171, P = 0.036; n = 24, two-sided test). The error bars denote the 95% CIs of the original and the replication effect size estimates. The numbers of observations used to estimate the 95% CIs are the original and replication sample sizes noted on the right as nO and nR.
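
The interval estimates and the group comparison in this caption can be reproduced in outline with an exact binomial (Clopper–Pearson) confidence interval and Fisher's exact test. The counts below are taken from the caption itself, but the choice of interval method is an assumption on my part:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.proportion import proportion_confint

# 14 of 26 replications showed a significant effect in the original direction.
low, high = proportion_confint(count=14, nobs=26, alpha=0.05, method="beta")  # Clopper-Pearson
print(f"replication rate 95% CI: ({low:.3f}, {high:.3f})")  # compare (33.4%, 73.4%) above

# Top-12 group: 10 replicated, 2 did not; bottom-12 group: 4 replicated, 8 did not.
odds_ratio, p_value = fisher_exact([[10, 2], [4, 8]], alternative="two-sided")
print(f"Fisher's exact test, two-sided p = {p_value:.3f}")  # compare P = 0.036 above
```
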
Relationship between estimated original and replication effect sizes
Plotted are the estimated original and replication effect sizes for each of the 26 replication studies (the estimated effect sizes of both the original and replication studies are standardized to Cohen’s d units). The 95% CIs for the original and replication effect size estimates are illustrated in Fig. 2 and tabulated in Supplementary Table 3. The mean estimated effect size of the 26 replication studies is 0.253 (s.d. = 0.357) compared with 0.563 (s.d. = 0.426) for the original studies, resulting in a relative estimated average effect size of 45.0%, confirming our second primary hypothesis (Wilcoxon signed-rank test, z = 4.203, P < 0.001; n = 26, two-sided test). The estimated relative effect size of the 13 replications that have been successfully replicated according to the statistical significance indicator is 69.5%, and the estimated relative effect size of the 13 studies that did not replicate is 3.2%. The box plots show the median, the interquartile range, and the 5th and 95th percentile of the effect size estimates in the 26 original studies and the 26 replication studies.
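
The "relative estimated average effect size" above is the ratio of the mean replication effect to the mean original effect (0.253 / 0.563 ≈ 45.0%), and the paired comparison is a Wilcoxon signed-rank test. A sketch with placeholder values standing in for the 26 pairs of Cohen's d estimates:

```python
import numpy as np
from scipy.stats import wilcoxon

d_original = np.array([0.9, 0.7, 0.6, 0.5, 0.4, 0.3])      # hypothetical originals
d_replication = np.array([0.5, 0.4, 0.1, 0.3, 0.1, 0.2])   # hypothetical replications

relative_effect = d_replication.mean() / d_original.mean()
stat, p = wilcoxon(d_replication, d_original)               # paired, two-sided by default
print(f"relative effect size = {relative_effect:.1%}, Wilcoxon signed-rank p = {p:.3f}")
```
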
Replication results based on the small-telescopes approach (a secondary replication indicator)
Plotted are the 90% CIs of replication effect sizes in relation to small-effect sizes as defined by the small-telescopes approach¹¹² (the effect size that the original study would have had 33% power to detect). Studies within the three panels (top-12, random, bottom-12) are sorted based on the decision market prices as in Fig. 1. A study is defined as failing to replicate if the 90% CI is below the small effect (with ‘ub’ denoting the upper bound of the 90% CI). According to the small-telescopes approach, 15 out of 26 studies (57.7%; 95% CI (36.9%, 76.6%)) replicate. The error bars denote the 90% CIs of the estimated replication effect sizes. The numbers of observations used to estimate the 90% CIs are the replication sample sizes noted on the right as nR.
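
A sketch of the small-telescopes check described above: compute the effect size d33% that the original study had 33% power to detect given its sample size, then ask whether the upper bound of the replication's 90% CI falls below it. The sample size and CI bound below are hypothetical, and a two-sample t-test design is assumed:

```python
from statsmodels.stats.power import TTestIndPower

n_original_per_arm = 50        # hypothetical per-arm sample size of the original study
d_33 = TTestIndPower().solve_power(effect_size=None, nobs1=n_original_per_arm,
                                   alpha=0.05, power=0.33, ratio=1.0,
                                   alternative="two-sided")

replication_ci_upper = 0.18    # hypothetical upper bound ('ub') of the replication's 90% CI
print(f"d33% = {d_33:.3f}; fails to replicate: {replication_ci_upper < d_33}")
```
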
Replication results based on Bayes factors (secondary replication indicators)
The figure plots the one-sided default Bayes factor (BF+0) and the replication Bayes factor (BFR0) for the 26 replications¹¹³. BF+0 > 1 favours the hypothesis of an effect in the direction of the original paper, whereas BF+0 < 1 favours the null hypothesis of no effect. BFR0 quantifies the additional evidence provided by the replication results on top of the original evidence. BFR0 > 1 indicates additional evidence in favour of the alternative over the null, whereas BFR0 < 1 indicates additional evidence for the null instead. The evidence categories proposed by Jeffreys¹¹⁵ are also shown (from extreme support for the null hypothesis to extreme support for the original hypothesis). Studies within the three panels (top-12, random, bottom-12) are sorted based on the decision market prices as in Fig. 1. The BF+0 is above 1 for all 14 replication studies that successfully replicated according to the statistical significance indicator and below 1 for all 12 replication studies that failed to replicate according to the statistical significance indicator. The BFR0 is above 1 for 13 of the 14 replication studies that replicated according to the statistical significance indicator and below 1 for Cooney et al.⁵⁶ whose estimated relative effect size of 0.36 is the lowest among these 14 studies; the BFR0 is below 1 for all of the 12 replication studies that failed to replicate according to the statistical significance indicator. The numbers of observations used to estimate BF+0 and BFR0 are the original and replication sample sizes noted on the right as nO and nR.
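
For orientation, here is a minimal implementation of a default (JZS) Bayes factor in the spirit of Rouder et al. (2009). This sketch computes the two-sided BF10 with a medium Cauchy prior (scale √2/2), whereas the caption reports one-sided (BF+0) and replication (BFR0) Bayes factors, so the numbers will differ; the t statistic and sample sizes are hypothetical:

```python
import numpy as np
from scipy.integrate import quad

def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
    """Two-sided default (JZS) Bayes factor for an independent-samples t statistic."""
    n_eff = n1 * n2 / (n1 + n2)   # effective sample size
    nu = n1 + n2 - 2              # degrees of freedom

    def integrand(g):
        scale = 1 + n_eff * (r ** 2) * g
        return (scale ** -0.5
                * (1 + t ** 2 / (scale * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    marginal_h1, _ = quad(integrand, 0, np.inf)   # marginal likelihood under H1
    marginal_h0 = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
    return marginal_h1 / marginal_h0

print(jzs_bf10(t=2.5, n1=300, n2=300))  # hypothetical replication t-test
```
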


Examining the replicability of online experiments selected by a decision market
  • Article
  • Full-text available

November 2024 · 261 Reads · 6 Citations

Nature Human Behaviour

Here we test the feasibility of using decision markets to select studies for replication and provide evidence about the replicability of online experiments. Social scientists (n = 162) traded on the outcome of close replications of 41 systematically selected MTurk social science experiments published in PNAS 2015–2018, knowing that the 12 studies with the lowest and the 12 with the highest final market prices would be selected for replication, along with 2 randomly selected studies. The replication rate, based on the statistical significance indicator, was 83% for the top-12 and 33% for the bottom-12 group. Overall, 54% of the studies were successfully replicated, with replication effect size estimates averaging 45% of the original effect size estimates. The replication rate varied between 54% and 62% for alternative replication indicators. The observed replicability of MTurk experiments is comparable to that of previous systematic replication projects involving laboratory experiments.


Meta-analysis of Empirical Estimates of Loss Aversion

June 2024 · 183 Reads · 48 Citations

Journal of Economic Literature

Loss aversion is one of the most widely used concepts in behavioral economics. We conduct a large-scale, interdisciplinary meta-analysis to systematically accumulate knowledge from numerous empirical estimates of the loss aversion coefficient reported from 1992 to 2017. We examine 607 empirical estimates of loss aversion from 150 articles in economics, psychology, neuroscience, and several other disciplines. Our analysis indicates that the mean loss aversion coefficient is 1.955 with a 95 percent probability that the true value falls in the interval [1.820, 2.102]. We record several observable characteristics of the study designs. Few characteristics are substantially correlated with differences in the mean estimates. (JEL D81, D91)
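
To illustrate how an estimate like the meta-analytic mean of roughly 1.955 is used, here is a minimal sketch of a Kahneman–Tversky style value function with loss aversion. The curvature parameters and the example gamble are hypothetical choices for illustration, not part of the paper:

```python
def pt_value(x, lam=1.955, alpha=0.88, beta=0.88):
    """Prospect-theory value of outcome x relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A 50/50 gamble to win or lose $100: with lam > 1 its value is negative,
# so a loss-averse decision maker turns down this actuarially fair bet.
value = 0.5 * pt_value(100.0) + 0.5 * pt_value(-100.0)
print(round(value, 2))
```
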



Approximate Expected Utility Rationalization

April 2023 · 25 Reads · 6 Citations

Journal of the European Economic Association

We propose a new measure of deviations from expected utility theory. For any positive number e, we give a characterization of the datasets with a rationalization that is within e (in beliefs, utility, or perceived prices) of expected utility theory, under the assumption of risk aversion. The number e can then be used as a measure of how far the data are from expected utility theory. We apply our methodology to data from three large-scale experiments. Many subjects in these experiments are consistent with utility maximization, but not with expected utility maximization. Our measure of distance to expected utility is correlated with the subjects' demographic characteristics.


Approximate Expected Utility Rationalization

February 2021 · 9 Reads

We propose a new measure of deviations from expected utility theory. For any positive number e, we give a characterization of the datasets with a rationalization that is within e (in beliefs, utility, or perceived prices) of expected utility theory. The number e can then be used as a measure of how far the data are from expected utility theory. We apply our methodology to data from three large-scale experiments. Many subjects in those experiments are consistent with utility maximization, but not with expected utility maximization. Our measure of distance to expected utility is correlated with subjects' demographic characteristics.



Citations (19)


... Our study also contributes to improving the replicability and generalizability of LLM experiments. Several studies have assessed the replicability of experimental studies with human subjects [14,15,48]. The findings indicate that experiments in economics exhibit a higher rate of replicability compared to those in psychological sciences [14], which may be attributed to the rigorous methodological standards and established norms developed over decades within experimental economics [27,14,39,68]. ...

Reference:

When Experimental Economics Meets Large Language Models: Tactics with Evidence
Examining the replicability of online experiments selected by a decision market

Nature Human Behaviour

... This loss aversion is operationalized through a parameter in the utility function. In some classical parametric models, losses weigh roughly twice as heavily as gains of equal size (Brown et al., 2024). The so-called endowment effect can be explained by loss aversion: at the individual level, the willingness to pay for a good is usually lower than the stated minimum selling price for that good once it is already owned (Kahneman et al., 1991). Both aspects play a role in economic policy. ...

Meta-analysis of Empirical Estimates of Loss Aversion
  • Citing Article
  • June 2024

Journal of Economic Literature

... While GPT exhibits a high level of rationality, it is possible that its decisions are simply clustered at the corners or in certain areas. To address this concern, we examine whether GPT behavior respects the property of downward-sloping demand, a fundamental principle in the analysis of consumer behavior whereby the demand for a commodity decreases with its price (21,25,60). ...

Approximate Expected Utility Rationalization
  • Citing Article
  • April 2023

Journal of the European Economic Association

... Thus, some intertemporal choice experiments use less-fungible rewards that will be consumed immediately, like snacks (Read et al., 1998), and real effort tasks (Augenblick et al., 2015; Carvalho et al., 2016; Augenblick, 2018; Augenblick & Rabin, 2019; Le Yaouanq & Schwardmann, 2019; Bisin & Hyndman, 2020; Breig et al., 2020; Hardisty & Weber, 2020; Fedyk, 2021; Zou, 2021). Most papers in this literature find that subjects are present biased on average, a finding less pronounced for monetary rewards (Augenblick et al., 2015); see also meta-studies by Imai et al. (2020) and Cheung et al. (2021). Our design uses a real effort task from Augenblick et al. (2015). ...
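
The present bias discussed in this excerpt is usually captured by quasi-hyperbolic (β–δ) discounting; a minimal sketch with hypothetical parameter values:

```python
def discount_weight(t, beta=0.9, delta=0.99):
    """Quasi-hyperbolic discount weight on a reward t periods ahead (beta < 1 = present bias)."""
    return 1.0 if t == 0 else beta * delta ** t

# The drop from today to tomorrow (1 -> beta*delta) exceeds the drop between any
# two adjacent future periods (a factor of delta), which is the present bias.
print(discount_weight(0), discount_weight(1), discount_weight(2))
```
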

Meta-Analysis of Present-Bias Estimation Using Convex Time Budgets
  • Citing Article
  • January 2019

SSRN Electronic Journal

... As the safe option had a lower expected value ($2.00), it would be rational to bet on every trial. However, given the known risk-averse preferences of human agents in such settings [28,29], we anticipated that participants would bet less frequently than would be rational according to this strategy. ...
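
A minimal illustration of the risk-aversion point made in this excerpt; the excerpt does not spell out the bet's payoffs, so the gamble and the square-root (CRRA) utility below are hypothetical:

```python
from math import sqrt

p_win, win, lose = 0.5, 5.00, 0.00   # hypothetical bet: expected value $2.50 > $2.00
safe = 2.00

eu_bet = p_win * sqrt(win) + (1 - p_win) * sqrt(lose)   # ~ 1.118
eu_safe = sqrt(safe)                                     # ~ 1.414
print("take the safe option" if eu_safe > eu_bet else "bet")
```
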

Meta-Analysis of Empirical Estimates of Loss-Aversion
  • Citing Article
  • January 2021

SSRN Electronic Journal

... Many lab experiments induce an objective probability, for example those by Choi et al. (2007, 2014). There are, however, experimental designs with uncertainty, such as those by Hey & Pace (2014) and Echenique et al. (2019a). ...

Decision Making under Uncertainty: An Experimental Study in Market Settings
  • Citing Article
  • November 2019

... While the experimental estimates of λ are typically in the range 1.8-2.3 (see [34] for a recent meta-analysis) and α is typically estimated to be in the neighborhood of 0.9, Table 2 reports the optimal asset allocation for a much wider range of parameters. This allows us to examine the robustness of the results to the parameter values, and to address the issue of possible heterogeneity among PT investors (as the parameter estimates are only population averages). ...

Meta-Analysis of Empirical Estimates of Loss-Aversion
  • Citing Preprint
  • December 2020

... This suggests that the exponential function can describe data for individual rats as well as the hyperbolic function for the majority of the rats [49]. Studies of human delay discounting have also reported that the best-fitting function often differs between individuals [42,43,53-55], which led to the inclusion of the exponential function in our study. Using this dataset, we adopted three quantification methods to use in GWAS: the area under the curve (AUC), which is function-free, as well as the slope of the hyperbolic function (hyperbolic k) and the slope of the exponential function (exponential k). ...
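
A sketch of the three quantities named in this excerpt, computed on hypothetical indifference points: the model-free area under the curve (AUC), the hyperbolic slope k, and the exponential slope k:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

delays = np.array([0.0, 7.0, 30.0, 90.0, 180.0])   # hypothetical delays (days)
values = np.array([1.0, 0.85, 0.60, 0.40, 0.30])   # hypothetical normalized values

def hyperbolic(d, k):
    return 1.0 / (1.0 + k * d)      # V = 1 / (1 + kD)

def exponential(d, k):
    return np.exp(-k * d)           # V = exp(-kD)

(k_hyp,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
(k_exp,), _ = curve_fit(exponential, delays, values, p0=[0.01])
auc = trapezoid(values, delays) / delays[-1]        # area under the curve, normalized

print(f"hyperbolic k = {k_hyp:.4f}, exponential k = {k_exp:.4f}, AUC = {auc:.3f}")
```
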

Testable Implications of Models of Intertemporal Choice: Exponential Discounting and Its Generalizations
  • Citing Article
  • November 2020

American Economic Journal: Microeconomics

... Two important mechanisms, shown in previous research on gift-exchange experiments, are trust and positive reciprocity: Principals offer wages above Nash-equilibrium wages (under money maximization; see Section 2), and agents respond with effort levels that increase in the wage offered ('gift exchange'; e.g., Fehr et al., 1993, 1997; Hannan, Kagel & Moser, 2002; see also Cooper & Kagel, 2016). Agents may also be motivated by negative reciprocity, that is, a willingness to punish principals if the OC is unfavorable for the agent (see also the large literature on rejections of unfair offers in ultimatum games, e.g., Güth & Kocher, 2014; Lin et al., 2020). Negative reciprocity may result in rejecting the contract or in choosing minimal effort after the contract has been accepted. ...

Evidence of general economic principles of bargaining and trade from 2,000 classroom experiments

Nature Human Behaviour

... The empirical literature regarding behavioral biases in energy efficiency decision-making [...]
⁴ Frederick et al. (2002) includes a critical review of the history and models of time discounting, including time-consistent utility discounting models as well as time preferences and (quasi-)hyperbolic discounting models.
⁵ See the reviews by Frederick et al. (2002) and DellaVigna (2009) for empirical estimates of present bias in various circumstances, and Imai et al. (2021) and Cheung et al. (2021) for recent meta-studies of papers reporting present-bias estimates.
⁶ Kang (2015) shows that improvements in the Pareto criterion are also welfare-improving from the long-run perspective. ...

Meta-Analysis of Present-Bias Estimation Using Convex Time Budgets

The Economic Journal