Article

A rejoinder: ‘How can experiments play a greater role in public policy? Twelve proposals from an economic model of scaling’

Article
Al-Ubaydli et al. set out a valuable prospectus, but they operate with too simple a view of the policymaking process. Politicians and bureaucrats have objectives beyond maximizing human welfare: the former wish to endorse policies that get them re-elected; the latter need to manage complex bureaucracies and advance their careers. Both need to be persuaded that adopting trials is in their long-term interests. Because Al-Ubaydli et al.'s proposals may increase the costs of doing trials, the demand for robust evidence might fall rather than rise.
Article
Al-Ubaydli et al. point out that many research findings experience a reduction in the magnitude of treatment effects when scaled, and they make a number of proposals to improve the scalability of pilot project findings. While we agree that scalability is important for policy relevance, we argue that non-scalability does not always render a research finding useless in practice. Three practices can allow us to customize successful policy prescriptions to specific real-world settings: (1) ensuring that the intervention is appropriate for the context; (2) understanding heterogeneity in treatment effects; and (3) resisting the temptation to try multiple interventions simultaneously.
Article
For educational researchers, Al-Ubaydli et al. raise a crucial question: How can the science of scaling experimental innovations contribute to school improvement? By assessing how particular innovative programs work, why, for whom and under what conditions, experimenters test theories about how children and youth learn, about how adults can collaborate to create the required learning opportunities, and about how policy can supply the incentives and resources to support such effective collaboration. My focus in this response article thus shifts from the perspective of innovators who hope to scale their interventions to the perspective of practitioners who face the challenge of adopting, adapting or borrowing ideas from experimental studies and harmonizing those ideas with expert clinical judgment.
Article
I highlight two important factors particular to less-developed countries that can bias evidence generation and contribute to the ‘voltage drop’ in programme benefits, moving from field research experiments to policy implementation at scale. The first is the non-linear increase in information processing and coordination costs associated with upscaling in less-developed countries, given limited state capacity and rigid organizational hierarchies. The second is political bias in the choice of programmes considered for rigorous evaluation itself, resulting in distorted evidence and policy choice. These two factors raise considerations that complement the economics-based approach outlined by Al-Ubaydli et al. in the quest for more rigorous, evidence-based policy.
Article
Policymakers are increasingly turning to insights gained from the experimental method as a means to inform large-scale public policies. Critics view this increased usage as premature, pointing to the fact that many experimentally tested programs fail to deliver their promise at scale. Under this view, the experimental approach drives too much public policy. Yet, if policymakers could be more confident that the original research findings would be delivered at scale, even the staunchest critics would carve out a larger role for experiments to inform policy. Leveraging the economic framework of Al-Ubaydli et al. (2019), we put forward 12 simple proposals, spanning researchers, policymakers, funders and stakeholders, which together tackle the most vexing scalability threats. The framework highlights that only after we deepen our understanding of the scale-up problem will we be on solid ground to argue that scientific experiments should hold a more prominent place in the policymaker's quiver.
Article
The current state of the art in field experiments does not give me any confidence that we have anything worth scaling, assuming we really care about the expected welfare of those about to receive the intervention in question. At the very least, we should be honest and explicit that strong priors about the welfare effects of changes in averages of observables are needed to warrant scaling. What we need is a healthy dose of theory and the implied econometrics.
Article
Al-Ubaydli et al. provide a far-reaching, insightful and directly actionable analysis of how social-behavioral research may exert more influence over the development and implementation of public policy. Their paper offers a sophisticated understanding of the ‘scale-up effect’, or factors that influence the extent to which positive experimental effects replicate as an intervention is implemented more broadly. Using economic principles, models and analyses, they offer 12 proposals for improving the process of scaling up effective and policy-relevant interventions. The current paper outlines how their proposals share a number of complementary features with behavioral psychology and applied behavior analysis. This response considers three possible points of intersection: (1) perspectives on the importance and challenges of studying and controlling our own behavior; (2) approaches to determining the social value of intervention outcomes and the procedures for achieving them; and (3) recommendations for deploying meaningful, common measures across phases of research.
Article
The economic model for scaling described by Al-Ubaydli and colleagues offers recommendations to policymakers who decide whether or not to implement evidence-based programs. The core economic model does not currently acknowledge the broader context of policy decision-making and therefore may fail to achieve its objectives. First, the model focuses primarily on the generation and use of available research in the decision on whether to scale a program, yet research studying the use of evidence in policymaking points to a complex set of factors beyond the strength of the evidence, such as leadership, relationships, timing and financial resources, that contribute to decisions to scale a program. Second, there is already a robust evidence-based policy movement at the federal, state and local levels; the economic model should leverage this movement rather than offer recommendations that might stall or redirect it. The economic model can push the field to strengthen the available evidence while providing recommendations on selecting models to scale within the currently available evidence. This commentary finishes with suggestions for moving forward.
Article
Very often, significantly smaller benefits are observed in final policy outcomes than are indicated by initial research discoveries. Al-Ubaydli et al. identify a poor understanding of the ‘science of scaling’ as the underlying cause of this discrepancy and propose a framework to increase that understanding. We build on this framework by making six specific suggestions capturing three key ideas. First, researchers need to move away from their preoccupation with general theoretical models and focus on subject-specific theories of intervention, leading to individualized treatments. Second, there should be greater collaboration between researchers and policymakers, as well as more transparency in reporting findings, to ensure that the research environment is more representative of the policy environment. Third, researchers should recognize that policymakers do not always maximize social welfare; they may have their own short-term incentives. Researchers must therefore consider policymakers’ short-term incentives when designing interventions in order to increase the chances of a research intervention becoming policy.
Article
What was once broadly viewed as an impossibility—learning from experimental data in economics—has now become commonplace. Governmental bodies, think tanks, and corporations around the world employ teams of experimental researchers to answer their most pressing questions. For their part, in the past two decades academics have begun to more actively partner with organizations to generate data via field experimentation. Although this revolution in evidence‐based approaches has served to deepen the economic science, recently a credibility crisis has caused even the most ardent experimental proponents to pause. This study takes a step back from the burgeoning experimental literature and introduces 12 actions that might help to alleviate this credibility crisis and raise experimental economics to an even higher level. In this way, we view our “12 action wish list” as discussion points to enrich the field.
Article
Through good and bad economic times, charitable gifts have continued to roll in largely unabated over the past half century. In a typical year, total charitable gifts of money now exceed 2 percent of gross domestic product. Moreover, charitable giving has nearly doubled in real terms since 1990, and the number of nonprofit organizations registered with the IRS grew by nearly 60 percent from 1995 to 2005. This study provides a perspective on the economic interplay of three types of actors: donors, charitable organizations, and government. How much is given annually? Who gives? Who are the recipients of these gifts? Would changes in the tax treatment of charitable contributions lead to more or less giving? How can charitable institutions design mechanisms to generate the greatest level of gifts? What about the effectiveness of seed money and matching grants?
Czibor, E., D. Jimenez-Gomez and J. A. List (2019), 'The Dozen Things Experimental Economists Should Do (More of)', Southern Economic Journal, 86(2): 371-432.