PSYCHOLOGICAL AND COGNITIVE SCIENCES
The effectiveness of nudging: A meta-analysis of
choice architecture interventions across behavioral
domains
Stephanie Mertens(a,1), Mario Herberz(a,b), Ulf J. J. Hahnel(a,b), and Tobias Brosch(a,b,1)
(a) Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland; (b) Department of Psychology, University of Geneva, CH-1205 Geneva, Switzerland
Edited by Susan Fiske, Psychology Department, Princeton University, Princeton, NJ; received April 27, 2021; accepted November 24, 2021
Over the past decade, choice architecture interventions or so-
called nudges have received widespread attention from both
researchers and policy makers. Built on insights from the be-
havioral sciences, this class of behavioral interventions focuses
on the design of choice environments that facilitate personally
and socially desirable decisions without restricting people in their
freedom of choice. Drawing on more than 200 studies reporting
over 440 effect sizes (n = 2,148,439), we present a comprehensive
analysis of the effectiveness of choice architecture interventions
across techniques, behavioral domains, and contextual study char-
acteristics. Our results show that choice architecture interventions
overall promote behavior change with a small to medium effect
size of Cohen's d = 0.43 (95% CI [0.38, 0.48]). In addition, we
find that the effectiveness of choice architecture interventions
varies significantly as a function of technique and domain. Across
behavioral domains, interventions that target the organization
and structure of choice alternatives (decision structure) consis-
tently outperform interventions that focus on the description of
alternatives (decision information) or the reinforcement of behav-
ioral intentions (decision assistance). Food choices are particularly
responsive to choice architecture interventions, with effect sizes
up to 2.5 times larger than those in other behavioral domains.
Overall, choice architecture interventions affect behavior rela-
tively independently of contextual study characteristics such as
the geographical location or the target population of the inter-
vention. Our analysis further reveals a moderate publication bias
toward positive results in the literature. We end with a discussion
of the implications of our findings for theory and behaviorally
informed policy making.
choice architecture | nudge | behavioral insights | behavior change | meta-analysis
Many of today’s most pressing societal challenges such as the
successful navigation of the COVID-19 pandemic or the
mitigation of climate change call for substantial changes in in-
dividuals’ behavior. Whereas microeconomic and psychological
approaches based on rational agent models have traditionally
dominated the discussion about how to achieve behavior change,
the release of Thaler and Sunstein’s book Nudge—Improving
Decisions about Health, Wealth, and Happiness (1) widely
introduced a complementary intervention approach known as
choice architecture or nudging, which aims to change behavior by
(re)designing the physical, social, or psychological environment
in which people make decisions while preserving their freedom
of choice (2). Since the publication of the first edition of Thaler
and Sunstein (1) in 2008, choice architecture interventions have
seen an immense increase in popularity (Fig. 1). However, little
is known about their overall effectiveness and the conditions
under which they facilitate behavior change—a gap the present
meta-analysis aims to address by analyzing the effects of the most
widely used choice architecture techniques across key behavioral
domains and contextual study characteristics.
Traditional microeconomic intervention approaches are often
built around a rational agent model of decision making, which
assumes that people base their decisions on known and consistent
preferences that aim to maximize the utility, or value, of their
actions. In determining their preferences, people are thought
to engage in an exhaustive analysis of the probabilities and
potential costs and benefits of all available options to identify
which option provides the highest expected utility and is thus
the most favorable (3). Interventions aiming to change behavior
are accordingly designed to increase the utility of the desired
option, either by educating people about the existing costs and
benefits of a certain behavior or by creating entirely new in-
centive structures by means of subsidies, tax credits, fines, or
similar economic measures. Likewise, traditional psychological
intervention approaches explain behavior as the result of a delib-
erate decision making process that weighs and integrates internal
representations of people’s belief structures, values, attitudes,
and norms (4, 5). Interventions accordingly focus on measures
such as information campaigns that aim to shift behavior through
changes in people’s beliefs or attitudes (6).
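To fix ideas, the expected utility comparison described above reduces to a simple probability-weighted sum; the sketch below is a schematic illustration with made-up numbers, not part of the analyses reported in this paper.

```r
# Schematic expected-utility comparison under the rational agent model:
# EU = sum_i p_i * u_i over an option's possible outcomes.
eu <- function(p, u) sum(p * u)

option_a <- eu(p = c(0.8, 0.2), u = c(10, -5))    # 0.8*10 + 0.2*(-5) = 7
option_b <- eu(p = c(0.5, 0.5), u = c(20, -10))   # 0.5*20 + 0.5*(-10) = 5

which.max(c(A = option_a, B = option_b))  # the rational agent picks A
```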
Over the past years, intervention approaches informed by re-
search in the behavioral sciences have emerged as a complement
to rational agent-based approaches. They draw on an alternative
model of decision making which acknowledges that people are
bounded in their ability to make rational decisions. Rooted in
dual-process theories of cognition and information processing
Signicance
Changing individuals’ behavior is key to tackling some of
today’s most pressing societal challenges such as the COVID-19
pandemic or climate change. Choice architecture interventions
aim to nudge people toward personally and socially desir-
able behavior through the design of choice environments.
Although increasingly popular, little is known about the over-
all effectiveness of choice architecture interventions and the
conditions under which they facilitate behavior change. Here
we quantitatively review over a decade of research, showing
that choice architecture interventions successfully promote
behavior change across key behavioral domains, populations,
and locations. Our findings offer insights into the effects of
choice architecture and provide guidelines for behaviorally
informed policy making.
Author contributions: S.M., M.H., U.J.J.H., and T.B. designed research; S.M. and M.H.
performed research; S.M. analyzed data; and S.M., M.H., U.J.J.H., and T.B. wrote the
paper.
The authors declare no competing interest.
This article is a PNAS Direct Submission.
This open access article is distributed under Creative Commons Attribution-
NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
1To whom correspondence may be addressed. Email: stephanie.mertens@unige.ch or
tobias.brosch@unige.ch.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2107346118/-/DCSupplemental.
Published December 30, 2021.
PNAS 2022 Vol. 119 No. 1 e2107346118 https://doi.org/10.1073/pnas.2107346118
Fig. 1. Number of citations of Thaler and Sunstein (1) between 2008 and 2020. Counts are based on a citation search in Web of Science.
Rooted in dual-process theories of cognition and information processing (7), this model recognizes that human behavior is not always
driven by the elaborate and rational thought processes assumed
by the rational agent model but instead often relies on automatic
and computationally less intensive forms of decision making that
allow people to navigate the demands of everyday life in the
face of limited time, available information, and computational
power (8, 9). Boundedly rational decision makers often construct
their preferences ad hoc based on cognitive shortcuts and biases,
which makes them susceptible to supposedly irrational contextual
influences, such as the way in which information is presented or
structured (10–12). This susceptibility to contextual factors, while
seemingly detrimental to decision making, has been identified as
a promising lever for behavior change because it offers the op-
portunity to influence people’s decisions through simple changes
in the so-called choice architecture that defines the physical,
social, and psychological context in which decisions are made
(2). Rather than relying on education or significant economic
incentives, choice architecture interventions aim to guide people
toward personally and socially desirable behavior by designing
environments that anticipate and integrate people’s limitations in
decision making to facilitate access to decision-relevant informa-
tion, support the evaluation and comparison of available choice
alternatives, or reinforce previously formed behavioral intentions
(13) (see Table 1 for an overview of intervention techniques based on choice architecture*).

*While alternative classification schemes of choice architecture interventions can be found in the literature, the taxonomy used in the present meta-analysis distinguishes itself through its comprehensiveness, which makes it a highly reliable categorization tool and allows for inferences of both theoretical and practical relevance.
Addressing Psychological Barriers through Choice
Architecture
Unlike the assumption of the rational agent model, people rarely
have access to all relevant information when making a decision.
Instead, they tend to base their decisions on information that is
directly available to them at the moment of the decision (14, 15)
and to discount or even ignore information that is too complex or
meaningless to them (16, 17). Choice architecture interventions
based on the provision of decision information aim to facilitate
access to decision-relevant information by increasing its availabil-
ity, comprehensibility, and/or personal relevance to the decision
maker. One way to achieve this is to provide social reference
information that reduces the ambiguity of a situation and helps
overcome uncertainty about appropriate behavioral responses.
In a natural field experiment with more than 600,000 US house-
holds, for instance, Allcott (18) demonstrated the effectiveness
of descriptive social norms in promoting energy conservation.
Specifically, the study showed that households which regularly
received a letter comparing their own energy consumption to that
of similar neighbors reduced their consumption by an average
of 2%. This effect was estimated to be equivalent to that of a
short-term electricity price increase of 11 to 20%. Other exam-
ples of decision information interventions include measures that
increase the visibility of otherwise covert information (e.g., feed-
back devices and nutrition labels; refs. 19, 20), or that translate
existing descriptions of choice options into more comprehensible
or relevant information (e.g., through simplifying or reframing
information; ref. 21).
Not only do people have limited access to decision-relevant
information, but they often refrain from engaging in the elabo-
rate cost-benefit analyses assumed by the rational agent model to
evaluate and compare the expected utility of all choice options.
Instead, they use contextual cues about the way in which choice
alternatives are organized and structured within the decision
environment to inform their behavior. Choice architecture in-
terventions built around changes in the decision structure uti-
lize this context dependency to influence behavior through the
arrangement of choice alternatives or the format of decision
making. One of the most prominent examples of this intervention
approach is choice default, or the preselection of an option that
is imposed if no active choice is made. In a study comparing
organ donation policies across European countries, Johnson and
Goldstein (22) demonstrated the impact of defaults on even
highly consequential decisions, showing that in countries with
presumed consent laws, which by default register individuals
as organ donors, the rate of donor registrations was nearly 60
percentage points higher than in countries with explicit consent
laws, which require individuals to formally agree to becoming an
organ donor.

Table 1. Taxonomy of choice architecture categories and intervention techniques

Psychological barrier: Limited access to decision-relevant information
  Intervention category: Decision information (increase the availability, comprehensibility, and/or personal relevance of information)
  Intervention techniques:
  - Translate information: adapt attributes to facilitate processing of already available information and/or shift decision maker's perspective
  - Make information visible: provide access to relevant information
  - Provide social reference point: provide social normative information to reduce situational ambiguity and behavioral uncertainty

Psychological barrier: Limited capacity to evaluate and compare choice options
  Intervention category: Decision structure (alter the utility of choice options through their arrangement in the decision environment or the format of decision making)
  Intervention techniques:
  - Change choice defaults: set no-action default or prompt active choice to address behavioral inertia, loss aversion, and/or perceived endorsement
  - Change option-related effort: adjust physical or financial effort to remove friction from desirable choice option
  - Change range or composition of options: adapt categories or grouping of choice options to facilitate evaluation
  - Change option consequences: adapt social consequences or microincentives to address present bias, bias in probability weighting, and/or loss aversion

Psychological barrier: Limited attention and self-control
  Intervention category: Decision assistance (facilitate self-regulation)
  Intervention techniques:
  - Provide reminders: increase the attentional salience of desirable behavior to overcome inattention due to information overload
  - Facilitate commitment: encourage self or public commitment to counteract failures of self-control

Other examples of decision structure interventions
include changes in the effort related to choosing an option (23),
the range or composition of options (24), and the consequences
attached to options (25).
Even if people make a deliberate and potentially rational
decision to change their behavior, limited attentional capacities
and a lack of self-control may prevent this decision from actually
translating into the desired actions, a phenomenon described as
the intention–behavior gap (26). Choice architecture interven-
tions that provide measures of decision assistance aim to bridge
the intention–behavior gap by reinforcing self-regulation. One example of this intervention approach is the commitment device, which is designed to strengthen self-control by removing psy-
chological barriers such as procrastination and intertemporal
discounting that often stand in the way of successful behavior
change. Thaler and Benartzi (27) demonstrated the effective-
ness of such commitment devices in a large-scale field study
of the Save More Tomorrow program, showing that employees
increased their average saving rates from 3.5 to 13.6% when
committing in advance to allocating parts of their future salary
increases toward retirement savings. If applied across the United
States, this program was estimated to increase the total of annual
retirement contributions by approximately $25 billion for each
1% increase in saving rates. Other examples of decision assis-
tance interventions are reminders, which affect decision making
by increasing the salience of the intended behavior (28).
The Present Meta-analysis
Despite the growing interest in choice architecture, only a few
attempts have been made to quantitatively integrate the empir-
ical evidence on its effectiveness as a behavior change tool (29–
32). Previous studies have mostly been restricted to the analysis
of a single choice architecture technique (33–35) or a specific
behavioral domain (36–39), leaving important questions unan-
swered, including how effective choice architecture interventions
overall are in changing behavior and whether there are systematic
differences across choice architecture techniques and behavioral
domains that so far may have remained undetected and that may
offer new insights into the psychological mechanisms that drive
choice architecture interventions.
The aim of the present meta-analysis was to address these
questions by first quantifying the overall effect of choice archi-
tecture interventions on behavior and then providing a systematic
comparison of choice architecture interventions across different
techniques, behavioral domains, and contextual study character-
istics to answer 1) whether some choice architecture techniques
are more effective in changing behavior than others, 2) whether
some behavioral domains are more receptive to the effects of
choice architecture interventions than others, 3) whether choice
architecture techniques differ in their effectiveness across vary-
ing behavioral domains, and finally, 4) whether the effectiveness
of choice architecture interventions is impacted by contextual
study characteristics such as the location or target population
of the intervention. Drawing on an exhaustive literature search
that yielded more than 200 published and unpublished studies,
this comprehensive analysis presents important insights into the
effects and potential boundary conditions of choice architecture
interventions and provides an evidence-based guideline for se-
lecting behaviorally informed intervention measures.
Results
Effect Size of Choice Architecture. Our meta-analysis of 447 effect sizes from 212 publications (n = 2,148,439) revealed a statistically significant effect of choice architecture interventions on behavior (Cohen's d = 0.43, 95% CI [0.38, 0.48], t(333) = 16.51, P < 0.001) (Fig. 2). Using conventional criteria, this effect can be classified as being of small to medium size (40). The effect size was reliable across several robustness checks, including the removal of influential outliers, which marginally decreased the overall size of the effect but did not change its statistical significance (d = 0.41, 95% CI [0.37, 0.46], t(331) = 17.61, P < 0.001). Additional leave-one-out analyses at the individual effect size level and the publication level found the effect of choice architecture interventions to be robust to the exclusion of any one effect size and publication, with d ranging from 0.42 to 0.44 and all P < 0.001.
The total heterogeneity was estimated to be τ² = 0.16, indicating considerable variability in the effect size of choice architecture interventions. More specifically, the dispersion of effect sizes suggests that while the majority of choice architecture interventions will successfully promote the desired behavior change with a small to large effect size, 15% of interventions are likely to backfire, i.e., reduce or even reverse the desired behavior, with a small to medium effect (95% prediction interval [−0.36, 1.22]) (40–42).

Fig. 2. Forest plot of all effect sizes (k = 447) included in the meta-analysis with their corresponding 95% confidence intervals. Extracted Cohen's d values ranged from −0.69 to 3.08. The proportion of true to total variance was estimated at I² = 99.52%. Model estimate with 95% prediction interval: d = 0.43*** [−0.36, 1.22]. ***P < 0.001.
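As a quick plausibility check, the prediction interval and the roughly 15% backfiring share reported above follow directly from the pooled estimate and the heterogeneity estimate. The sketch below reproduces them under the simplifying assumptions that true effects are normally distributed and that a normal critical value approximates the t-based interval of the three-level model; all input values are taken from the text, and this is not the authors' code.

```r
# Minimal sketch: reproduce the 95% prediction interval and the share of
# backfiring interventions from the reported summary statistics alone.
d    <- 0.43                            # pooled effect size (Cohen's d)
se   <- (0.48 - 0.38) / (2 * 1.96)      # SE recovered from the 95% CI
tau2 <- 0.16                            # estimated heterogeneity (tau^2)

half_width <- 1.96 * sqrt(tau2 + se^2)  # normal approximation
round(c(d - half_width, d + half_width), 2)  # ~ [-0.36, 1.22], as reported

# Share of true effects below zero, i.e., interventions likely to backfire
pnorm(0, mean = d, sd = sqrt(tau2))     # ~ 0.14, consistent with "15%"
```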
Publication Bias. Visual inspection of the relation between effect sizes and their corresponding SEs (Fig. 3) revealed an asymmetric distribution that suggested a one-tailed overrepresentation of positive effect sizes in studies with comparatively low statistical power (43). This finding was formally confirmed by Egger's test (44), which found a positive association between effect sizes and SEs (b = 2.10, 95% CI [1.31, 2.89], t(332) = 5.22, P < 0.001). Together, these results point to a publication bias in the literature that may favor the reporting of successful as opposed to unsuccessful implementations of choice architecture interventions in studies with small sample sizes. Sensitivity analyses imposing a priori weight functions on a simplified random-effects model suggested that this one-tailed publication bias could have potentially affected the estimate of our meta-analytic model (43). Assuming a moderate one-tailed publication bias in the literature attenuated the overall effect size of choice architecture interventions by 22.5% from Cohen's d = 0.40, 95% CI [0.36, 0.44], τ² = 0.16 (SE = 0.01) to d = 0.31, τ² = 0.18. Assuming a severe one-tailed publication bias attenuated the overall effect size even further to d = 0.08, τ² = 0.26; however, this assumption was only partially supported by the funnel plot. Although our general conclusion about the effects of choice architecture interventions on behavior remains the same in light of these findings, the true effect size of interventions is likely to be smaller than estimated by our meta-analytic model due to the overrepresentation of positive effect sizes in our sample.
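For readers who wish to probe such asymmetry themselves, one common adaptation of Egger's test to multilevel data is to add the standard error as a moderator within the meta-analytic model. The sketch below assumes a metafor-style data frame dat with hypothetical columns yi (Cohen's d), vi (sampling variance), publication, and es_id; it illustrates the general approach, not the authors' exact specification.

```r
library(metafor)

# Egger-type asymmetry test in a three-level model: regress effect sizes
# on their standard errors (sqrt(vi)). A positive, significant slope
# indicates overrepresentation of positive effects in low-powered studies.
egger <- rma.mv(yi, vi,
                mods   = ~ sqrt(vi),
                random = ~ 1 | publication/es_id,
                data   = dat)
summary(egger)
```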
Moderator Analyses. Given the high heterogeneity among effect sizes, we next tested the extent to which the effectiveness of choice architecture interventions was moderated by the type of intervention, the behavioral domain in which it was implemented, and contextual study characteristics.
Intervention category and technique. Our first analysis focused on identifying potential differences between the effect sizes of decision information, decision structure, and decision assistance interventions. This analysis found that intervention category indeed moderated the effect of choice architecture interventions on behavior (F(2, 330) = 12.23, P < 0.001). With average effect sizes ranging from d = 0.28 to 0.54, interventions across all three categories were effective in inducing statistically significant behavior change (all P < 0.001; Fig. 4). Planned contrasts between categories, however, revealed that interventions in the decision structure category had a stronger effect on behavior compared to interventions in the decision information (b = 0.19, 95% CI [0.08, 0.31], t(330) = 3.26, P = 0.001) and the decision assistance category (b = 0.26, 95% CI [0.15, 0.36], t(330) = 4.93, P < 0.001). No difference was found in the effectiveness of decision information and decision assistance interventions (b = −0.06, 95% CI [−0.16, 0.04], t(330) = −1.26, P = 0.21). Including intervention category as a moderator in our meta-analytic model marginally reduced the proportion of true to total variability in effect sizes from I² = 99.52% to I² = 99.33% (I²(3) = 87.18%; I²(2) = 12.15%; SI Appendix, Table S3).
To test whether the effect sizes of the three intervention categories adequately represented differences on the underlying level of choice architecture techniques, we reran our analysis with intervention technique rather than category as the key moderator. As illustrated in Fig. 4, each of the nine intervention techniques was effective in inducing behavior change, with Cohen's d ranging from 0.23 to 0.62 (all P < 0.01). Within intervention categories, techniques were largely consistent in their effect sizes. Between categories, however, techniques showed, in part, substantial differences in effect sizes. In line with the previously reported results, techniques within the decision structure category were consistently stronger in their effects on behavior than intervention techniques within the decision information or the decision assistance category.
Fig. 3. Funnel plot displaying each observation as a function of its effect size and SE. In the absence of publication bias, observations should scatter symmetrically around the pooled effect size indicated by the gray vertical line and within the boundaries of the 95% confidence intervals shaded in white. The asymmetric distribution shown here indicates a one-tailed publication bias in the literature that favors the reporting of successful implementations of choice architecture interventions in studies with small sample sizes.
Fig. 4. Forest plot of effect sizes across categories of choice architecture intervention techniques (see Table 1 for a more detailed description of techniques). The position of squares on the x axis indicates the effect size of each respective intervention technique. Bars indicate the 95% confidence intervals of effect sizes. The size of squares is inversely proportional to the SE of effect sizes. Diamond shapes indicate the average effect size and confidence intervals of intervention categories. The solid line represents an effect size of Cohen's d = 0. The dotted line represents the overall effect size of choice architecture interventions, Cohen's d = 0.43, 95% CI [0.38, 0.48]. Identical letter superscripts indicate statistically significant (P < 0.05) pairwise comparisons. (Per-technique estimates are listed in Table 2.)

The observed effect size differences between
the decision information, the decision structure, and the decision
assistance category were thus unlikely to be driven by a single
intervention technique but rather representative of the entire set
of techniques within those categories.
Behavioral domain. Following our analysis of the effectiveness of varying types of choice architecture interventions, we next focused on identifying potential differences among the behavioral domains in which interventions were implemented. As illustrated in Fig. 5, effect sizes varied quite substantially across domains, with Cohen's d ranging from 0.24 to 0.65. Our analysis confirmed that the effectiveness of interventions was moderated by domain, F(5, 327) = 3.64, P = 0.003. Specifically, it showed that choice architecture interventions, while generally effective in inducing behavior change across all six domains, had a particularly strong effect on behavior in the food domain, with d = 0.65 (95% CI [0.47, 0.83]). The smallest effects were observed in the financial domain. With an average intervention effect of d = 0.24 (95% CI [0.14, 0.35]), this domain was less receptive to choice architecture interventions than the other behavioral domains we investigated. Introducing behavioral domain as a moderator in our meta-analytic model marginally reduced the ratio of true to total heterogeneity among effect sizes from I² = 99.52% to I² = 99.40% (I²(3) = 91.95%; I²(2) = 7.44%; SI Appendix, Table S3).
Intervention category across behavioral domain. Comparing the effectiveness of decision information, decision structure, and decision assistance interventions across domains consistently showed interventions within the decision structure category to have the largest effect on behavior, with Cohen's d ranging from 0.33 to 0.78 (Fig. 5). This result suggests that the observed effect size differences between the three categories of choice architecture interventions were relatively stable and independent of the behavioral domain in which interventions were applied. Including the interaction of intervention category and behavioral domain in our meta-analytic model reduced the proportion of true to total effect size variability from I² = 99.52% to I² = 99.29% (I²(3) = 87.34%; I²(2) = 11.95%; SI Appendix, Table S3).
Study characteristics. Last, we were interested in the extent to
which the effect size of choice architecture interventions was
moderated by contextual study characteristics, such as the loca-
tion of the intervention (inside vs. outside of the United States),
the target population of the intervention (adults vs. children and
adolescents), the experimental setting in which the intervention
was investigated (conventional laboratory experiment, artifactual
field experiment, framed field experiment, or natural field exper-
iment; ref. 45), and the year in which the data were published.
Fig. 5. Forest plot of effect sizes across categories of choice architecture interventions and behavioral domains. The position of squares on the x axis indicates the effect size of each intervention category within a behavioral domain. Bars indicate the 95% confidence intervals of effect sizes. The size of squares is inversely proportional to the SE of effect sizes. Diamond shapes indicate the overall effect size and confidence intervals of choice architecture interventions within a behavioral domain. The solid line represents an effect size of Cohen's d = 0. The dotted line represents the overall effect size of choice architecture interventions, Cohen's d = 0.43, 95% CI [0.38, 0.48]. Identical letter superscripts indicate statistically significant (P < 0.05) pairwise comparisons within a behavioral domain. (Domain-level averages are listed in Table 2.)
Table 2. Parameter estimates of three-level meta-analytic models showing the overall effect size of choice architecture interventions as well as effect sizes across categories, techniques, behavioral domains, and contextual study characteristics

Random-effects model
  Overall effect size: k = 447, n = 2,148,439, d = 0.43 [0.38, 0.48]; t(333) = 16.51, P < 0.001

Mixed-effects models: substantive moderators
  Choice architecture category: F(2, 330) = 12.23, P < 0.001
    Decision information(a): k = 130, n = 913,151, d = 0.34 [0.27, 0.42]
    Decision structure(a,b): k = 223, n = 356,911, d = 0.54 [0.46, 0.62]
    Decision assistance(b): k = 94, n = 878,377, d = 0.28 [0.21, 0.35]
  Choice architecture technique: F(8, 324) = 4.48, P < 0.001
    Translation(c): k = 50, n = 52,170, d = 0.28 [0.17, 0.39]
    Visibility(d): k = 31, n = 822,026, d = 0.32 [0.25, 0.40]
    Social reference(e): k = 49, n = 38,955, d = 0.36 [0.27, 0.46]
    Default(c,d,e,f,g,h): k = 128, n = 139,844, d = 0.62 [0.52, 0.73]
    Effort: k = 23, n = 7,985, d = 0.48 [0.26, 0.70]
    Composition: k = 53, n = 7,319, d = 0.44 [0.25, 0.63]
    Consequence(f): k = 19, n = 201,763, d = 0.38 [0.31, 0.46]
    Reminder(g): k = 69, n = 870,386, d = 0.29 [0.21, 0.37]
    Commitment(h): k = 25, n = 7,991, d = 0.23 [0.08, 0.39]
  Behavioral domain: F(5, 327) = 3.64, P = 0.003
    Health(i): k = 84, n = 122,702, d = 0.34 [0.25, 0.43]
    Food(i,j,k,l): k = 111, n = 12,077, d = 0.65 [0.47, 0.83]
    Environment(j,m): k = 76, n = 105,345, d = 0.43 [0.33, 0.54]
    Finance(k,m): k = 45, n = 38,730, d = 0.24 [0.14, 0.35]
    Prosocial(l): k = 58, n = 1,041,501, d = 0.41 [0.27, 0.54]
    Other: k = 73, n = 828,084, d = 0.31 [0.09, 0.52]

Mixed-effects models: contextual study characteristics
  Location: t(332) = 0.87, P = 0.387
    Outside United States: k = 186, n = 1,214,261
    Inside United States: k = 261, n = 934,178
  Population: t(332) = −0.54, P = 0.587
    Children and adolescents: k = 27, n = 9,896
    Adults: k = 420, n = 2,138,543
  Type of experiment: F(3, 330) = 0.16, P = 0.922
    Conventional laboratory: k = 120, n = 12,336, d = 0.45 [0.36, 0.55]
    Artifactual field: k = 156, n = 48,824, d = 0.41 [0.24, 0.57]
    Framed field: k = 81, n = 15,032, d = 0.47 [0.32, 0.61]
    Natural field: k = 90, n = 2,072,247, d = 0.41 [0.14, 0.67]
  Year of publication: 1982 to 2021*, n = 2,148,439; t(332) = −3.56, P < 0.001

k, number of effect sizes; n, sample size. Within each moderator with more than two subgroups, identical letter superscripts indicate statistically significant (P < 0.05) pairwise comparisons between subgroups.
*Values refer to range of publication years rather than number of effect sizes.
As can be seen in Table 2, choice architecture interventions affected behavior relatively independently of contextual influences, since neither location nor target population had a statistically significant impact on the effect size of interventions. In support of the external validity of behavioral measures, our analysis moreover did not find any difference in effect sizes across the four types of experiments. Only year of publication predicted the effect of interventions on behavior, with more recent publications reporting smaller effect sizes than older publications.
Discussion
Changing individuals’ behavior is key to solving some of
today’s most pressing societal challenges. However, how can
this behavior change be achieved? Recently, more and more
researchers and policy makers have approached this question
through the use of choice architecture interventions. The present
meta-analysis integrates over a decade’s worth of research to
shed light on the effectiveness of choice architecture and the
conditions under which it facilitates behavior change. Our results
show that choice architecture interventions promote behavior
change with a small to medium effect size of Cohen's d = 0.43,
which is comparable to more traditional intervention approaches
like education campaigns or financial incentives (46–48). Our
findings are largely consistent with those of previous analyses that
investigated the effectiveness of choice architecture interven-
tions in a smaller subset of the literature (e.g., refs. 29, 30, 32, 33).
In their recent meta-analysis of choice architecture interventions across academic disciplines, Beshears and Kosowsky (30), for example, found that choice architecture interventions had an average effect size of d = 0.41. Similarly, focusing on one choice architecture technique only, Jachimowicz et al. (33) found that choice defaults had an average effect size of d = 0.68, which is slightly higher than the effect size our analysis revealed for this intervention technique (d = 0.62). Our results suggest a
somewhat higher overall effectiveness of choice architecture
interventions than meta-analyses that have focused exclusively
on field experimental research (31, 37), a discrepancy that holds
even when accounting for differences between experimental
settings (45). This inconsistency in findings may in part be
explained by differences in meta-analytic samples. Only 7% of
the studies analyzed by DellaVigna and Linos (31), for example,
meet the strict inclusion and exclusion criteria of the present
meta-analysis. Among others, these criteria excluded studies
that combined multiple choice architecture techniques. While
this restriction allowed us to isolate the unique effect of each
individual intervention technique, it may conflict with the reality
of field experimental research that often requires researchers to
leverage the effects of several choice architecture techniques to
address the specific behavioral challenge at hand (see Materials
and Methods for details on the literature search process and inclu-
sion criteria). Similarly, the techniques that are available to field
experimental researchers may not always align with the under-
lying psychological barriers to the target behavior (Table 1), de-
creasing their effectiveness in encouraging the desired behavior
change.
Not only does choice architecture facilitate behavior change,
but according to our results, it does so across a wide range
of behavioral domains, population segments, and geographical
locations. In contrast to theoretical and empirical work challeng-
ing its effectiveness (49–51), choice architecture constitutes a
versatile intervention approach that lends itself as an effective
behavior change tool across many contexts and policy areas.
Although the present meta-analysis focuses on studies that tested
the effects of choice architecture alone, the applicability of choice
architecture is not restricted to stand-alone interventions but
extends to hybrid policy measures that use choice architecture as
a complement to more traditional intervention approaches (52).
Previous research, for example, has shown that the impact of
economic interventions such as taxes or financial incentives can
be enhanced through choice architecture (53–55).
In addition to the overall effect size of choice architecture
interventions, our systematic comparison of interventions across
different techniques, behavioral domains, and contextual study
characteristics reveals substantial variations in the effectiveness
of choice architecture as a behavior change tool. Most notably, we
find that across behavioral domains, decision structure interven-
tions that modify decision environments to address decision mak-
ers’ limited capacity to evaluate and compare choice options are
consistently more effective in changing behavior than decision
information interventions that address decision makers’ limited
access to decision-relevant information or decision assistance
interventions that address decision makers’ limited attention
and self-control. This relative advantage of structural choice
architecture techniques may be due to the specific psychological
mechanisms that underlie the different intervention techniques
or, more specifically, their demands on information processing.
Decision information and decision assistance interventions rely
on relatively elaborate forms of information processing in that
the information and assistance they provide needs to be en-
coded and evaluated in terms of personal values and/or goals
to determine the overall utility of a given choice option (56).
Decision structure interventions, by contrast, often do not re-
quire this type of information processing but provide a general
utility boost for specific choice options that offers a cognitive
shortcut for determining the most desirable option (57, 58).
Accordingly, decision information and decision assistance inter-
ventions have previously been described as attempts to facilitate
more deliberate decision making processes, whereas decision
structure interventions have been characterized as attempts to
advance more automatic decision making processes (59). Deci-
sion information and decision assistance interventions may thus
more frequently fail to induce behavior change and show overall
smaller effect sizes than decision structure interventions because
they may exceed people’s cognitive limits in decision making
more often, especially in situations of high cognitive load or time
pressure.
The engagement of internal value and goal representations
by decision information and decision assistance interventions
introduces a second factor that may impact their effectiveness to
change behavior: the moderating influence of individual differ-
ences. Nutrition labels, a prominent example of decision infor-
mation interventions, for instance, have been shown to be more
frequently used by consumers who are concerned about their
diet and overall health than consumers who do not share those
concerns (60). By targeting only certain population segments,
information and assistance-based choice architecture interven-
tions may show an overall smaller effect size when assessed at
the population level compared to structure-based interventions,
which rely less on individual values and goals and may there-
fore have an overall larger impact across the whole population.
From a practical perspective, this suggests that policy makers
who wish to use choice architecture as a behavioral intervention
measure may need to precede decision information and deci-
sion assistance interventions by an assessment and analysis of
the values and goals of the target population or, alternatively,
choose a decision structure approach in cases when a segmen-
tation of the population in terms of individual differences is not
possible.
In summary, the higher effectiveness of decision structure
interventions may potentially be explained by a combination of
two factors: 1) lower demand on information processing and
2) lower susceptibility to individual differences in values and
goals. Our explanation remains somewhat speculative, however,
as empirical research especially on the cognitive processes un-
derlying choice architecture interventions is still relatively scarce
(but see refs. 53, 56, 57). More research efforts are needed to clar-
ify the psychological mechanisms that drive the impact of choice
architecture interventions and determine their effectiveness in
changing behavior.
Besides the effect size variations between different categories
of choice architecture techniques, our results reveal considerable
differences in the effectiveness of choice architecture interven-
tions across behavioral domains. Specifically, we find that choice
architecture interventions had a particularly strong effect on
behavior in the food domain, with average effect sizes up to 2.5
times larger than those in the health, environmental, financial,
prosocial, or other behavioral domains. A key characteristic of
food choices and other food-related behaviors is the fact that
they bear relatively low behavioral costs and few, if any, per-
ceived long-term consequences for the decision maker. Previ-
ous research has found that the potential impact of a decision
can indeed moderate the effectiveness of choice architecture
interventions, with techniques such as gain and loss framing
having a smaller effect on behavior when the decision at hand
has a high, direct impact on the decision maker than when
the decision has little to no impact (61). Consistent with this
research, we observe not only the largest effect sizes of choice
architecture interventions in the food domain but also the overall
smallest effect sizes of interventions in the financial domain, a
domain that predominantly represents decisions of high impact
to the decision maker. This systematic variation of effect sizes
across behavioral domains suggests that when making decisions
that are perceived to have a substantial impact on their lives,
people may be less prone to the influence of automatic biases
and heuristics, and thus the effects of choice architecture inter-
ventions, than when making decisions of comparatively smaller
impact.
Another characteristic of food choices that may explain the high effectiveness of choice architecture interventions in the food
domain is the fact that they are often driven by habits. (Note that our results are robust to the exclusion of nonretracted studies by the Cornell Food and Brand Laboratory, which has been criticized for repeated scientific misconduct; retracted studies by this research group were excluded from the meta-analysis.) Commonly
defined as highly automatized behavioral responses to cues in
the choice environment, habits distinguish themselves from other
behaviors through a particularly strong association between be-
havior on the one hand and choice environment on the other
hand (62, 63). It is possible that choice architecture interventions
benefit from this association to the extent that they target the
choice environment and thus potentially alter triggers of habit-
ualized, undesirable behaviors. To illustrate, previous research
has shown that people tend to adjust their food consumption
relative to portion size, meaning that they consume more when
presented with large portions and less when presented with small
portions (39). Here portion size acts as an environmental cue
that triggers and guides the behavioral response to eat. Choice
architecture interventions that target this environmental cue, for
example, by changing the default size of a food portion, are likely
to be successful in changing the amount of food people consume
because they capitalize on the highly automatized association
between portion size and food consumption. The congruence
between factors that trigger habitualized behaviors and factors
that are targeted by choice architecture interventions may not
only explain why interventions in our sample were so effective
in changing food choices but more generally indicate that choice
architecture interventions are an effective tool for changing in-
stances of habitualized behaviors (64). This finding is particularly
relevant from a policy making perspective as habits tend to be
relatively unresponsive to traditional intervention approaches
and are therefore generally considered to be difficult to change
(62). Given that choice architecture interventions can only target
the environmental cues that trigger habitualized responses but
not the association between choice environment and behavior
per se, it should be noted though that the effects of interventions
are likely limited to the specific choice contexts in which they are
implemented.
While the present meta-analysis provides a comprehensive
overview of the effectiveness of choice architecture as a behav-
ior change tool, more research is needed to complement and
complete our findings. For example, our methodological focus
on individuals as the unit of analysis excludes a large number of
studies that have investigated choice architecture interventions
on broader levels, such as households, school classes, or orga-
nizations, which may reduce the generalizability of our results.
Future research should target these studies specifically to add
to the current analysis. Similarly, our data show very high levels
of heterogeneity among the effect sizes of choice architecture
interventions. Although the type of intervention, the behavioral
domain in which it is applied, and contextual study characteristics
account for some of this heterogeneity (SI Appendix, Table S3),
more research is needed to identify factors that may explain the
variability in effect sizes above and beyond those investigated
here. Research has recently started to reveal some of those po-
tential moderators of choice architecture interventions, including
sociodemographic factors such as income and socioeconomic
status as well as psychological factors such as domain knowl-
edge, numerical ability, and attitudes (65–67). Investigating these
moderators systematically can not only provide a more nuanced
understanding of the conditions under which choice architecture
facilitates behavior change but may also help to inform the
design and implementation of targeted interventions that take
into account individual differences in the susceptibility to choice
architecture interventions (68). Ethical considerations should
play a prominent role in this process to ensure that potentially
more susceptible populations, such as children or low-income
households, retain their ability to make decisions that are in their
personal best interest (66, 69, 70). Based on the results of our own
moderator analyses, additional avenues for future research may
include the study of how information processing influences the
effectiveness of varying types of choice architecture interventions
and how the overall effect of interventions is determined by the
type of behavior they target (e.g., high-impact vs. low-impact
behaviors and habitual vs. one-time decisions). In addition, we
identified a moderate publication bias toward the reporting of
effect sizes that support a positive effect of choice architecture
interventions on behavior. Future research efforts should take
this finding into account and place special emphasis on appropri-
ate sample size planning and analysis standards when evaluating
choice architecture interventions. Finally, given our choice to
focus our primary literature search on the terms “choice architec-
ture” and “nudge,” we recognize that the present meta-analysis
may have failed to capture parts of the literature published before
the popularization of this now widely used terminology, despite
our efforts to expand the search beyond those terms (for details
on the literature search process, see Materials and Methods). Due
to the large increase in choice architecture research over the past
decade (Fig. 1), however, the results presented here likely offer a
good representation of the existing evidence on the effectiveness
of choice architecture in changing individuals’ behavior.
Conclusion
Few behavioral intervention measures have lately received as
much attention from researchers and policy makers as choice
architecture interventions. Integrating the results of more than
440 behavioral interventions, the present meta-analysis finds
that choice architecture is an effective and widely applicable
behavior change tool that facilitates personally and socially desir-
able choices across behavioral domains, geographical locations,
and populations. Our results provide insights into the overall
effectiveness of choice architecture interventions as well as sys-
tematic effect size variations among them, revealing promising
directions for future research that may facilitate the development
of theories in this still new but fast-growing field of research.
Our work also provides a comprehensive overview of the effec-
tiveness of choice architecture interventions across a wide range
of intervention contexts that are representative of some of the
most pressing societal challenges we are currently facing. This
overview can serve as a guideline for policy makers who seek
reliable, evidence-based information on the potential impact of
choice architecture interventions and the conditions under which
they promote behavior change.
Materials and Methods
The meta-analysis was conducted in accordance with guidelines for conduct-
ing systematic reviews (71) and conforms to the Preferred Reporting Items
for Systematic Reviews and Meta-Analyses (72) standards.
Literature Search and Inclusion Criteria. We searched the electronic
databases PsycINFO, PubMed, PubPsych, and ScienceDirect using a
combination of keywords associated with choice architecture (nudge OR
“choice architecture”) and empirical research (method* OR empiric* OR
procedure OR design).Since the terms nudge and choice architecture
were established only after the seminal book by Thaler and Sunstein (1),
we restricted this search to studies that were published no earlier than
2008. To compensate for the potential bias this temporal restriction might
introduce to the results of our meta-analysis, we identified additional
studies, including studies published before 2008, through the reference lists
of relevant review articles and a search for research reports by governmental
and nongovernmental behavioral science units. To reduce the possibly
confounding effects of publication status on the estimation of effect
sizes, we further searched for unpublished studies using the ProQuest
Dissertations & Theses database and requesting unpublished data through
academic mailing lists. The search concluded in June 2019, yielding a total
of 9,606 unique publications.
Given the exceptionally high heterogeneity in choice architecture re-
search, we restricted our meta-analysis to studies that 1) empirically tested
one or more choice architecture techniques using a randomized controlled
experimental design, 2) had a behavioral outcome measure that was as-
sessed in a real-life or hypothetical choice situation, 3) used individuals as
the unit of analysis, and 4) were published in English. Studies that examined
choice architecture in combination with other intervention measures, such
as signicant economic incentives or education programs, were excluded
from our analyses to isolate the unique effects of choice architecture inter-
ventions on behavior.
The nal sample comprised 447 effect sizes from 212 publications with a
pooled sample size of 2,148,439 participants (nranging from 14 to 813,990).
SI Appendix, Fig. S1 illustrates the literature search and review process. All
meta-analytic data and analyses reported in this paper are publicly available
on the Open Science Framework (https://osf.io/fywae/) (74).
Effect Size Calculation and Moderator Coding. Due to the large variation in
behavioral outcome measures, we calculated Cohen's d (40) as a standardized effect size measure of the mean difference between control and treatment conditions. Positive Cohen's d values were coded to reflect behavior change in the desired direction of the intervention, whereas negative values reflected an undesirable change in behavior.
To categorize systematic differences between choice architecture inter-
ventions, we coded studies for seven moderators describing the type of
intervention, the behavioral domain in which it was implemented, and
contextual study characteristics. The type of choice architecture intervention
was classied using a taxonomy developed by Münscher and colleagues (13),
which distinguishes three broad categories of choice architecture: decision
information, decision structure, and decision assistance. Each of these cat-
egories targets a specic aspect of the choice environment, with decision
information interventions targeting the way in which choice alternatives
are described (e.g., framing), decision structure interventions targeting the
way in which those choice alternatives are organized and structured (e.g.,
choice defaults), and decision assistance interventions targeting the way
in which decisions can be reinforced (e.g., commitment devices). With its tripartite categorization framework, the taxonomy is able to capture and categorize the vast majority of choice architecture interventions described in the literature, making it one of the most comprehensive classification schemes of choice architecture techniques in the field (see Table 1 for
an overview). Many alternative attempts to organize and structure choice
architecture interventions are considered problematic because they combine
descriptive categorization approaches, which classify interventions based on
choice architecture technique, and explanatory categorization approaches,
which classify interventions based on underlying psychological mechanisms,
within a single framework. The taxonomy we use here adopts a descriptive
categorization approach in that it organizes interventions exclusively in
terms of choice architecture techniques. We chose this approach to not
only omit common shortcomings of hybrid classication schemes, such as
a reduction in the interpretability of results, but also to warrant a highly
reliable categorization of interventions in the absence of psychological
outcome measures that would allow us to infer explanatory mechanisms.
Using a descriptive categorization approach further allowed us to generate
theoretically meaningful insights that can be easily translated into concrete
recommendations for policy making. Each intervention was coded according
to its specic technique and corresponding category. Interventions that
combined multiple choice architecture techniques were excluded from our
analyses to isolate the unique effect of each approach. Based on previous
reviews (73) and inspection of our data, we distinguished six behavioral
domains: health, food, environment, nance, prosocial behavior, and other
behavior. Contextual study characteristics included the type of experiment
that had been conducted (conventional laboratory experiment, artifactual
eld experiment, framed eld experiment, or natural eld experiment), the
location of the intervention (inside vs. outside of the United States), the
target population of the intervention (adults vs. children and adolescents),
and the year in which the data were published. Interrater reliability across a
random sample of 20% of the publications was high, with Cohen’s κranging
from 0.76 to 1 (M=0.87).
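Cohen’s κ contrasts the coders’ observed agreement with the agreement
expected by chance. A minimal R sketch (the rating vectors below are
hypothetical, not our actual coding data) is:

    # Cohen's kappa for two raters assigning the same items to categories.
    cohens_kappa <- function(r1, r2) {
      lv  <- sort(unique(c(r1, r2)))
      tab <- table(factor(r1, levels = lv), factor(r2, levels = lv))
      p_obs <- sum(diag(tab)) / sum(tab)                      # observed agreement
      p_exp <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
      (p_obs - p_exp) / (1 - p_exp)
    }
    r1 <- c("information", "structure", "structure", "assistance", "information")
    r2 <- c("information", "structure", "assistance", "assistance", "information")
    cohens_kappa(r1, r2)  # approximately 0.71 for these example ratings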
Statistical Analysis. We estimated the overall effect of choice architecture
interventions using a three-level meta-analytic model with random effects
on the treatment and the publication level. This approach allowed us to
account for the hierarchical structure of our data, which arose because
publications often reported multiple relevant outcome variables and/or more
than one experiment (75–77). To further account for dependency in sampling
errors due to overlapping samples (e.g., in cases where multiple treatment
conditions were compared to the same control condition), we computed
cluster-robust SEs, confidence intervals, and statistical tests for the
estimated effect sizes (78, 79).
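In metafor syntax, this estimation strategy could look roughly as follows (a
sketch only, not our analysis script; the data frame dat and its columns d,
v, pub_id, and es_id are hypothetical names):

    library(metafor)
    # Three-level random-effects model: effect sizes (es_id) nested within
    # publications (pub_id); d and v are each effect size and its variance.
    m_overall <- rma.mv(yi = d, V = v,
                        random = ~ 1 | pub_id / es_id,
                        data = dat, method = "REML")
    # Cluster-robust SEs, confidence intervals, and tests, clustered by publication
    robust(m_overall, cluster = dat$pub_id)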
To identify systematic differences between choice architecture interven-
tions, we ran multiple moderator analyses in which we tested for the effects
of type of intervention, behavioral domain, and study characteristics using
mixed-effects meta-analytic models with random effects on the treatment
and the publication level. All analyses were conducted in R using the
package metafor (80).
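A moderator analysis follows the same pattern, adding the moderator as a
fixed-effect term; for example (continuing the hypothetical sketch above,
with category standing in for the coded intervention category):

    # Mixed-effects model with intervention category as moderator; removing
    # the intercept (- 1) yields one pooled estimate per category.
    m_category <- rma.mv(yi = d, V = v,
                         mods = ~ factor(category) - 1,
                         random = ~ 1 | pub_id / es_id,
                         data = dat, method = "REML")
    robust(m_category, cluster = dat$pub_id)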
Data Availability. Data have been deposited in the Open Science Frame-
work (https://osf.io/fywae/).
ACKNOWLEDGMENTS. This research was supported by Swiss National Sci-
ence Foundation Grant PYAPP1_160571 awarded to Tobias Brosch and Swiss
Federal Ofce of Energy Grant SI/501597-01. It is part of the activities of
the Swiss Competence Center for Energy Research Competence Center
for Research in Energy, Society and Transition, supported by the Swiss
Innovation Agency (Innosuisse). The funding sources had no involvement in
the preparation of the article; in the study design; in the collection, analysis,
and interpretation of data; nor in the writing of the manuscript. We thank
Allegra Mulas and Laura Pagel for their assistance in data collection and
extraction.
1. R. H. Thaler, C. R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and
Happiness (Yale University Press, 2008).
2. R. H. Thaler, C. R. Sunstein, J. P. Balz, “Choice architecture” in The Behavioral
Foundations of Public Policy, E. Shafir, Ed. (Princeton University Press, 2013), pp.
428–439.
3. G. S. Becker, The Economic Approach to Human Behavior (University of Chicago
Press, ed. 1, 1976).
4. I. Ajzen, The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50,
179–211 (1991).
5. P. C. Stern, Toward a coherent theory of environmentally significant behavior. J. Soc.
Issues 56, 407–424 (2000).
6. D. Albarracin, S. Shavitt, Attitudes and attitude change. Annu. Rev. Psychol. 69, 299–
327 (2018).
7. J. S. B. T. Evans, Dual-processing accounts of reasoning, judgment, and social
cognition. Annu. Rev. Psychol. 59, 255–278 (2008).
8. H. A. Simon, A behavioral model of rational choice. Q. J. Econ. 69, 99–118 (1955).
9. H. A. Simon, Models of Bounded Rationality (MIT Press, 1982).
10. G. Gigerenzer, W. Gaissmaier, Heuristic decision making. Annu. Rev. Psychol. 62, 451–
482 (2011).
11. S. Lichtenstein, P. Slovic, Eds., The Construction of Preference (Cambridge University
Press, 2006).
12. J. W. Payne, J. R. Bettman, E. J. Johnson, Behavioral decision research: A constructive
processing perspective. Annu. Rev. Psychol. 43, 87–131 (1992).
13. R. Münscher, M. Vetter, T. Scheuerle, A review and taxonomy of choice architecture
techniques. J. Behav. Decis. Making 29, 511–524 (2016).
14. D. Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011).
15. P. Slovic, From Shakespeare to Simon: Speculations—and some evidence—about
man’s ability to process information. Or. Res. Inst. Res. Bull. 12, 1–19 (1972).
16. C. K. Hsee, J. Zhang, General evaluability theory. Perspect. Psychol. Sci. 5, 343–355
(2010).
17. A. K. Shah, D. M. Oppenheimer, Easy does it: The role of fluency in cue weighting.
Judgm. Decis. Mak. 2, 371–379 (2007).
18. H. Allcott, Social norms and energy conservation. J. Public Econ. 95, 1082–1095
(2011).
19. K. Jessoe, D. Rapson, Knowledge is (less) power: Experimental evidence from
residential energy use. Am. Econ. Rev. 104, 1417–1438 (2014).
20. C. A. Roberto, P. D. Larsen, H. Agnew, J. Baik, K. D. Brownell, Evaluating the impact
of menu labeling on food choices and intake. Am. J. Public Health 100, 312–318
(2010).
21. R. P. Larrick, J. B. Soll, Economics. The MPG illusion. Science 320, 1593–1594 (2008).
22. E. J. Johnson, D. Goldstein, Medicine. Do defaults save lives? Science 302, 1338–1339
(2003).
23. J. Maas, D. T. D. de Ridder, E. de Vet, J. B. F. de Wit, Do distant foods decrease intake?
The effect of food accessibility on consumption. Psychol. Health 27 (suppl. 2), 59–73
(2012).
24. J. M. Martin, M. I. Norton, Shaping online consumer choice by partitioning the web.
Psychol. Mark. 26, 908–926 (2009).
25. M. A. Sharif, S. B. Shu, Nudging persistence after failure through emergency
reserves. Organ. Behav. Hum. Decis. Process. 163, 17–29 (2021).
26. P. Sheeran, T. L. Webb, The intention-behavior gap. Soc. Personal. Psychol. Compass
10, 503–518 (2016).
27. R. H. Thaler, S. Benartzi, Save more tomorrow: Using behavioral economics to
increase employee saving. J. Polit. Econ. 112, 164–187 (2004).
28. C. Loibl, L. Jones, E. Haisley, Testing strategies to increase saving in individual
development account programs. J. Econ. Psychol. 66, 45–63 (2018).
29. S. Benartzi et al., Should governments invest more in nudging? Psychol. Sci. 28,
1041–1055 (2017).
30. J. Beshears, H. Kosowsky, Nudging: Progress to date and future directions. Organ.
Behav. Hum. Decis. Process. 161 (suppl.), 3–19 (2020).
31. S. DellaVigna, E. Linos, “RCTs to scale: Comprehensive evidence from two nudge
units” (Working Paper 27594, National Bureau of Economic Research, 2020;
https://www.nber.org/papers/w27594).
32. D. Hummel, A. Maedche, How effective is nudging? A quantitative review on the
effect sizes and limits of empirical nudging studies. J. Behav. Exp. Econ. 80, 47–58
(2019).
33. J. M. Jachimowicz, S. Duncan, E. U. Weber, E. J. Johnson, When and why defaults
inuence decisions: A meta-analysis of default effects. Behav. Public Policy 3, 159–
186 (2019).
34. A. N. Kluger, A. DeNisi, The effects of feedback interventions on performance: A
historical review, a meta-analysis, and a preliminary feedback intervention theory.
Psychol. Bull. 119, 254–284 (1996).
35. A. Kühberger, The influence of framing on risky decisions: A meta-analysis. Organ.
Behav. Hum. Decis. Process. 75, 23–55 (1998).
36. W. Abrahamse, L. Steg, C. Vlek, T. Rothengatter, A review of intervention studies
aimed at household energy conservation. J. Environ. Psychol. 25, 273–291 (2005).
37. R. Cadario, P. Chandon, Which healthy eating nudges work best? A meta-analysis
of eld experiments. Mark. Sci. 39, 459–486 (2020).
38. C. F. Nisa, J. J. Bélanger, B. M. Schumpe, D. G. Faller, Meta-analysis of randomised
controlled trials testing behavioural interventions to promote household action on
climate change. Nat. Commun. 10, 4545 (2019).
39. N. Zlatevska, C. Dubelaar, S. S. Holden, Sizing up the effect of portion size on
consumption: A meta-analytic review. J. Mark. 78, 140–154 (2014).
40. J. Cohen, Statistical Power Analysis for the Behavioral Sciences (Lawrence Erlbaum
Associates, 1988).
41. M. Borenstein, L. V. Hedges, J. P. Higgins, H. R. Rothstein, Identifying and Quantify-
ing Heterogeneity (John Wiley & Sons, Ltd, 2009), pp. 107–125.
42. M. Borenstein, J. P. Higgins, L. V. Hedges, H. R. Rothstein, Basics of meta-analysis: I2
is not an absolute measure of heterogeneity. Res. Synth. Methods 8, 5–18 (2017).
43. J. L. Vevea, C. M. Woods, Publication bias in research synthesis: Sensitivity analysis
using a priori weight functions. Psychol. Methods 10, 428–443 (2005).
44. M. Egger, G. Davey Smith, M. Schneider, C. Minder, Bias in meta-analysis detected
by a simple, graphical test. BMJ 315, 629–634 (1997).
45. G. W. Harrison, J. A. List, Field experiments. J. Econ. Lit. 42, 1009–1055 (2004).
46. A. Maki, R. J. Burns, L. Ha, A. J. Rothman, Paying people to protect the environment:
A meta-analysis of financial incentive interventions to promote proenvironmental
behaviors. J. Environ. Psychol. 47, 242–255 (2016).
47. E. Mantzari et al., Personal financial incentives for changing habitual health-related
behaviors: A systematic review and meta-analysis. Prev. Med. 75, 75–85 (2015).
48. L. B. Snyder et al., A meta-analysis of the effect of mediated health communication
campaigns on behavior change in the United States. J. Health Commun. 9(suppl.
1), 71–96 (2004).
49. D. Hagmann, E. H. Ho, G. Loewenstein, Nudging out support for carbon tax. Nat.
Clim. Chang. 9, 484–489 (2019).
50. H. IJzerman et al., Use caution when applying behavioural science to policy. Nat.
Hum. Behav. 4, 1092–1094 (2020).
51. A. S. Kristal, A. V. Whillans, What we can learn from five naturalistic field experi-
ments that failed to shift commuter behaviour. Nat. Hum. Behav. 4, 169–176 (2020).
52. G. Loewenstein, N. Chater, Putting nudges in perspective. Behav. Public Policy 1,
26–53 (2017).
53. D. J. Hardisty, E. J. Johnson, E. U. Weber, A dirty word or a dirty world?: Attribute
framing, political affiliation, and query theory. Psychol. Sci. 21, 86–92 (2010).
54. T. A. Homonoff, Can small incentives have large effects? The impact of taxes versus
bonuses on disposable bag use. Am. Econ. J. Econ. Policy 10, 177–210 (2018).
55. E. J. McCaffery, J. Baron, Thinking about tax. Psychol. Public Policy Law 12, 106–135
(2006).
56. S. Mertens, U. J. J. Hahnel, T. Brosch, This way please: Uncovering the directional
effects of attribute translations on decision making. Judgm. Decis. Mak. 15, 25–46
(2020).
57. I. Dinner, E. J. Johnson, D. G. Goldstein, K. Liu, Partitioning default effects: Why
people choose not to choose. J. Exp. Psychol. Appl. 17, 332–341 (2011).
58. D. Knowles, K. Brown, S. Aldrovandi, Exploring the underpinning mechanisms of
the proximity effect within a competitive food environment. Appetite 134, 94–102
(2019).
59. C. R. Sunstein, People prefer System 2 nudges (kind of). Duke Law J. 66, 121–168
(2016).
60. S. Campos, J. Doxey, D. Hammond, Nutrition labels on pre-packaged foods: A
systematic review. Public Health Nutr. 14, 1496–1506 (2011).
61. T. M. Marteau, Framing of information: Its influence upon decisions of doctors and
patients. Br. J. Soc. Psychol. 28, 89–94 (1989).
62. B. Verplanken, W. Wood, Interventions to break and create consumer habits. J.
Public Policy Mark. 25, 90–103 (2006).
63. W. Wood, D. Rünger, Psychology of habit. Annu. Rev. Psychol. 67, 289–314 (2016).
64. T. A. G. Venema, F. M. Kroese, B. Verplanken, D. T. D. de Ridder, The (bitter) sweet
taste of nudge effectiveness: The role of habits in a portion size nudge, a proof of
concept study. Appetite 151, 104699 (2020).
65. H. Allcott, Site selection bias in program evaluation. Q. J. Econ. 130, 1117–1165
(2015).
66. C. Ghesla, M. Grieder, R. Schubert, Nudging the poor and the rich—A field study
on the distributional effects of green electricity defaults. Energy Econ. 86, 104616
(2020).
67. K. Mrkva, N. A. Posner, C. Reeck, E. J. Johnson, Do nudges reduce disparities? Choice
architecture compensates for low consumer knowledge. J. Mark. 85, 67–84 (2021).
68. C. J. Bryan, E. Tipton, D. S. Yeager, Behavioural science is unlikely to change the
world without a heterogeneity revolution. Nat. Hum. Behav. 5, 980–989 (2021).
69. U. J. J. Hahnel, G. Chatelain, B. Conte, V. Piana, T. Brosch, Mental accounting
mechanisms in energy decision-making and behaviour. Nat. Energy 5, 952–958
(2020).
70. C. R. Sunstein, The distributional effects of nudges. Nat. Hum. Behav.
10.1038/s41562-021-01236-z (2021).
71. A. P. Siddaway, A. M. Wood, L. V. Hedges, How to do a systematic review: A best
practice guide for conducting and reporting narrative reviews, meta-analyses, and
meta-syntheses. Annu. Rev. Psychol. 70, 747–770 (2019).
72. D. Moher, A. Liberati, J. Tetzlaff, D. G. Altman; PRISMA Group, Preferred reporting
items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med.
6, e1000097 (2009).
73. B. Szaszi, A. Palinkas, B. Pal, A. Szollosi, B. Aczel, A systematic scoping review of
the choice architecture movement: Toward understanding when and why nudges
work. J. Behav. Decis. Making 31, 355–366 (2018).
74. S. Mertens, M. Herberz, U. J. J. Hahnel, T. Brosch, The effectiveness of nudging: A
meta-analysis of choice architecture interventions across behavioral domains. Open
Science Framework. https://osf.io/fywae/. Deposited 11 September 2021.
75. M. W. L. Cheung, Modeling dependent effect sizes with three-level meta-analyses:
A structural equation modeling approach. Psychol. Methods 19, 211–229 (2014).
76. W. Van den Noortgate, J. A. López-López, F. Marín-Martínez, J. Sánchez-Meca,
Three-level meta-analysis of dependent effect sizes. Behav. Res. Methods 45, 576–594 (2013).
77. W. Van den Noortgate, J. A. López-López, F. Marín-Martínez, J. Sánchez-Meca, Meta-
analysis of multiple outcomes: A multilevel approach. Behav. Res. Methods 47, 1274–
1294 (2015).
78. A. C. Cameron, D. L. Miller, A practitioner’s guide to cluster-robust inference. J. Hum.
Resour. 50, 317–372 (2015).
79. L. V. Hedges, E. Tipton, M. C. Johnson, Robust variance estimation in meta-regression
with dependent effect size estimates. Res. Synth. Methods 1, 39–65 (2010).
80. W. Viechtbauer, Conducting meta-analyses in R with the metafor package. J. Stat.
Softw. 36, 1–48 (2010).
Choice defaults are an increasingly popular public policy tool. Yet there is little knowledge of the distributional consequences of such nudges for different groups in society. We report results from an elicitation study in the residential electricity market in Switzerland in which we contrast consumers' actual contract choices under an existing default regime with the same consumers' active choices in a survey presenting the same choice-set without any default. We find that the default is successful at curbing greenhouse gas emissions, but it leads poorer households to pay more for their electricity consumption than they would want to, while leaving a significant willingness to pay for green electricity by richer households untapped.