Nudging and Boosting: Steering or Empowering Good Decisions

Ralph Hertwig (Max Planck Institute for Human Development, Berlin)
and Till Grüne-Yanoff (Royal Institute of Technology, Stockholm)

Perspectives on Psychological Science, 1–14
© The Author(s) 2017
DOI: 10.1177/1745691617702496
www.psychologicalscience.org/PPS

Abstract

In recent years, policy makers worldwide have begun to acknowledge the potential value of insights from psychology and behavioral economics into how people make decisions. These insights can inform the design of nonregulatory and nonmonetary policy interventions—as well as more traditional fiscal and coercive measures. To date, much of the discussion of behaviorally informed approaches has emphasized “nudges,” that is, interventions designed to steer people in a particular direction while preserving their freedom of choice. Yet behavioral science also provides support for a distinct kind of nonfiscal and noncoercive intervention, namely, “boosts.” The objective of boosts is to foster people’s competence to make their own choices—that is, to exercise their own agency. Building on this distinction, we further elaborate on how boosts are conceptually distinct from nudges: The two kinds of interventions differ with respect to (a) their immediate intervention targets, (b) their roots in different research programs, (c) the causal pathways through which they affect behavior, (d) their assumptions about human cognitive architecture, (e) the reversibility of their effects, (f) their programmatic ambitions, and (g) their normative implications. We discuss each of these dimensions, provide an initial taxonomy of boosts, and address some possible misconceptions.

Keywords: boost, nudge, choice architecture, education, public policy, autonomy, welfare

Corresponding Author: Ralph Hertwig, Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. E-mail: hertwig@mpib-berlin.mpg.de
Numerous governments and international organizations
such as the World Bank (2015) and the European Com-
mission (Lourenco, Ciriolo, Almeida, & Troussard, 2016)
have begun to acknowledge the enormous potential of
behavioral science evidence in helping to design more
effective and efficient public policies. For instance,
behavioral science is now used or seriously considered
as a policy tool in many of the 35 member countries of
the Organisation for Economic Co-operation and Devel-
opment (OECD), whose mission it is to “promote policies
that will improve the economic and social well-being of
people around the world” (http://www.oecd.org/about/).
In fact, the OECD is currently drafting a collection of
more than 100 case studies of behavioral insights in prac-
tice. Without doubt, drawing attention to the importance
of behavioral science for policy making is the outstand-
ing achievement of the nudge approach, presented most
prominently in Thaler and Sunstein (2008). “Nudges” are
nonregulatory and nonmonetary interventions that steer
people in a particular direction while preserving their
freedom of choice (e.g., Alemanno & Sibony, 2015;
Halpern, 2015). Paradigmatic examples include automatic
(default) enrollment in organ-donation schemes and pen-
sion plans unless individuals specifically choose to opt
out (rather than having to actively opt in if they want to
enroll), the redesign of cafeterias such that healthier food
is displayed at eye level, and use of social norms (e.g.,
that many taxpayers pay on time; see Cialdini & Goldstein,
2004) to increase tax compliance. The nudge approach
has also prompted critical and informative debates about
its underlying political philosophy of libertarian paternal-
ism (e.g., Rebonato, 2012), the ethics of nudging (e.g.,
Barton & Grüne-Yanoff, 2015; Bovens, 2009), the empiri-
cal success of nudging policy interventions (e.g., House
of Lords Science and Technology Select Committee,
2011), and the approach’s starting proposition: that defi-
cits in human decision-making competence are pervasive
and difficult to alter (e.g., Grüne-Yanoff & Hertwig, 2016).
The current interest in behavioral science within gov-
ernments, owed to the enormous impact of the nudge
approach, offers psychology a new channel for informing
and influencing public policy (Teachman, Norton, &
Spellman, 2015). Yet, we believe it would be a mistake to
equate all public policy making informed by behavioral
science evidence with nudging or to assume that all such
evidence ultimately points to nudge interventions. We
suggest that the scientific study of human behavior also
provides support for a decidedly distinct kind of inter-
vention, namely, boosts (Grüne-Yanoff & Hertwig, 2016).
The objective of boosts is to improve people’s compe-
tence to make their own choices. The focus of boosting
is on interventions that make it easier for people to exer-
cise their own agency by fostering existing competences
or instilling new ones. Examples include the ability to
understand statistical health information, the ability to
make financial decisions on the basis of simple account-
ing rules, and the strategic use of automatic processes
(we return to these examples later).
In this article, we distinguish between nudges and
boosts on seven dimensions, summarized in Table 1. Not
all of these dimensions are independent of each other,
but we believe that they are sufficiently important to
merit separate discussion. Our text is structured largely
along these seven dimensions. After discussing the differ-
ences between nudges and boosts with respect to their
immediate intervention targets (i.e., behavior vs. compe-
tences), their roots in different research programs, and
the causal pathways through which they affect behavior,
we provide an initial taxonomy of boosts. We then con-
tinue to discuss the differences between nudging and
boosting with respect to their assumptions about the
human cognitive architecture, the reversibility of their
effects, their programmatic ambitions, and their norma-
tive implications. We conclude by addressing some of the
misconceptions about boosts that we have encountered
in recent discussions and the literature.
A Plurality of Views on How Real
People Reason and Decide
We begin by reviewing the plurality of views within the
behavioral sciences on how and how well people make
decisions. Our review is brief and theoretical rather than
empirical and exhaustive. The goal is to illustrate the sur-
prising range of views on the nature of human decision
making and to show that the rich behavioral evidence
available is indeed consistent with more than just nudg-
ing. We begin with the view on which nudging rests.
Nudging’s starting point is a drastically different view
of the real-world decision maker from that of the stylized,
hyperrational homo economicus or the Olympian model
of rationality, which, according to Simon (1990), “serves,
perhaps, as a model of the mind of God, but certainly
not as a model of the mind of man” (p. 34). Thaler and
Sunstein (2008) put it this way: “If you look at economics
textbooks, you will learn that homo economicus can
think like Albert Einstein, store as much memory as IBM’s
Big Blue, and exercise the willpower of Mahatma Gandhi”
(p. 6). Yet real and boundedly rational people not only
lack these heroic qualities, so the nudge approach argues,
but they are fallible, inconsistent, ill-informed, unrealisti-
cally optimistic, and myopic, and they suffer from inertia
and self-control problems (Sunstein, 2014; Thaler & Sunstein,
2008; see also Halpern, 2015).
Table 1. Seven Dimensions on Which the Nudging (Non-educative) and
Boosting (Long-Term) Approaches to Public Policy Can Be Distinguished

Intervention target
  Nudging: Behavior
  Boosting: Competences

Roots in research programs and evidence
  Nudging: Show the decision maker as systematically imperfect and
  subject to cognitive and motivational deficiencies
  Boosting: Acknowledge bounds but identify human competences and
  ways to foster them

Causal pathways
  Nudging: Harness cognitive and motivational deficiencies in tandem
  with changes in the external choice architecture
  Boosting: Foster competences through changes in skills, knowledge,
  decision tools, or the external environment

Assumptions about cognitive architecture
  Nudging: Dual-system architecture
  Boosting: Cognitive architectures are malleable

Empirical distinction criterion (reversibility)
  Nudging: Once the intervention is removed, behavior reverts to its
  preintervention state
  Boosting: Implied effects should persist once the (successful)
  intervention is removed

Programmatic ambition
  Nudging: Correct momentous mistakes in specific contexts ("local repair")
  Boosting: Equip individuals with domain-specific or generalizable
  competences

Normative implications
  Nudging: Might violate autonomy and transparency
  Boosting: Necessarily transparent and require cooperation—an offer
  that may or may not be accepted
This dismal portrayal of people’s decision-making
competence has its roots in the heuristics-and-biases pro-
gram (e.g., Kahneman, 2003, 2011; Kahneman, Slovic, &
Tversky, 1982). This program has—over more than four
decades—cataloged a large set of “cognitive illusions,”
that is, systematic violations of norms of reasoning and
decision making (e.g., logic, probability theory, axioms of
rational choice models). The underlying idea is that
humans, as a consequence of their inherent cognitive
limitations, are unable to perform rational calculations
and instead rely on heuristics. These heuristics are “highly
economical and usually effective, but they lead to sys-
tematic and predictable errors” (Tversky & Kahneman,
1974, p. 1124). The cumulative weight of these errors has
thus “raised serious questions about the rationality of
many judgments and decisions that people make” (Thaler
& Sunstein, 2008, p. 7) and necessitates as well as enables
a new approach to public policy.
The innovative core of nudging is the insight that pol-
icy makers can harness individuals’ cognitive and motiva-
tional deficiencies rather than having to yield to them as
insurmountable obstacles to good decisions and welfare.
By enlisting these deficiencies, policy makers can steer
(nudge) individuals’ behavior toward behaviors that are
consistent with their ultimate goals or preferences—and
that result in better outcomes than would otherwise be
obtained (Rebonato, 2012; Thaler & Sunstein, 2008).
Take, for illustration, defaults as one paradigmatic nudge.
Default rules establish what will automatically happen if
a person does nothing—and “nothing is what many peo-
ple will do” (Sunstein, 2014, p. 9). Betting on this inertia,
a policy maker can put in place a default that brings
people closer to a desired behavioral outcome (Beshears,
Choi, Laibson, & Madrian, 2010). For example, automatic
enrollment in employer-sponsored savings plans increases
employees’ retirement income. Because people tend to
stay with the default option, automatic enrollment raises
participation rates in retirement savings plans (but not
necessarily contribution rates; see Butrica & Karamcheva,
2015).
Although undoubtedly influential, the heuristics-and-
biases program is not the only view about human deci-
sion makers and their competence, nor have its conclusions
remained unquestioned. What some perceived as “the
message that man is a ‘cognitive cripple’” (Edwards, 1983,
p. 508) was by no means unanimously endorsed—as
illustrated by one early conceptual criticism of the heuris-
tics-and-biases program that far preceded the more con-
tentious discussions of the 1990s (e.g., Gigerenzer, 1996;
Kahneman & Tversky, 1996):
In the research literature [on heuristics and biases],
subjects are almost never given feedback about the
logical implications of their judgements, never
shown their inconsistencies and invited to resolve
them, rarely asked for redundant judgements so that
inconsistency can be utilised as part of the
assessment process, and almost never asked to
make judgements in a group setting. . . . It is perfectly
possible that many people, given the right tasks in
the right circumstances, could make precise, reliable,
accurate assessments of probability. (Phillips, 1983,
p. 536)
Phillips argued that “research on heuristics and biases
has become a psychology of first impressions” (p. 538)
and that there is more to human decision making and
problem solving than this first response. Indeed, let us
briefly consider five other research programs also con-
cerned with human decision making and problem solv-
ing that suggest different views and conclusions. Preceding
the heuristics-and-biases program, a research program
often referred to as man as an intuitive statistician
(Peterson & Beach, 1967) reached a very different con-
clusion on how people make decisions. Reviewing stud-
ies conducted in the 1950s and 1960s that, like the
heuristics-and-biases program, used probability and sta-
tistics as a benchmark against which people’s intuitive
statistical inferences and predictions (e.g., about propor-
tions, means, variances, and sample sizes) were evalu-
ated, Peterson and Beach (1967) concluded that “the
normative model provides a good first approximation for
a psychological theory of inference” (p. 42). Although
this view of intuitive inference and prediction did not
deny the existence of discrepancies between norm and
intuition (e.g., probability updating being too conserva-
tive), the premise was that people “cannot help but to
gamble in an ecology that is of essence only partly acces-
sible to their foresight” and that the individual “gambles
well” (Brunswik, cited in Peterson & Beach, 1967, p. 29).
Since the mid-1980s, a research program with roots in
social psychology has been concerned with the dynamics
of social influence and persuasion (see, e.g., Cialdini, 2001;
Cialdini & Goldstein, 2004; Sherman, Gawronski, & Trope,
2014). This research shares with the heuristics-and-biases
program the assumption that people are “cognitive misers”
who, owing to their limited mental processing resources,
aim to save time and effort when navigating the social
world (Fiske & Taylor, 1991). Yet, and this is crucial, even
cognitive misers can be motivated and enabled to allocate
more cognitive resources and to engage more extensively
with arguments. Take, for illustration, two influential mod-
els of persuasion: the heuristic-systematic model (Chaiken,
1987) and the elaboration-likelihood model (Petty &
Cacioppo, 1986). In the former, an argument is processed
systematically or heuristically; in the latter, information
processing takes either the central or the peripheral pro-
cessing route. Simply put, the models’ core notion is that
the quality of an argument will be systematically processed
(central route) only if it has high relevance or if the listener
is highly motivated. If, in contrast, listeners are on “autopi-
lot” and do not devote mental capacities to systematically
poring over arguments (see Booth-Butterfield & Welbourne,
2002; Todorov, Chaiken, & Henderson, 2002), their atti-
tudes will be shaped by peripheral cues (e.g., the exper-
tise of an argument’s source rather than its quality).
Originating in the late 1980s, the research program on
naturalistic decision making (Klein, 1999; Lipshitz, Klein,
Orasanu, & Salas, 2001) has studied how people make
decisions in complex, high-stakes, real-world settings
such as firefighting, nursing, and commercial aviation.
This program started from the premise that norms of
rational choice are not suitable for the typically ill-defined
and challenging tasks encountered by, for instance, fire-
ground commanders, in which conditions of uncertainty
and time pressure preclude any effort to generate and
comprehensively evaluate sets of options and then pick
the best one. Instead,
when people need to make a decision they can
quickly match the situation to the patterns they
have learned. If they find a clear match, they can
carry out the most typical course of action. In that
way, people can successfully make extremely rapid
decisions. The RPD [recognition-primed decision-
making] model explains how people can make
good decisions without comparing options. (Klein,
2008, p. 457)
This research program has been committed to revealing
the mechanisms behind the often impressive perfor-
mance of experts, without denying that failures may
occur (see also the joint article by Kahneman & Klein,
2009).
Another research program, initiated in the mid-1990s
(and to which one of the present authors has contrib-
uted), has studied which simple heuristics (or fast-and-
frugal heuristics) people use to make decisions and how
good those decisions are. The starting premise of this
program has been that individuals and organizations can-
not help but rely on simple heuristics in conditions of
uncertainty, lack of knowledge, and time pressure. Rather
than conceptualizing heuristics as inherently error-prone,
however, the program has provided evidence that less
information, computation, and time—conditions embod-
ied by heuristics—can help improve inferential and pre-
dictive accuracy (but may violate norms of coherence;
see Arkes, Gigerenzer, & Hertwig, 2016). This program
views the cognitive system as relying on an “adaptive
toolbox” of simple strategies, with the key to good per-
formance residing in the ability to select and match the
mind’s tools to the current social or nonsocial environment
(ecological rationality; Gigerenzer, Hertwig, & Pachur,
2011; Gigerenzer, Todd, & ABC Research Group, 1999;
Hertwig, Hoffrage, & ABC Research Group, 2013). Of
course, heuristics may still fail (e.g., when applied in the
wrong environment), but this approach emphasizes
that—relative to resource-intensive and general-purpose
normative strategies—heuristics can be surprisingly effi-
cient and robust (Gigerenzer et al., 2011).
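To give a concrete flavor of a tool from the adaptive toolbox, here is a minimal sketch of one well-known heuristic studied in this research program, take-the-best: to infer which of two options scores higher on a criterion, cues are checked one at a time in order of validity, and the first cue that discriminates decides. The cue names, profiles, and cue order below are hypothetical and purely illustrative.

```python
# A minimal sketch of the take-the-best heuristic (hypothetical cue data).
# Search cues in order of validity; stop at the first cue on which the
# options differ, and let that cue alone decide.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'A' or 'B' as favored by the first discriminating cue, or None (guess)."""
    for cue in cues_by_validity:           # most valid cue first
        a, b = option_a[cue], option_b[cue]
        if a != b:                         # cue discriminates: stop searching
            return "A" if a > b else "B"
    return None                            # no cue discriminates: guess

# Hypothetical binary cue profiles for two cities (1 = cue present).
city_a = {"is_capital": 1, "has_university": 0, "has_soccer_team": 1}
city_b = {"is_capital": 0, "has_university": 1, "has_soccer_team": 1}

# Which city has the larger population? The first cue already decides,
# so the remaining cues are never looked up.
print(take_the_best(city_a, city_b,
                    ["is_capital", "has_university", "has_soccer_team"]))  # -> "A"
```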
Most recently, an approach sometimes referred to as
Bayesian rationality (Oaksford & Chater, 2009) or the
probabilistic mind (Griffiths, Chater, Kemp, Perfors, &
Tenenbaum, 2010) has suggested that many of the rea-
soning problems used in studies that have purportedly
found irrational behaviors are in fact better understood as
probabilistic problems. From this perspective, human
rationality and higher-level cognition are best captured
not by logic but by probability theory. Human thought
thus conceptualized has been found to be “sensitive to
subtle patterns of qualitative Bayesian, probabilistic rea-
soning” (Oaksford & Chater, 2009, p. 69).
To conclude, the goal of this short conceptual history of
psychological theorizing and evidence on how people rea-
son and make decisions was to demonstrate that the nudge
approach’s portrayal of the human decision maker as sys-
tematically imperfect is not the only legitimate conception.
Several others exist, and their conclusions about human
decision-making competences tend to be less disquieting.
Our objective here is not to champion one idea over the
other. Yet if behavioral science insights into how people
make decisions are to inform public policy, it is vital to
acknowledge the existence of different views and findings—
particularly as these different approaches may suggest
different types of policy interventions, including mea-
sures that foster existing competences or build new ones.
Boosts and Nudges: Definitions and
Causal Pathways to Behavior
Thaler and Sunstein (2008) defined a nudge as “any aspect
of the choice architecture that alters people’s behavior in
a predictable way without forbidding any options or sig-
nificantly changing their economic incentives” and where
this intervention is “easy and cheap to avoid” (p. 6). Nudg-
ing thus defined includes all behavioral policies that do
not coerce people or substantially change their financial
incentives and whose point of entry is the choice archi-
tecture—that is, the external context within which indi-
viduals make decisions. Within this extensive category,
nudges often come in the form of either “non-educative”
or “educative” nudges (Sunstein, 2016). We first focus on
non-educative nudges—the innovative core of nudging
and libertarian paternalism—and return to educative
nudges later when discussing boosts aimed to improve
performance in the short term.
The intervention target of non-educative nudges is
behavior (Table 1). To causally steer behavior, non-educative
nudges harness cognitive or motivational deficiencies (e.g.,
inertia, procrastination, loss aversion; see also Rebonato,
2012) and effect corresponding changes in the choice
architecture to steer behavior in the desired direction. In
so doing, policy makers do not target features over which
people have explicit preferences (e.g., money, conve-
nience, taste, status, etc.) but rather exogenous properties
of the choice architecture that people typically claim not to
care about (e.g., position in a list, default settings, formula-
tion of semantically equivalent statements). Furthermore,
the behavior change brought about has to be easily revers-
ible, permitting the chooser to act otherwise. Because this
easy reversibility preserves individuals’ freedom of choice,
this kind of paternalism has been described as “libertarian”
in nature (Thaler & Sunstein, 2008).
Building on Grüne-Yanoff and Hertwig (2016), we
define boosts as interventions that target competences
rather than immediate behavior (Table 1). The targeted
competences can be specific to a single domain (e.g.,
financial accounting; Drexler, Fischer, & Schoar, 2014) or
generalize across domains (e.g., statistical literacy). A
boost may enlist human cognition (e.g., decision strate-
gies, procedural routines, motivational competences,
strategic use of automatic processes), the environment
(e.g., information representation or physical environ-
ment), or both. By fostering existing competences or
developing new ones, boosts are designed to enable spe-
cific behaviors. Furthermore, they have the goal of pre-
serving personal agency and enabling individuals to
exercise that agency. Consequently, if people endorse the
objectives of a boost—say, risk literacy, financial plan-
ning, healthy food choices, or implementing goals—they
can choose to adopt it; if not, they can decline to engage
with it. To this end, a boost’s objective must be transpar-
ent to the boosted individual. People can then harness
the new or “boosted” competence to make choices for
themselves (e.g., whether to undergo a medical test or
consume a particular food).
We distinguish two kinds of boosts. Some are short-
term boosts. They foster a competence, but the improve-
ment in performance is limited to a specific context.
Others are long-term boosts. Ideally, these permanently
change the cognitive and behavioral repertoire by adding
a new competence or enhancing an existing one, creat-
ing a “capital stock” (Sunstein, 2016, p. 32) that can be
engaged at will and across situations.
To appreciate this distinction, consider psychologists’
work on conditional probabilities, natural frequencies,
and Bayesian inferences.1 In the 1970s and 1980s, research-
ers within the heuristics-and-biases program (Kahneman,
2011) concluded that people systematically neglect base
rates in Bayesian inference: “the genuineness, the robust-
ness, and the generality of the base-rate fallacy are mat-
ters of established fact” (Bar-Hillel, 1980, p. 215). In the
1990s, others suggested that the mind’s statistical reason-
ing processes evolved to operate on natural frequencies
and that Bayesian computations are simpler to perform
with natural frequencies than with probabilities (the infor-
mation format used in the base-rate fallacy studies).2
Consistent with this hypothesis, Gigerenzer and Hoffrage
(1995) and Hoffrage, Lindsey, Hertwig, and Gigerenzer
(2000) showed that statistics expressed in terms of natu-
ral frequencies improved students’, patients’, doctors’,
and lawyers’ Bayesian inferences. This improvement was
achieved not by explicit instruction, but by changing the
information format in probabilistic reasoning problems
from probabilities to natural frequencies. This boost was
a short-term, context-specific fix, with no aspiration to
improve Bayesian reasoning beyond the given set of
problems.
A long-term boost of Bayesian reasoning, in contrast,
could foster people’s competence to actively translate
any probabilities they encounter into frequencies and
thereby simplify the Bayesian computations. Using a
computerized tutorial program, Sedlmeier and Gigerenzer
(2001) taught people to actively construct frequency from
probability representations, and found this newly devel-
oped competence to be robust after 15 weeks, with no
drop in performance.
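To see why natural frequencies simplify the Bayesian computation, consider the following sketch. The screening numbers are hypothetical, chosen to resemble the classic mammography-style problems used in this literature, rather than figures reported in the studies cited above.

```python
# Hypothetical screening problem (illustrative numbers only): 1% base rate,
# 80% hit rate, 9.6% false-alarm rate. What is the probability of disease
# given a positive test?

def posterior_from_probabilities(base_rate, hit_rate, false_alarm_rate):
    """Bayes's theorem applied to probability-format information."""
    true_pos = base_rate * hit_rate                   # P(disease and positive)
    false_pos = (1 - base_rate) * false_alarm_rate    # P(no disease and positive)
    return true_pos / (true_pos + false_pos)

def posterior_from_natural_frequencies():
    """The same inference in natural frequencies: just count and divide."""
    # Out of 1,000 people: 10 have the disease, of whom 8 test positive;
    # of the 990 without the disease, about 95 also test positive.
    sick_and_positive = 8
    healthy_and_positive = 95
    return sick_and_positive / (sick_and_positive + healthy_and_positive)

print(posterior_from_probabilities(0.01, 0.80, 0.096))  # ~0.078
print(posterior_from_natural_frequencies())             # ~0.078, far simpler arithmetic
```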
Recently, Sunstein (2016) introduced the notion of
educative nudges, citing reminders, warnings, and infor-
mation such as nutrition labels as examples. In our view,
educative nudges and short-term boosts largely overlap.
Both represent local fixes to a given problem and
require—in contrast to classic nudges, such as defaults—
a modicum of motivation and cognitive skill. Yet even
local fixes, if they are to be successful, require psycho-
logical knowledge on the part of the booster. The mere
provision of information is often not enough. Health sta-
tistics or nutritional information, for instance, bring no
benefits if they are opaque (e.g., reliant on condi-
tional probabilities), overwhelming (like software license
agreements), or misleading (e.g., expressed as relative
risk information; Gigerenzer, Gaissmaier, Kurz-Milcke,
Schwartz, & Woloshin, 2007).
The rest of this section will focus on the difference
between non-educative nudges and long-term boosts. To
illustrate, let us contrast a paradigmatic nudge that was
designed to boost retirement savings, namely, Save More
Tomorrow (SMT; Thaler & Benartzi, 2004), with a boost
that could be designed with the same goal in mind.
Although both policies have the same objective, the psy-
chological assumptions about the decision maker under-
lying the respective interventions differ greatly.
The SMT intervention assumes specific cognitive and
motivational deficiencies, which it enlists to increase
employees’ contributions to retirement savings accounts.
One deficiency is the present bias, a strong preference for
present over future rewards, which causes people to save
less for their old age than they should. This bias decreases
when a present reward is projected into the near future
(Loewenstein & Prelec, 1992). This change in preference
would not be expected to occur if people discounted the
future consistently. SMT harnesses this inconsistency in
discounting by not asking people to choose between
consumption now versus consumption later. Instead, it
offers a choice between consumption in the near future
(say, a year from now) and consumption later. Specifi-
cally, participants commit today to a series of increases in
contributions that are timed to coincide with salary
increases in the future. A second deficiency that the savings
program enlists is inertia. Because “nothing is what many
people will do” (Sunstein, 2014, p. 9), they typically will
not opt out of a program they are enrolled in, even when
future contributions escalate with every pay raise.
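A minimal sketch may help clarify the inconsistency in discounting that SMT harnesses. Under quasi-hyperbolic ("beta-delta") discounting, a common formalization of present bias, the ranking of a smaller-sooner and a larger-later reward can reverse when both options are projected into the future, whereas a consistent exponential discounter would rank them the same way at both horizons. The parameter values and amounts below are illustrative, not estimates from the literature.

```python
# Quasi-hyperbolic ("beta-delta") discounting: delayed rewards are discounted
# exponentially by delta per year and, in addition, by a one-off factor beta.
# Parameters and amounts are illustrative, not estimates from the literature.

def discounted_value(amount, delay_years, beta=0.7, delta=0.95):
    if delay_years == 0:
        return float(amount)               # an immediate reward escapes the beta penalty
    return beta * (delta ** delay_years) * amount

# Choice made today between consuming now and saving for a year:
print(discounted_value(100, 0))   # 100.0
print(discounted_value(120, 1))   # ~79.8  -> consumption now wins

# The same trade-off projected one year into the future (as SMT does):
print(discounted_value(100, 1))   # ~66.5
print(discounted_value(120, 2))   # ~75.8  -> now the larger, later amount wins

# With beta = 1 (consistent exponential discounting) the ranking would be
# the same at both horizons, and projecting the choice would change nothing.
```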
How, in contrast, might a boost approach achieve the
goal of increasing people’s retirement savings? As no par-
adigmatic boost has yet been proposed in this context,
let us outline a hypothetical savings boost that combines
two components known to be effective. The first is a
“simple heuristics” module. Drexler et al. (2014) found
that providing microentrepreneurs with training in basic
accounting heuristics and procedural routines signifi-
cantly improved their financial practices, objective report-
ing quality, and even their revenues. Importantly, the
impact of the “rule-of-thumb” training was significantly
larger than that of standard accounting training designed
to teach the basics of double-entry accounting, working
capital management, and investment decisions. For
example, whereas standard accounting training teaches
students to keep their business and personal accounts
separate by instructing them how to calculate business
profits, the “rule-of-thumb” training offers participants
a physical rule to keep their money in two separate
drawers (or purses) and to only transfer money
from one drawer to the other with an explicit “IOU”
note between the business and the household. At
the end of the month they could then count how
much money was in the business drawer and know
what their profits were. (Drexler et al., 2014, p. 3)
Following the same rationale of replacing factual knowl-
edge with simple heuristic procedures, a retirement
savings boost would not teach participants about interest
compounding, inflation, and risk diversification but
instead offer simple rules of thumb (e.g., a simple 1/N
diversification strategy; DeMiguel, Garlappi, & Uppal,
2009).
The second component of a hypothetical retirement
savings boost might involve fostering people’s compe-
tence to vary their sense of psychological connectedness,
that is, their sense of connection with their future self
(Ainslie, 1975; Parfit, 1987; Schelling, 1984). In the con-
text of savings, this could mean that the more aware
someone is of being the future recipient of today’s
savings, the more prepared that person will be to save for
retirement. By the same logic, someone who is estranged
from his or her future self—through lack of belief or
imagination—is less likely to save. Following this reason-
ing, Hershfield et al. (2011) presented people with ren-
derings of their future selves, made using age-progression
algorithms that forecast how physical appearances will
change over time. In all cases, participants who inter-
acted with their virtual future selves, and presumably
overcame or reduced disconnectedness, were more likely
to accept later monetary rewards over immediate ones. In
another study, participants wrote a short essay about
how they wanted to be remembered by future genera-
tions (Zaval, Markowitz, & Weber, 2015). This method
was found to be helpful in getting people to consider the
long view and promoting proenvironmental intentions
and behaviors. To the extent that people are equipped
with the psychological competence to mentally bridge
long time horizons, they themselves (rather than a choice
architect) can choose to enlist that competence whenever
they perceive asymmetries between short-term benefits
and long-term costs.
In sum, the SMT nudge does not aim to foster people’s
competences. Instead, it skillfully designs an external
choice architecture—involving automatic enrollment, pro-
jection of the choice to give up consumption into the near
future, and dynamic adjustment of savings rates—that har-
nesses cognitive and motivational deficiencies to prompt
behavior change. A savings boost, in contrast, would seek
to foster competences that would improve individuals’ sav-
ing behavior if so desired by, for instance, boosting the
ability to connect with one’s future self and teaching sim-
ple procedural rules. In short, the nudge approach steers
behavior without taking the detour of honing new compe-
tences, whereas the boost approach invests in building on
and developing people’s competences.
Let us emphasize four points. First, we do not suggest
that a savings boost would be more effective (in terms of
the rate of savings) than the SMT nudge. This is an empiri-
cal question, and we hope that the debate on the effec-
tiveness of financial education versus automatic enrollment
(e.g., Fernandes, Lynch, & Netemeyer, 2014; Willis, 2011)
will be extended to include potential (procedure-based)
boosts. Second, a nudge that affects behavior repeatedly
(e.g., daily food choices in a rearranged cafeteria) or that
lasts a number of years (e.g., SMT) may ultimately also
produce behavioral routines and engender a sense of self-
efficacy (Bandura, 1997). As a consequence, the desired
behavior may “survive” the removal of the scaffolding
choice architecture. Yet this, again, is an empirical ques-
tion. If such competences did emerge, they would be
most welcome—but this is not the explicit intention of the
SMT nudge or nudging interventions more generally.
Third, the SMT nudge and the proposed savings boost are
distinct—but not necessarily mutually exclusive—policy
interventions. Different kinds of interventions can com-
plement each other. This raises an important question that
is likely to receive more attention in the future: Under
what circumstances is a particular intervention—boost
versus nudge—more desirable (see Grüne-Yanoff,
Marchionni, & Feufel, 2016; Hertwig, in press)? Fourth,
boost interventions already exist and can be enlisted
across a range of domains. In the following, we illustrate
this point by offering a first taxonomy of boosts.
A First Taxonomy of Long-Term Boosts
Our goal is not to provide an exhaustive account but to
show just how rich this class already is (even when limit-
ing the scope of our brief review to recent work).3 One
dimension on which boosts can be classified is the
competence to be boosted.
Risk literacy boosts establish or foster the competence
to understand statistical information in domains such as
health, weather, and finances. This competence can be
achieved through (a) graphical representations (e.g.,
Lusardi etal., 2014; Spiegelhalter, Person, & Short, 2011;
Stephens, Edwards, & Demeritt, 2012), (b) experience-
based (as opposed to purely description-based) repre-
sentations (e.g., Hogarth & Soyer, 2015; Kaufmann,
Weber, & Haisley, 2013), (c) representations that avoid
biasing framing effects (e.g., absolute instead of relative
frequencies; Gigerenzer et al., 2007; Spiegelhalter et al.,
2011), (d) brief training in transforming opaque represen-
tations (e.g., single-event probabilities) into transparent
ones (e.g., frequency-based representations; Sedlmeier &
Gigerenzer, 2001), and (e) training of math skills in gen-
eral (e.g., during story time with parents; Berkowitz et al.,
2015). Boosts targeting risk literacy work as long as peo-
ple have access to actuarial information about risks.
Often, however, people need to make decisions under
uncertainty, with no explicit risk information available. In
this case, they need other mental tools.
Uncertainty management boosts establish or foster
procedural rules for making good decisions, predictions,
and assessments under uncertain conditions with the
help of (a) simple actuarial inferential methods (e.g.,
Dawes, Faust, & Meehl, 1989; Swets, Dawes, & Monahan,
2000), (b) simple rules of collective intelligence (e.g.,
Kurvers etal., 2016; Kurvers, Krause, Argenziano, Zalaudek,
& Wolf, 2015; Wolf, Krause, Carney, Bogart, & Kurvers,
2015; see also Herzog & Hertwig, 2014), and (c) fast and
frugal decision trees, simple heuristics, and procedural
routines (e.g., Drexler etal., 2014; Gigerenzer etal., 2011,
chaps. 29, 31, 32, 34, 36, 39; Hertwig & Herzog, 2009;
Jenny, Pachur, Williams, Becker, & Margraf, 2013).
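As an illustration of the simple rules of collective intelligence mentioned under (b), the following sketch pools independent binary judgments (e.g., diagnostic yes/no calls) by majority vote. The judge accuracy is hypothetical; the qualitative effect, namely that a majority of independent, better-than-chance judges outperforms the average individual, is the classic Condorcet result that such rules exploit.

```python
# Pooling independent binary judgments by majority vote (judge accuracy
# hypothetical). With independent, better-than-chance judges, the majority
# is more accurate than the average individual (Condorcet's jury theorem).

import random

def simulate_majority(n_judges, judge_accuracy, n_cases=20_000):
    """Estimate the accuracy of a simple majority of independent judges."""
    correct_cases = 0
    for _ in range(n_cases):
        correct_votes = sum(random.random() < judge_accuracy
                            for _ in range(n_judges))
        if correct_votes > n_judges / 2:   # majority got this case right
            correct_cases += 1
    return correct_cases / n_cases

print(simulate_majority(n_judges=1, judge_accuracy=0.65))  # ~0.65 (single-judge baseline)
print(simulate_majority(n_judges=5, judge_accuracy=0.65))  # ~0.76 (majority of five)
```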
Motivational boosts foster the competence to autono-
mously adjust one’s motivation, cognitive control, and
self-control through interventions such as expressive
writing (e.g., Beilock & Maloney, 2015), growth-mind-set
or sense-of-purpose exercises (e.g., Paunesku et al., 2015;
Rattan, Savani, Chugh, & Dweck, 2015), attention and
attention state training (e.g., Tang & Posner, 2009; Tang,
Tang, & Posner, 2013; see also Moffitt et al., 2011), psy-
chological connectedness training (Hershfield et al.,
2011), reward-bundling exercises (Ainslie, 1992, 2012),
the strategic use of automatic processes (i.e., harnessing
simple implementation intentions; Gollwitzer, 1999), and
training in precommitment strategies (Schelling, 1984)
and self-control strategies (e.g., see Table 30.1 in Fishbach
& Shen, 2014).
Another dimension on which boosts could be classi-
fied is the target audience. Some boosts target specific
developmental periods (e.g., childhood); others are
applicable across the adult life span (e.g., risk literacy
boosts). Some boosts target the population at large (e.g.,
Spiegelhalter et al., 2011); others target subsets of the
population, such as smokers (Tang et al., 2013), general
practitioners (Jenny et al., 2013), or diagnosticians
(Kurvers et al., 2015).
Nudges Versus Boosts: Which
Cognitive Architecture Is Assumed?
Nudges and boosts differ in the target of intervention and
the causal pathways taken to prompt behavior change
(Table 1). Nudges co-opt the decision maker’s (internal)
cognitive and motivational processes and design the
(external) choice architecture such that it, in tandem with
the (untouched) functional processes, produces a change
in behavior. Thus, nudges target behavior directly. Boosts,
in contrast, target individual competences to bring about
behavior change. Their goal is either to train the func-
tional processes or to adapt the external world (e.g., rep-
resentation of information), or both, to improve decision
making and its outcomes.
To appreciate these distinct pathways, let us first clar-
ify the concept of functional processes. A construct often
used in cognitive science, artificial intelligence, and other
disciplines is that of the cognitive architecture. It speci-
fies the “infrastructure” of an artificial or naturally evolved
information-processing system, including the mental
hardware such as memory structures for the storage of
beliefs, goals, and knowledge, as well as the functional
processes operating on that hardware, such as cognitive
algorithms, heuristics, and reasoning processes (e.g.,
Langley, Laird, & Rogers, 2009). Although psychologists
agree that the human mind is a natural information-
processing system, there is much debate about the nature
of its architecture and especially about the mind’s func-
tional processes and their rationality. Some proposals for
a cognitive architecture of the human mind are rooted in
neuroscientific findings (e.g., Anderson & Lebiere, 1998;
McClelland, Rumelhart, & PDP Research Group, 1986;
Rumelhart, McClelland, & PDP Research Group, 1986);
others are more metaphorical, with the function of gen-
erating new research hypotheses (e.g., the mind as a
Swiss army knife; Cosmides & Tooby, 1994) or summariz-
ing existing data (Kahneman, 2011). Differing assump-
tions about the mind’s functional processes also represent
important distinguishing criteria between nudging and
boosting.
Nudging
The nudge approach has its roots in the “dual-system”
view of the human cognitive architecture. According to
Kahneman (2003, 2011), the mind can be divided into
two processing systems: System 1 (or the automatic sys-
tem), which is fast, intuitive, and emotional, and System
2 (or the effortful system), which gives rise to slow, rule-
governed, and deliberate reasoning and is (emotionally)
neutral. System 1 is an efficient first-response system but
its speed and automatic processes render it susceptible to
systematic biases (“cognitive illusions”). System 2 could,
in principle, supervise System 1’s mental products and
conclusions as well as rectify biases—but it is often too
sluggish to do so.
Attempts to change behavior can thus take one of two
routes: one is to engage System 2 and foster it; the other
is to harness System 1’s deficiencies. Nudging, at least in
Thaler and Sunstein (2008; but see Jung & Mellers, 2016),
predominantly takes the latter approach. Attempts to
strengthen System 2 are rare for at least two reasons. One
is conceptual (Kahneman, 2011, p. 28). According to the
dual-process view, people’s cognitive and motivational
deficiencies are robust, often difficult to prevent, and
largely impervious to change; debiasing attempts are
often seen as futile. The fact that even experts—in busi-
ness, medicine, and politics (e.g., Bornstein & Emler,
2001; Heath, Larrick, & Klayman, 1998; Kahneman &
Renshon, 2007; Malmendier & Tate, 2005; Norman & Eva,
2010)—fall prey to cognitive illusions suggests that even
rich learning opportunities do not equip people to escape
them.
The second reason why System 2 nudges are rare
relates to another unique selling point of nudges, namely,
their cost efficiency. By putting in place simple nudges
with a large scope (e.g., “mass” default rules, automatic
enrollment), policy makers can effect substantial behavior
changes at relatively low costs. Indeed, cost efficiency in
combination with large-scale impact, that is, maximum
net benefits, has often been highlighted as a key advan-
tage of nudging relative to educating the public or, indeed,
traditional economic policies (e.g., Weber & Johnson,
2009, p. 75).
Boosting
Unlike proponents of nudging, proponents of boosting
do not share a single view of the human cognitive archi-
tecture, such as the dual-system view (see also the section “A
Plurality of Views on How Real People Reason and
Decide”). Yet, what proponents of boosting necessarily
agree on is that the functional cognitive processes and
motivational processes are malleable and worth develop-
ing. Specifically, existing mental tools can be enhanced
or a person can learn to employ new procedural rules.
Furthermore, despite its focus on boosting the mind’s
competences, this policy approach is not “introversive.”
On the contrary, competences are often best fostered by
redesigning aspects of individuals’ external environment
or by teaching them how to redesign them.
What are the theoretical foundations of boosting? In
Grüne-Yanoff and Hertwig (2016), we discussed to what
extent the necessary assumptions of nudging and boosting
are implied by a theoretical commitment to the heuristics-
and-biases program and to the simple heuristics (and eco-
logical rationality) program (Gigerenzer et al., 2011),
respectively. Our analysis of what we called policy–theory
coherence could be read to imply that boosting’s view of
the mind is that of an adaptive toolbox of ecologically
rational heuristics. In fact, we argue that boosts include—
but go beyond—simple and ecologically rational heuristics.
For instance, because boosts include motivational inter-
ventions, their development could benefit greatly from
links with programs on mind-set (Dweck, 2012) and lay
theory interventions (Yeager et al., 2016), cognitive control
and attention state training (Tang & Posner, 2009), the stra-
tegic use of automatic processes (Gollwitzer, 1999), and
knowledge of how people process arguments (in particu-
lar, factors that prompt them to invest cognitive effort in
evaluating arguments; for reviews, see Booth-Butterfield &
Welbourne, 2002; Todorov et al., 2002).
Reversibility: An Empirical Criterion
for Distinguishing Between Nudges
and Boosts
In theory, the conceptual distinction between non-
educative nudges and long-term boosts seems clear. But
once concepts hit the messy world of real-life policy
interventions, matters are rarely clear cut. Let us therefore
offer a pragmatic rule for distinguishing nudges from
boosts. Boosts seek to foster people’s cognitive and moti-
vational competences, whereas nudges adapt a choice
architecture to people’s cognitive and motivational pro-
cesses and leave them unaltered. This difference implies
a different degree of reversibility in the behavioral effects
induced (Table 1):
If, ceteris paribus, the policy maker eliminates an
efficacious (nonmonetary and nonregulatory) be-
havioral intervention and behavior reverts to its
preintervention state, then the policy is likely to be
a nudge. If, ceteris paribus, behavior persists when
an intervention is eliminated, then the policy is
more likely to be a boost.
This criterion is based on the assumption that boosts ulti-
mately change behavior (e.g., healthier food choices, bet-
ter financial decisions, comprehension of health statistics)
by enhancing existing competences or establishing new
ones and that those competences, once in place, remain
stable over time. Consequently, the implied behavioral
effects should persist once the intervention is removed
and if the implied behavior is congruent with the per-
son’s value system. Nudges, in contrast, change behavior
by adapting the choice architecture, leaving individual
competences unchanged. Consequently, once the inter-
vention is removed, behavior is likely to revert to the
prenudging state.
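Schematically, the criterion can be rendered as a simple decision rule. The function and its inputs below are hypothetical placeholders; any real classification would of course rest on experimental observations before, during, and after the intervention, ceteris paribus.

```python
# A schematic rendering of the reversibility criterion (all inputs are
# hypothetical placeholders; real classification would rest on observed
# behavior before, during, and after an intervention, ceteris paribus).

def classify_intervention(behavior_during, behavior_after_removal, baseline):
    if behavior_during == baseline:
        return "not efficacious: the criterion does not apply"
    if behavior_after_removal == baseline:
        return "likely a nudge: behavior reverted to its preintervention state"
    return "more likely a boost: the effect persisted after removal"

print(classify_intervention("higher savings rate", "baseline rate", "baseline rate"))
print(classify_intervention("higher savings rate", "higher savings rate", "baseline rate"))
```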
One important qualification to this criterion is worthy
of note. As mentioned earlier, nudges that affect behavior
repeatedly may produce behavioral routines through
learning that “survive” the removal of the nudge in the
choice architecture. In such cases, our empirical criterion
indicates that the nudge intervention has a boosting “side
effect”: By changing the choice context and harnessing
cognitive and motivational deficiencies to affect behavior,
the nudge inadvertently affects the cognitive and motiva-
tional processes themselves. The nudge has thus turned
into a boost and had lasting effects.
The Vision Behind Boosts
In response to our distinction between nudging and
boosting (in Grüne-Yanoff & Hertwig, 2016), Sunstein
(2016) noted, “some of the best nudges are boosts”
(p. 10), and he described educative nudges (e.g., disclo-
sure requirements, warnings, nutrition labels, reminders)
as an attempt
to strengthen System 2 by improving the role of
deliberation and people’s considered judgments.
One example is disclosure of relevant (statistical)
information, framed in a way that people can
understand it. These kinds of nudges, sometimes
described as “boosts,” attempt to improve people’s
capacity to make choices for themselves. (Sunstein,
2016, p. 52)
Given this description, one might indeed conclude that
boosts are simply a special kind of nudge, even if their
objectives and aspirations differ. Yet there are clear differ-
ences. Take, for illustration, the case of risk literacy, men-
tioned in our taxonomy of boosts. Thaler and Sunstein
(2008) emphasized—and we believe rightly so—that
“choice architecture is inevitable, and hence certain influ-
ences on choices are also inevitable” (p. 21). This means,
however, that no governmental policy maker has full
control over how, for instance, players in the medical
marketplace—pharmaceutical companies, governments,
doctors, patient groups, and so on—communicate health
statistics. The vision behind boosting is to equip individ-
uals with, for instance, risk literacy competences that are
applicable across a wide range of circumstances, includ-
ing those that will not be reached by mandated disclo-
sure requirements, warnings, and labels. The notion of
educative nudges in Sunstein (2016) does not embrace
this more encompassing goal of empowering people
who will inevitably face commercially constructed choice
architectures and industry nudges. Nor is such empower-
ment part of Thaler and Sunstein’s (2008) vision of nudg-
ing. In fact, the notion of enhancing competences plays,
if at all, a marginal role in their book—words such as
“competence,” “knowledge,” “skills,” and “empowerment”
do not even feature as entries in the book’s index.
Nudges and Boosts and Their
Normative Implications
Of course, it is important to consider efficiency, effective-
ness, and welfare when choosing between the two kinds
of policy interventions. In addition, nudges and boosts
have different implications with respect to normative
dimensions of policy interventions. We briefly discuss two
such normative dimensions: transparency and autonomy.
Hard paternalistic interventions such as laws (manda-
tory seatbelt use), bans (on smoking in public places),
and financial disincentives (taxes on cigarettes) are visi-
ble and transparent (Glaeser, 2006). Citizens can there-
fore scrutinize them and hold governments accountable.
Some have argued that nudges are less transparent.
Indeed, some nudges may operate behind the chooser’s
back and therefore appear manipulative (e.g., Conly,
2012; Wilkinson, 2013). Default rules can be criticized on
these grounds—they take advantage of people’s assumed
inertia and skirt conscious deliberation, meaning that
they are perhaps not as easily reversible as thought and
thus fail to meet the criterion of freedom of choice. Fur-
thermore, even if default rules are completely transparent
(and they often are—think automatic enrollment in
savings plans), a person’s ability to discern an interven-
tion as such (e.g., a default) is distinct from the ability to
discern how it changes their behavior—particularly if the
direction of the effect is counterintuitive. To the extent
that people are unable to fathom the underlying mecha-
nism that brings about the change in behavior, this
reduces transparency.
Boosts, in comparison, require the individual’s active
cooperation. They therefore need to be explicit, visible,
and transparent. The requirement of cooperation also
implies individual judgment and engagement. This, in
turn, implies—according to dominant notions of auton-
omy (Buss, 2014)—that boosts are more respectful of
autonomy than nudges are. This holds in particular for
those nudges that seek to bypass people’s “capacity for
reflection and deliberation” (Sunstein, 2016, p. 64).4
Individuals choose to engage or not to engage with a
boost. The policy maker is therefore entitled to assume
that a chosen boost reflects the individual’s genuine moti-
vation. A successful nudge does not necessarily reflect
such genuine motivations. Of course, the hope is that
policy makers, informed by data and the public discourse,
aim to promote people’s own ends, as they understand
them (Sunstein, 2014). Genuine motivations are often
seen as the proper evidential basis of welfare consider-
ations (e.g., Hausman, 2012). Therefore, the distinction
between boosts and nudges implies that boosts are more
likely to respect such considerations. Of course, this does
not necessarily mean that boosts are as successful as or
more successful than nudges in achieving a desired goal
(e.g., higher contributions to retirement plans).
Addressing Potential Misconceptions
About Boosts
Various misconceptions and oversimplifications exist
regarding nudging as a policy intervention. Some mis-
conceptions about boosting are likewise to be antici-
pated. We next address some of them.
Boosting is not the same as school
education
Boosting, as we conceptualize it, is not identical to school
education, although some boosts (e.g., representation
training, growth mind-set interventions) could easily be
included in school curricula. Of course, schools have the
task of providing students with knowledge and compe-
tences and thus do boost the individual mind. However,
the policy interventions we have in mind differ from
school education in several respects. First, the primary
goal of boosts is not to offer accurate declarative knowl-
edge and cultural skills such as reading, writing, gram-
mar, and algebra. Instead, boosts offer competences in
domains that are not typically addressed in school curri-
cula, such as good financial decision making, accurate
risk assessment, healthy food choices, informed medical
decisions, and effective self-regulation. Second, boosts,
like nudges, should be informed by behavioral science
evidence. This is not necessarily the case for what is
being taught in schools. Third, boosts aim to foster or
develop new competences under conditions of limited
time and resources (on the part of the target audience
and the policy makers) and typically in an adult citizenry
that cannot be subjected to years of additional schooling.
Fourth, the focus of boosts is typically on actionable
motivational and decisional competences (e.g., proce-
dural routines, heuristics, goal implementation skills) and
not on information per se. Fifth, boosts often are “just-in-
time” interventions, whereas school education provides
knowledge and competences on a schedule. In all likeli-
hood, people are most motivated to develop a new com-
petence when they experience a specific need for it.
Finally, boosts, as understood here, are interventions that
preserve and enable individuals’ personal agency and
autonomy. Admittedly, if boosts were included in a man-
datory school curriculum, the autonomy of the to-be-
boosted person (the student) would be curtailed.
Boosts need not be costly
Nudges are envisioned to be inexpensive policy mea-
sures. Indeed, some modifications of the choice architec-
ture can be made at low cost. They scale up and promise
immediate results. A default rule can, for instance, be
changed by government mandate (e.g., from opting in to
opting out). Changes in default rules also require mini-
mal effort on the part of the nudged individual; in fact,
sometimes the nudge rests on the very assumption that
individuals will do nothing. In contrast, boosts often
require investments in time, effort, and motivation on the
part of both the individual and the policy maker. Yet,
although boosts are rarely no-cost interventions, many of
them are low cost. The necessary time investment can be
as little as a few minutes (e.g., expressive writing, Beilock
& Maloney, 2015), or no more than a few hours (growth
mind-set and sense-of-purpose interventions, Paunesku
etal., 2015; representation training, Drexler etal., 2014;
Sedlmeier & Gigerenzer, 2001). Admittedly, the policy
maker faces the costs of setting up learning opportunities
for such interventions to be offered.
The domains of boosts are not
completely orthogonal to those of
nudging
Boosts and nudges are, of course, not perfect substitutes.
For instance, no nudge has been implemented to reduce
math anxiety (Beilock & Maloney, 2015; Maloney & Beilock,
2012) or foster transparent communication of health risks
(Gigerenzer etal., 2007). In these cases, policy makers have
only one choice. Yet there are domains in which either
nudges or boosts could be used, including food choices,
financial decisions, and self-control problems. In each of
these classes, individuals’ competences can be boosted,
nudged, or both. Our introductory example of the SMT
nudge versus the savings boost illustrates that policy mak-
ers have a choice. As we emphasized before, which of the
two interventions is more efficient is, of course, an empiri-
cal issue. Our goal is not to champion one over the other
but to highlight the need for an analysis of the respective
circumstances and goals, allowing policy makers to select
the more appropriate intervention (Grüne-Yanoff et al.,
2016). Hertwig (in press) has discussed rules that policy
makers can apply to determine under what conditions
boosts, relative to nudges, are the preferable form of non-
monetary and nonregulatory intervention.
The Public Policy Maker’s Choice
Conceptual clarity is the key to understanding the tool-
box available to public policy makers and appreciating
each tool’s pros and cons. Although two tools may aim to
bring about the same behavioral effects, they can tread
different causal pathways. For instance, Thaler and
Sunstein (2008) have strictly distinguished nudges from
measures that change behavior through economic incen-
tives. Aiming for the same kind of conceptual clarity, we
have argued that (at least) two evidence-informed kinds
of nonregulatory and nonmonetary interventions should
be distinguished. Nudging and boosting represent differ-
ent causal pathways to behavior change. Making this dis-
tinction explicit contributes to the normative debate on
behavioral policies, and it offers policy makers a choice.
Acknowledgments
We are grateful to the members of the Center for Adaptive Ratio-
nality and to Cass Sunstein for many helpful discussions, and we
thank Susannah Goss and Anita Todd for editing the manuscript.
Declaration of Conflicting Interests
The authors declared that they had no conflicts of interest with
respect to their authorship or the publication of this article.
Funding
This research was supported by a grant from the German
Research Foundation (DFG) to Ralph Hertwig (HE 2768/7-2).
Notes
1. Bayesian inferences are statistical inferences that in the sim-
plest case encompass two mutually exclusive hypotheses (e.g., having
breast cancer or not having it) and a datum such as the
outcome of a medical test (e.g., a mammogram). Bayes’s
theorem is a mathematical formula that combines pieces of
probability information—i.e., the base rate of the hypothesis
(e.g., breast cancer is present), likelihood information (the true-
positive rate and the false-positive rate of the test), and a new
datum (e.g., a positive test result)—to arrive at the posterior
probability (e.g., the probability that someone with a positive
mammogram result actually has breast cancer).
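To make the computation in this note concrete, here is a worked instance of Bayes’s theorem in the terms just defined. The figures (a 1% base rate, an 80% true-positive rate, and a 9.6% false-positive rate) are illustrative values commonly used in teaching examples, not numbers reported in this article:

\[
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
= \frac{.80 \times .01}{.80 \times .01 + .096 \times .99} \approx .078.
\]

Under these illustrative figures, a woman with a positive mammogram would have only about an 8% probability of actually having breast cancer.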
2. Natural frequencies refer to the outcomes of natural sam-
pling—that is, the acquisition of information by updating event
frequencies without artificially fixing the marginal frequencies.
Unlike probabilities and relative frequencies, natural frequen-
cies are raw observations that have not been normalized with
respect to the base rates of the event in question.
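A minimal Python sketch, reusing the hypothetical figures from the example under Note 1 (the population of 1,000 and all function names are ours, for illustration only), shows how the natural-frequency computation reduces to a simple ratio of raw counts and agrees with Bayes’s theorem over normalized probabilities:

```python
# Illustrative only: posterior P(cancer | positive test) computed two ways,
# with hypothetical figures (1% base rate, 80% true-positive rate,
# 9.6% false-positive rate) matching the worked example under Note 1.

def posterior_from_probabilities(base_rate, true_pos_rate, false_pos_rate):
    # Bayes's theorem over normalized probabilities.
    joint = true_pos_rate * base_rate
    return joint / (joint + false_pos_rate * (1 - base_rate))

def posterior_from_natural_frequencies(population=1000, base_rate=0.01,
                                       true_pos_rate=0.80, false_pos_rate=0.096):
    # Natural sampling: raw counts whose marginals are not artificially fixed.
    sick = round(population * base_rate)                             # 10 of 1,000
    sick_positive = round(sick * true_pos_rate)                      # 8 of the 10
    healthy_positive = round((population - sick) * false_pos_rate)   # 95 of 990
    return sick_positive / (sick_positive + healthy_positive)        # 8 / (8 + 95)

print(posterior_from_probabilities(0.01, 0.80, 0.096))   # ~0.0776
print(posterior_from_natural_frequencies())               # ~0.0777
```

With whole-person counts, the posterior is simply 8/(8 + 95), which is why natural frequencies make the calculation easier to grasp than the probability format.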
3. Comprehensive frameworks for the classification of evidence-
informed behavioral change interventions already exist (e.g.,
Michie, van Stralen, & West, 2011). However, because frameworks such as
the behavior change wheel (Michie et al., 2011) include inter-
ventions that go far beyond those targeted by the nudging and
boosting approaches (e.g., coercion, incentivization, and restric-
tion of choice), we will not consider them further here.
Within the behavior change wheel, the boost interventions we
consider here would be classified under “education,” “training,”
“environmental restructuring,” “modeling,” and “enablement.”
4. Yet boosted competences can, of course, be employed to
restrain other people’s autonomy. For example, by coaching
parents to engage in playful bedtime math with their children
(Berkowitz etal., 2015), one might boost parents’ ability to steer
their children’s behavior. Parents then, without loss of auton-
omy, participate in a routine that may curtail their children’s
autonomy.
References
Ainslie, G. (1975). Specious reward: A behavioral theory of
impulsiveness and impulse control. Psychological Bulletin,
82, 463–496. doi:10.1037/h0076860
Ainslie, G. (1992). Picoeconomics: The strategic interaction of
successive motivational states within the person. Cambridge,
England: Cambridge University Press.
Ainslie, G. (2012). Pure hyperbolic discount curves predict
“eyes open” self-control. Theory and Decision, 73, 3–34.
doi:10.1007/s11238-011-9272-5
Alemanno, A., & Sibony, A. (2015). Nudge and the law: A
European perspective. Oxford, England: Hart. doi:10.5040/
9781474203463
Anderson, J. R., & Lebiere, C. (1998). The atomic compo-
nents of thought. Mahwah, NJ: Lawrence Erlbaum. doi:10
.4324/9781315805696
Arkes, H. R., Gigerenzer, G., & Hertwig, R. (2016). How bad is
incoherence? Decision, 3, 20–39. doi:10.1037/dec0000043
Bandura, A. (1997). Self-efficacy: The exercise of control. New
York, NY: Worth.
Bar-Hillel, M. (1980). The base rate fallacy in probability judg-
ments. Acta Psychologica, 44, 211–233. doi:10.1016/0001-
6918(80)90046-3
Barton, A., & Grüne-Yanoff, T. (2015). From libertarian pater-
nalism to nudging—And beyond. Review of Philosophy and
Psychology, 6, 341–359. doi:10.1007/s13164-015-0268-x
Beilock, S. L., & Maloney, E. A. (2015). Math anxiety: A fac-
tor in math achievement not to be ignored. Policy
Insights From the Behavioral and Brain Sciences, 2, 4–12.
doi:10.1177/2372732215601438
Berkowitz, T., Schaeffer, M. W., Maloney, E. A., Peterson, L.,
Gregor, C., Levine, S. C., & Beilock, S. L. (2015). Math at
home adds up to achievement in school. Science, 350, 196–
198. doi:10.1126/science.aac7427
Beshears, J., Choi, J. J., Laibson, D., & Madrian, B. C. (2010).
The impact of employer matching on savings plan partic-
ipation under automatic enrollment. In D. A. Wise (Ed.),
Research findings in the economics of aging (pp. 311–327).
Chicago, IL: University of Chicago Press. doi:10.7208/chi
cago/9780226903088.003.0012
Booth-Butterfield, S., & Welbourne, J. (2002). The elaboration
likelihood model. In J. P. Dillard & M. Pfau (Eds.), The
persuasion handbook: Developments in theory and practice
(pp. 153–173). Thousand Oaks, CA: Sage.
Bornstein, B. H., & Emler, A. C. (2001). Rationality in medi-
cal decision making: A review of the literature on doc-
tors’ decision-making biases. Journal of Evaluation in
Clinical Practice, 7, 97–107. doi:10.1046/j.1365-2753.2001
.00284.x
Bovens, L. (2009). The ethics of nudge. In T. Grüne-Yanoff &
S. O. Hansson (Eds.), Preference change: Approaches from
philosophy, economics and psychology (pp. 207–219). Berlin,
Germany: Springer. doi:10.1007/978-90-481-2593-7_10
Buss, S. (2014). Personal autonomy. In E. N. Zalta (Ed.), The
Stanford encyclopedia of philosophy (Winter 2014 ed.).
Retrieved from https://plato.stanford.edu/entries/personal-
autonomy/
Butrica, B. A., & Karamcheva, N. S. (2015). The relationship
between automatic enrollment and DC plan contributions:
Evidence from a national survey of older workers (Center
for Retirement Research at Boston College Working Paper,
CRR WP 2015-14). Retrieved from http://crr.bc.edu/wp-
content/uploads/2015/06/wp_2015-14.pdf
Chaiken, S. (1987). The heuristic model of persuasion. In M. P.
Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence:
The Ontario symposium (Vol. 5, pp. 3–39). Hillsdale, NJ:
Lawrence Erlbaum.
Cialdini, R. B. (2001). Influence: Science and practice (4th ed.).
Boston, MA: Allyn & Bacon.
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence:
Compliance and conformity. Annual Review of Psychology,
55, 591–621. doi:10.1146/annurev.psych.55.090902.142015
Conly, S. (2012). Against autonomy: Justifying coercive pater-
nalism. Cambridge, England: Cambridge University Press.
doi:10.1017/cbo9781139176101
Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct
blindness: Toward an evolutionarily rigorous cognitive
science. Cognition, 50, 41–77. doi:10.1016/0010-0277(94)
90020-5
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus
actuarial judgment. Science, 243, 1668–1674. doi:10.1126/
science.2648573
DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal ver-
sus naïve diversification: How inefficient is the 1/N port-
folio strategy? Review of Financial Studies, 22, 1915–1953.
doi:10.1093/rfs/hhm075
Drexler, A., Fischer, G., & Schoar, A. (2014). Keeping it simple:
Financial literacy and rules of thumb. American Economic
Journal: Applied Economics, 6, 1–31. doi:10.1257/app.6.2.1
Dweck, C. S. (2012). Mindset: How you can fulfill your poten-
tial. New York, NY: Constable & Robinson.
Edwards, W. (1983). Human cognitive capabilities, represen-
tativeness, and ground rules for research. Advances in
Psychology, 14, 507–513. doi:10.1016/S0166-4115(08)62254-2
Fernandes, D., Lynch, J. G., & Netemeyer, R. G. (2014). Financial
literacy, financial education, and downstream finan-
cial behaviors. Management Science, 60, 1861–1883.
doi:10.1287/mnsc.2013.1849
Fishbach, A., & Shen, L. (2014). The explicit and implicit ways
of overcoming temptation. In J. W. Sherman, B. Gawronski,
& Y. Trope (Eds.), Dual process theories in the social mind
(pp. 454–467). New York, NY: Guilford.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.).
New York, NY: McGraw-Hill.
Gigerenzer, G. (1996). On narrow norms and vague heuristics:
A reply to Kahneman and Tversky (1996). Psychological
Review, 103, 592–596. doi:10.1037/0033-295X.103.3.592
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M.,
& Woloshin, S. (2007). Helping doctors and patients to make
sense of health statistics. Psychological Science in the Public
Interest, 8, 53–96. doi:10.1111/j.1539-6053.2008.00033.x
Gigerenzer, G., Hertwig, R., & Pachur, T. (Eds.). (2011).
Heuristics: The foundations of adaptive behavior. Oxford,
England: Oxford University Press. doi:10.1093/acprof:
oso/9780199744282.001.0001
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian
reasoning without instruction: Frequency formats. Psy-
chological Review, 102, 684–704. doi:10.1037/0033-295X
.102.4.684
Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999).
Simple heuristics that make us smart. New York, NY:
Oxford University Press.
Glaeser, E. L. (2006). Paternalism and psychology. University of
Chicago Law Review, 73, 133–156.
Gollwitzer, P. M. (1999). Implementation intentions: Strong
effects of simple plans. American Psychologist, 54, 493–503.
doi:10.1037/0003-066X.54.7.493
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum,
J. B. (2010). Probabilistic models of cognition: Exploring
representations and inductive biases. Trends in Cognitive
Sciences, 14, 357–364. doi:10.1016/j.tics.2010.05.004
Grüne-Yanoff, T., & Hertwig, R. (2016). Nudge versus boost:
How coherent are policy and theory? Minds and Machines,
26, 149–183. doi:10.1007/s11023-015-9367-9
Grüne-Yanoff, T., Marchionni, C., & Feufel, M. (2016). The ecologi-
cal rationality of behavioural policies: How to choose between
boosts and nudges. Manuscript submitted for publication.
Halpern, D. (2015). Inside the nudge unit: How small changes
can make a big difference. New York, NY: Random House.
Hausman, D. M. (2012). Preference, value, choice, and wel-
fare. Cambridge, England: Cambridge University Press.
doi:10.1017/CBO9781139058537
Heath, C., Larrick, R. P., & Klayman, J. (1998). Cognitive repairs:
How organizational practices can compensate for individ-
ual shortcomings. Research in Organizational Behavior,
20, 1–37.
Hershfield, H. E., Goldstein, D. G., Sharpe, W. F., Fox, J.,
Yeykelis, L., Carstensen, L. L., & Bailenson, J. N. (2011).
Increasing saving behavior through age-progressed ren-
derings of the future self. Journal of Marketing Research,
48(suppl.), S23–S37. doi:10.1509/jmkr.48.SPL.S23
Hertwig, R. (in press). When to consider boosting? Some rules
for policymakers. Behavioural Public Policy.
Hertwig, R., & Herzog, S. M. (2009). Fast and frugal heuristics:
Tools of social rationality. Social Cognition, 27, 661–698.
doi:10.1521/soco.2009.27.5.661
Hertwig, R., Hoffrage, U., & ABC Research Group. (2013).
Simple heuristics in a social world. New York, NY: Oxford
University Press.
Herzog, S. M., & Hertwig, R. (2014). Harnessing the wisdom of
the inner crowd. Trends in Cognitive Sciences, 18, 504–506.
doi:10.1016/j.tics.2014.06.009
Hoffrage, U., Lindsey, S., Hertwig, R., & Gigerenzer, G. (2000).
Communicating statistical information. Science, 290, 2261–
2262. doi:10.1126/science.290.5500.2261
Hogarth, R. M., & Soyer, E. (2015). Providing information for
decision making: Contrasting description and simulation.
Journal of Applied Research in Memory and Cognition, 4,
221–228. doi:10.1016/j.jarmac.2014.01.005
House of Lords Science and Technology Select Committee.
(2011). Behaviour change (2nd report of session 2010-12,
HL Paper 179). London, England: Stationery Office Limited.
Retrieved from http://www.publications.parliament.uk/pa/
ld201012/ldselect/ldsctech/179/179.pdf
Jenny, M. A., Pachur, T., Williams, S. L., Becker, E., & Margraf,
J. (2013). Simple rules for detecting depression. Journal of
Applied Research in Memory and Cognition, 2, 149–157.
doi:10.1016/j.jarmac.2013.06.001
Jung, J. Y., & Mellers, B. A. (2016). American attitudes toward
nudges. Judgment and Decision Making, 11, 62–74.
Kahneman, D. (2003). A perspective on judgment and choice:
Mapping bounded rationality. American Psychologist, 58,
697–720. doi:10.1037/0003-066X.58.9.697
Kahneman, D. (2011). Thinking, fast and slow. New York, NY:
Macmillan.
Kahneman, D., & Klein, G. (2009). Conditions for intuitive
expertise: A failure to disagree. American Psychologist, 64,
515–526. doi:10.1037/a0016755
Kahneman, D., & Renshon, J. (2007). Why hawks win. Foreign
Policy, 158, 34–38.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under
uncertainty: Heuristics and biases. Cambridge, England:
Cambridge University Press. doi:10.1017/CBO9780511809477
Kahneman, D., & Tversky, A. (1996). On the reality of cog-
nitive illusions. Psychological Review, 103, 582–591. doi:
10.1037/0033-295X.103.3.582
Kaufmann, C., Weber, M., & Haisley, E. (2013). The role of
experience sampling and graphical displays on one’s
investment risk appetite. Management Science, 59, 323–
340. doi:10.1287/mnsc.1120.1607
Klein, G. (1999). Sources of power: How people make decisions.
Cambridge, MA: MIT Press.
Klein, G. (2008). Naturalistic decision making. Human Factors,
50, 456–460. doi:10.1518/001872008X288385
Kurvers, R. H. J. M., Herzog, S. M., Hertwig, R., Krause, J.,
Carney, P. A., Bogart, A., . . . Wolf, M. (2016). Boosting
medical diagnostics by pooling independent judgments.
Proceedings of the National Academy of Sciences USA, 113,
8777–8782. doi:10.1073/pnas.1601827113
Kurvers, R. H. J. M., Krause, J., Argenziano, G., Zalaudek, I., &
Wolf, M. (2015). Detection accuracy of collective intelligence
assessments for skin cancer diagnosis. JAMA Dermatology,
151, 1–8. doi:10.1001/jamadermatol.2015.3149
Langley, P., Laird, J. E., & Rogers, S. (2009). Cognitive archi-
tectures: Research issues and challenges. Cognitive Systems
Research, 10, 141–160. doi:10.1016/j.cogsys.2006.07.004
Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Taking
stock of naturalistic decision making. Journal of Behavioral
Decision Making, 14, 331–352. doi:10.1002/bdm.381
Loewenstein, G., & Prelec, D. (1992). Anomalies in intertem-
poral choice: Evidence and an interpretation. Quarterly
Journal of Economics, 107, 573–597. doi:10.2307/2118482
Lourenco, J. S., Ciriolo, E., Almeida, S. R., & Troussard, X.
(2016). Behavioural insights applied to policy: European
Report 2016 (EUR 27726 EN). Brussels, Belgium: European
Commission Joint Research Centre. doi:10.2760/903938
Lusardi, A., Samek, A. S., Kapteyn, A., Glinert, L., Hung, A., &
Heinberg, A. (2014). Visual tools and narratives: New ways
to improve financial literacy (Global Financial Literacy
Excellence Center Working Paper No. 2014-1; Becker
Friedman Institute for Research in Economics Working
Paper No. 2585231). Retrieved from https://ssrn.com/
abstract=2585231
Malmendier, U., & Tate, G. (2005). CEO overconfidence and
corporate investment. Journal of Finance, 60, 2661–2700.
doi:10.1111/j.1540-6261.2005.00813.x
Maloney, E. A., & Beilock, S. L. (2012). Math anxiety: Who
has it, why it develops, and how to guard against it.
Trends in Cognitive Sciences, 16, 404–406. doi:10.1016/j
.tics.2012.06.008
McClelland, J. L., Rumelhart, D. E., & PDP Research Group.
(1986). Parallel distributed processing: Explorations in the
microstructure of cognition (Vol. 2). Cambridge, MA: MIT
Press.
Michie, S., van Stralen, M. M., & West, R. (2011). The behav-
iour change wheel: A new method for characterising and
designing behaviour change interventions. Implementation
Science, 6(1), 42. doi:10.1186/1748-5908-6-42
Moffitt, T. E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R.
J., Harrington, H., . . . Caspi, A. (2011). A gradient of child-
hood self-control predicts health, wealth, and public safety.
Proceedings of the National Academy of Sciences USA, 108,
2693–2698. doi:10.1073/pnas.1010076108
Norman, G. R., & Eva, K. W. (2010). Diagnostic error and clini-
cal reasoning. Medical Education, 44, 94–100. doi:10.1111/
j.1365-2923.2009.03507.x
Oaksford, M., & Chater, N. (2009). Précis of Bayesian ratio-
nality: The probabilistic approach to human reasoning.
Behavioral & Brain Sciences, 32, 69–84. doi:10.1017/S0140
525X09000284
Parfit, D. (1987). Reasons and persons. Oxford, England:
Clarendon.
Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D.
S., & Dweck, C. S. (2015). Mind-set interventions are a scalable
treatment for academic underachievement. Psychological
Science, 26, 784–793. doi:10.1177/0956797615571017
Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive stat-
istician. Psychological Bulletin, 68, 29–46. doi:10.1037/
h0024722
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood
model of persuasion. In L. Berkowitz (Ed.), Advances in
experimental social psychology (pp. 123–205). New York,
NY: Academic Press.
Phillips, L. D. (1983). A theoretical perspective on heuristics and
biases in probabilistic thinking. Advances in Psychology,
14, 525–543. doi:10.1016/S0166-4115(08)62256-6
Rattan, A., Savani, K., Chugh, D., & Dweck, C. S. (2015).
Leveraging mindsets to promote academic achievement:
Policy recommendations. Perspectives on Psychological
Science, 10, 721–726. doi:10.1177/1745691615599383
Rebonato, R. (2012). Taking liberties: A critical examination of
libertarian paternalism. New York, NY: Palgrave Macmillan.
Rumelhart, D. E., McClelland, J. L., & PDP Research Group. (1986).
Parallel distributed processing: Explorations in the micro-
structure of cognition (Vol. 1). Cambridge, MA: MIT Press.
Schelling, T. C. (1984). Self-command in practice, in policy, and
in a theory of rational choice. American Economic Review,
74, 1–11.
Sedlmeier, P., & Gigerenzer, G. (2001). Teaching Bayesian
reasoning in less than two hours. Journal of Experimental
Psychology: General, 130, 380–400. doi:10.1037/0096-3445
.130.3.380
Sherman, J. W., Gawronski, B., & Trope, Y. (Eds.). (2014). Dual-
process theories of the social mind. New York, NY: Guilford.
Simon, H. (1990). Reason in human affairs. Palo Alto, CA:
Stanford University Press.
Spiegelhalter, D., Pearson, M., & Short, I. (2011). Visualizing
uncertainty about the future. Science, 333, 1393–1400.
doi:10.1126/science.1191181
Stephens, E. M., Edwards, T. L., & Demeritt, D. (2012).
Communicating probabilistic information from climate
model ensembles: Lessons from numerical weather predic-
tion. Wiley Interdisciplinary Reviews: Climate Change, 3,
409–426. doi:10.1002/wcc.187
Sunstein, C. R. (2014). Why nudge? The politics of libertarian
paternalism. New Haven, CT: Yale University Press.
Sunstein, C. R. (2016). The ethics of influence: Government
in the age of behavioral science. Cambridge, England:
Cambridge University Press.
Swets, J. A., Dawes, R. M., & Monahan, J. (2000). Psychological
science can improve diagnostic decisions. Psychological
Science in the Public Interest, 1, 1–26. doi:10.1111/1529-
1006.001
Tang, Y.-Y., & Posner, M. I. (2009). Attention training and atten-
tion state training. Trends in Cognitive Sciences, 13, 222–
227. doi:10.1016/j.tics.2009.01.009
Tang, Y.-Y., Tang, R., & Posner, M. I. (2013). Brief medita-
tion training induces smoking reduction. Proceedings of
the National Academy of Sciences USA, 110, 13971–13975.
doi:10.1073/pnas.1311887110
Teachman, B. A., Norton, M. I., & Spellman, B. A. (2015).
Memos to the president from a “Council of Psychological
Science Advisers.” Perspectives on Psychological Science,
10, 697–700. doi:10.1177/1745691615605829
Thaler, R., & Benartzi, S. (2004). Save More Tomorrow: Using
behavioral economics to increase employee savings. Journal
of Political Economy, 112, 164–187. doi:10.1086/380085
Thaler, R., & Sunstein, C. R. (2008). Nudge: Improving decisions
about health, wealth and happiness. New York, NY: Simon
& Schuster.
Todorov, A., Chaiken, S., & Henderson, M. D. (2002). The
heuristic-systematic model of social information process-
ing. In J. P. Dillard & M. Pfau (Eds.), The persuasion hand-
book: Developments in theory and practice (pp. 195–211).
Thousand Oaks, CA: Sage.
Tversky, A., & Kahneman, D. (1974). Judgment under uncer-
tainty: Heuristics and biases. Science, 185, 1124–1131.
Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and
decision making. Annual Review of Psychology, 60, 53–85.
doi:10.1146/annurev.psych.60.110707.163633
Wilkinson, T. M. (2013). Nudging and manipulation. Political
Studies, 61, 341–355. doi:10.1111/j.1467-9248.2012.00974.x
Willis, L. (2011). The financial education fallacy. American
Economic Review, 101, 429–434. doi:10.1257/aer.101.3.429
Wolf, M., Krause, J., Carney, P. A., Bogart, A., & Kurvers, R. H.
J. M. (2015). Collective intelligence meets medical decision-
making: The collective outperforms the best radiologist.
PLoS ONE, 10, e0134269. doi:10.1371/journal.pone.0134269
World Bank. (2015). World development report 2015: Mind,
society and behaviour. Washington, DC: Author. Retrieved
from http://www.worldbank.org/en/publication/wdr2015
Yeager, D. S., Walton, G. M., Brady, S. T., Akcinar, E. N.,
Paunesku, D., Keane, L., . . . Gomez, E. M. (2016). Teaching
a lay theory before college narrows achievement gaps at
scale. Proceedings of the National Academy of Sciences
USA, 113, E3341–E3348. doi:10.1073/pnas.1524360113
Zaval, L., Markowitz, E. M., & Weber, E. U. (2015). How will I
be remembered? Conserving the environment for the sake
of one’s legacy. Psychological Science, 26, 231–236. doi:10
.1177/0956797614561266