Judgment and Decision Making, Vol. 3, No. 3, March 2008, pp. 205–214
Challenging some common beliefs: Empirical work within the
adaptive toolbox metaphor
Arndt Bröder∗
University of Bonn and Max Planck Institute for Research on Collective Goods
Ben R. Newell
University of New South Wales
Abstract
The authors review their own empirical work inspired by the adaptive toolbox metaphor. The review examines factors
influencing strategy selection and execution in multi-attribute inference tasks (e.g., information costs, time pressure,
memory retrieval, dynamic environments, stimulus formats, intelligence). An emergent theme is the re-evaluation of
contingency model claims about the elevated cognitive costs of compensatory in comparison with non-compensatory
strategies. Contrary to common assertions about the impact of cognitive complexity, the empirical data suggest that
manipulated variables exert their influence at the meta-level of deciding how to decide (i.e., which strategy to select)
rather than at the level of strategy execution. An alternative conceptualisation of strategy selection, namely threshold
adjustment in an evidence accumulation model, is also discussed and the difficulty in distinguishing empirically between
these metaphors is acknowledged.
Keywords: strategy selection, contingency model, cognitive costs
1 Introduction
Over (2003) points out that many evolutionary psycholo-
gists have used tools as vivid metaphors for characteris-
ing the mind as comprising a range of specific modules.
For example, Cosmides and Tooby (1994) suggested that
the mind be viewed like a Swiss army knife, with indi-
vidual blades specialised for particular “survival-related”
tasks. In a similar vein, Gigerenzer, Todd and the ABC
Research Group (1999) proposed an “adaptive toolbox” containing
a variety of special tools for different tasks. Their idea
is that the mind has evolved mechanisms or heuristics
that are suited to particular tasks, such as choosing be-
tween alternatives, categorising items, estimating quanti-
ties, selecting a mate, judging habitat quality, and even deter-
mining how much to invest in one’s children. Gigerenzer
and Todd argue that just as a car mechanic uses specific
wrenches, pliers and spanners in maintaining a car en-
gine rather than hitting everything with a hammer, so too
the mind relies on unique one-function devices to provide
∗Ben Newell acknowledges the support of the Australian Research
Council (Grant: DP 0558181) and the University of New South Wales
for awarding him the John Yu Fellowship to Europe. Both authors
would also like to thank the Max Planck Institute for Research on
Collective Goods for hosting Ben Newell’s visit and the symposium.
Corresponding author: Arndt Bröder, Dept. of Psychology, Univer-
sity of Bonn, Kaiser-Karl-Ring 9, D-53111 Bonn, Germany. Email:
broeder@uni-bonn.de.
serviceable solutions to individual problems.
To illustrate the basic idea we describe the operation
of two of the heuristics contained in the toolbox. Imagine
you are facing a choice between two alternatives — such
as two companies to invest in — and your task is to pick
the one that is better with regard to some criterion (e.g.,
future returns on investments). “Take-the-Best” (TTB)
is designed for just such a situation. TTB operates ac-
cording to two principles. The first — the recognition
principle — states that for any decision made under un-
certainty, if only one amongst a range of alternatives is
recognised, then the recognised alternative will be cho-
sen. When this first principle can be relied on, people
are said to be using the Recognition Heuristic (RH) —
i.e., choosing objects that they recognise (Goldstein &
Gigerenzer, 2002). The second principle is invoked when
more than one alternative is recognised, and the recogni-
tion principle cannot provide discriminatory information.
In such cases, people are assumed to have access to a
reference class of cues or features, which are searched
in descending order of feature validity (search rule) until
one that discriminates between alternatives is discovered.
Search then stops (stopping rule) and this single best dis-
criminating feature is used to make the choice (decision
rule). The algorithm is thus non-compensatory because,
rather than using all discriminatory pieces of information
(as a compensatory model like linear regression would),
it bases its choice on a single piece (Gigerenzer & Gold-
stein, 1996).
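To make the search, stopping, and decision rules concrete, here is a minimal sketch in Python; the cue names, validities, and cue values are invented for illustration and are not taken from any of the studies reviewed here. (The recognition principle is omitted: the sketch assumes both options are recognised.)

```python
def take_the_best(cues_a, cues_b, validities):
    """Choose between options A and B with Take-the-Best (TTB).

    cues_a, cues_b: dicts mapping cue name -> binary cue value (1 or 0).
    validities: dict mapping cue name -> validity, P(correct | cue discriminates).
    """
    # Search rule: inspect cues in descending order of validity.
    for cue in sorted(validities, key=validities.get, reverse=True):
        a, b = cues_a[cue], cues_b[cue]
        if a != b:
            # Stopping rule: stop at the first discriminating cue.
            # Decision rule: choose the option this single cue favours.
            return "A" if a > b else "B"
    return "guess"  # no cue discriminates

# Invented example: two companies described by four binary cues.
validities = {"turnover_growth": 0.80, "analyst_buy": 0.70,
              "dividend_paid": 0.65, "new_ceo": 0.55}
company_a = {"turnover_growth": 1, "analyst_buy": 0, "dividend_paid": 1, "new_ceo": 0}
company_b = {"turnover_growth": 1, "analyst_buy": 1, "dividend_paid": 0, "new_ceo": 0}
print(take_the_best(company_a, company_b, validities))  # "B": analyst_buy decides
```

Note that the third cue (dividend_paid) favours company A but is never inspected once analyst_buy has discriminated; this is precisely what makes the algorithm non-compensatory.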
These simple steps for searching, stopping and decid-
ing might seem rather trivial, but Gigerenzer and Gold-
stein (1996) showed convincingly that the TTB algorithm
is as accurate as — and sometimes even slightly more ac-
curate than — more computationally complex and time-
consuming algorithms. These initial results, from a task
in which the goal was to decide which of two cities had
a higher population, were replicated in a variety of real-
world environments ranging from predicting professorial
salaries, to the amount of sleep engaged in by different
mammals (Czerlinski, Gigerenzer, & Goldstein, 1999).
The toolbox, however, is only one of several metaphors used
to characterize intelligent decision making. On one hand,
the toolbox with its incorporation of the modularity as-
sumption challenges the idea of the mind as containing
a “master tool” that comes as a general problem solver.
On the other hand, the toolbox idea itself has also been
challenged by theoretical arguments. For example, some
authors claim that simple heuristics may not be so simple
in the first place because they need a vast amount of pre-
computation (e.g., for constructing a cue-search hierar-
chy; Juslin & Persson, 2002). Others conjecture that com-
pensatory strategies may not be as costly as the toolbox
and common wisdom in decision research presuppose
(e.g., Chater, Oaksford, Nakisa, & Redington, 2003).
Theoretical objections to the toolbox are summarized and
discussed in Newell and Shanks (2007). Another chal-
lenge is empirical: Do people use different tools adap-
tively, and more specifically, do they use simple heuristics
like RH and TTB? In this article, we will review empiri-
cal work from our labs that addresses this latter question
and asks which factors affect the strategies people select.
Although the goal is of course not new, we are convinced
that our results have some new implications for the tool-
box metaphor as well as for multi-attribute decision re-
search in general.
2 Organization of the review
Newell and Bröder (2008) mentioned several facts and
topics about human cognition that have to be addressed
by theories of decision making, namely (1) capacity
limitation, (2) automaticity vs. controlled processing,
(3) learning, (4) categorization, and (5) metacognition.
These areas of interest and the question of whether peo-
ple adaptively choose strategies constitute one dimension
of our review. Our empirical work predominantly cov-
ers the question of adaptivity and areas (1), (2), and (5)
which are closely interconnected. Whereas capacity lim-
itations mainly concern controlled, effortful, and perhaps
serial processes, any degree of automatization will un-
burden the limited capacity (Schneider & Shiffrin, 1977).
Metacognition — deciding how to decide — is concerned
with allocating capacity to decision tasks and almost cer-
tainly consumes cognitive capacity itself. However, this
latter aspect has hitherto been neglected in decision re-
search and the toolbox approach. The second dimension
around which we examine the empirical evidence is the
“target” of the respective studies: different studies fo-
cus on the search rule, the stopping rule, or the decision
rule people use. Although these aspects are closely in-
tertwined empirically (e.g., Bröder, 2003), most studies
focus on one or two aspects for methodological reasons.
We will first report studies concerning adaptivity and the
use of simple heuristics and then turn to results relevant
for the question of capacity limitations, automatization,
and metacognition.
3 Do people select simple and less
simple heuristics adaptively?
Payne, Bettman, and Johnson (1993) report many results
that suggest adaptive strategy changes contingent on task
demands. For example, time pressure or the dispersion
of attribute weights clearly influenced information search
behavior in a preferential choice task (Payne, Bettman,
& Johnson, 1988). Rieskamp and Hoffrage (1999) con-
firmed these results in a multi-attribute inference task.
Under time pressure, participants search for less infor-
mation and do so more attribute-wise (rather than option-
wise), which is similar to the search rule predicted by lex-
icographic heuristics like TTB. Being forced into simple
processing by time pressure may not be a strong argu-
ment in favour of adaptive strategy selection, however,
so other investigators varied the nominal costs of infor-
mation purchases in a hypothetical stock market game
(Bröder, 2000; Newell & Shanks, 2003). In this task,
participants make repeated stock purchase decisions be-
tween hypothetical companies that are described by four
binary cues (e.g., turnover growth in recent months —
yes vs. no). Typically, cue values are hidden and have
to be actively uncovered by clicking the fields with the
computer mouse. Participants are free to uncover as
much information as they want in any sequence. This
MouseLab-like procedure (see Payne et al., 1988) al-
lows for outcome-based strategy assessment based on the
choices as well as monitoring of the information acquisi-
tion process. Newell and Shanks (2003) found that rais-
ing the costs of information search led to fewer pur-
chases, but participants on average still bought
more cue information than “necessary” for performing a
simple lexicographic strategy. Hence, participants did not
generally adhere to the stopping rule dictated by TTB. In
another study by Newell, Weston, and Shanks (2003, Exp.
2), 38% of the participants even went on purchasing a
cue that was costly but objectively useless, hence using a
clearly maladaptive stopping rule. These studies and ad-
ditional asymmetries in favor of compensatory decision
making (see below) suggest that there is an initial pref-
erence for being “well-informed” before making a deci-
sion, at least as long as information is easy to obtain and
the task is not too complex with respect to the number of
options and/or attributes. There is converging evidence
from studies testing the assumed noncompensatory na-
ture of the RH which show that participants rarely ignore
information which is available in addition to the recogni-
tion cue (Bröder & Eichler, 2006; Newell & Fernandez,
2006; Newell & Shanks, 2004; Pohl, 2006; Richter &
Späth, 2006). The process model of the RH clearly states
that “if one object is recognized and the other is not, the
inference is determined; no other information about the
recognized object is searched for and, therefore, no other
information can reverse the choice determined by recog-
nition” (Goldstein & Gigerenzer, 2002, p. 82); however,
even under conditions that are ideal for the RH (high
recognition validity, natural recognition knowledge, in-
ferences from memory), the decisions of 50% of the par-
ticipants were affected by additional cue knowledge in a
study by Pachur, Bröder, & Marewski (in press). These
results suggest that lexicographic stopping rules may be
the exception rather than the rule in decision making.
Bröder (2000) focused on the decision rule people used
and also manipulated the nominal costs for information
purchase. An outcome-based classification procedure
suggested that the choices of about 65% of participants
were compatible with TTB under high search cost condi-
tions. A subsequent experiment confirmed that this high
percentage (which contrasted with low TTB percentages
in other studies) was in fact caused by the information
costs, but not by other factors such as outcome feedback,
or successive information retrieval. In addition, search
behavior corresponded well to the decision rule partici-
pants used. Hence, both our labs showed that stopping
and/or decision rules were sensitive to search costs to a
certain degree, probably reflecting adaptivity. However,
several criticisms can be raised: First, there was no for-
mal assessment of expected payoffs in these studies and
hence, strategy changes might not have been “adaptive”
but rather caused by stinginess. That is, high nominal
costs of information may simply have deterred partici-
pants from purchasing information despite its potential
value for good decisions. This would demonstrate sen-
sitivity to costs, but not necessarily adaptive behavior.
Second, in Bröder’s (2000) study, information about cues
could only be purchased in the order of their validities,
probably boosting the use of TTB-like strategies.
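The logic of outcome-based classification can be illustrated as a maximum-likelihood comparison: each candidate strategy predicts a choice on every trial, deviations from the prediction are assumed to occur with a constant error rate, and a participant is assigned to the strategy under which his or her observed choices are most likely. The following sketch is a simplified stand-in for this logic, not the exact procedure of Bröder (2000) or the Bayesian method of Bröder and Schiffer (2003a); all trial data are fabricated.

```python
from math import log

def classify_strategy(choices, predictions, epsilon=0.2):
    """Assign a participant to the strategy with the highest choice likelihood.

    choices: observed choices ("A"/"B"), one per trial.
    predictions: dict strategy name -> list of predicted choices.
    epsilon: assumed constant probability of an unsystematic application error.
    """
    log_lik = {}
    for strategy, preds in predictions.items():
        errors = sum(c != p for c, p in zip(choices, preds))
        n = len(choices)
        # Log-likelihood of the choices if this strategy is used with error rate epsilon.
        log_lik[strategy] = errors * log(epsilon) + (n - errors) * log(1 - epsilon)
    return max(log_lik, key=log_lik.get)

# Fabricated trials on which TTB and a weighted additive rule (WADD) disagree.
predictions = {"TTB":  ["A", "A", "B", "A", "B", "B"],
               "WADD": ["B", "A", "A", "A", "A", "B"]}
observed = ["A", "A", "B", "A", "A", "B"]
print(classify_strategy(observed, predictions))  # "TTB": 5 of 6 choices match
```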
Limiting participants to searching information in one
particular order overlooks a crucial yet under-researched
issue: How people learn cue validities and construct cue-
search hierarchies. As noted earlier, Juslin and Persson
(2002) argued that a good deal of the “simplicity” inher-
ent in simple heuristics comes from the massive amounts
of precomputation required to construct cue hierarchies.
Newell, Rakow, Weston and Shanks (2004) sought to
gain some insight into how people learned cue validi-
ties and search rules by using experimental designs in
which participants could purchase cues in any, rather than
a fixed, order. Following Martignon and Hoffrage (1999),
we noted that the overall usefulness of a cue must take
account of both its validity and its redundancy — or abil-
ity to discriminate between two options in a two-alternative
forced-choice task. More useful cues are those that can
frequently be used to make an inference (i.e., have a high
discrimination rate); and, when used, usually point in the
correct direction (i.e., have a high validity).
In support of this assertion, Newell et al. (2004) found
that, in a simulated stock market environment involving
a series of predictions about pairs of companies, partic-
ipants’ pre-decisional search strategies conformed to a
pattern that revealed sensitivity to both the validity and
discrimination rate of cues. Given sufficient practice in
the environment, participants searched through cues ac-
cording to how “successful” they were for predicting the
correct outcome (see Martignon & Hoffrage, 1999, for a
detailed discussion and definition of “success” — it is a
function of the validity and discrimination rate of cues).
Thus, rather than using a “validity” search rule — as pre-
scribed by TTB and enforced in some experimental tests
— participants tended to use a “success” search rule. (See
also Rakow, Newell, Fayers, & Hersby, 2005). This ini-
tial work on cue search needs to be supplemented by more
extensive explorations of potential mechanisms for learn-
ing and implementing cue hierarchies.
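Following the definitions of Martignon and Hoffrage (1999), validity, discrimination rate, and success can be computed from a set of paired comparisons. In the sketch below, success is taken to be the expected accuracy of using the cue alone and guessing whenever it does not discriminate, which is our reading of their definition; the comparison data are made up.

```python
def cue_statistics(pairs):
    """Validity, discrimination rate, and success of one cue.

    pairs: list of (cue value of A, cue value of B, correct option) tuples,
    one per paired comparison; the correct option is "A" or "B".
    """
    discriminating = [(a, b, c) for a, b, c in pairs if a != b]
    d_rate = len(discriminating) / len(pairs)        # how often the cue discriminates
    if discriminating:
        hits = sum((c == "A") == (a > b) for a, b, c in discriminating)
        validity = hits / len(discriminating)        # P(correct | cue discriminates)
    else:
        validity = 0.5
    # Success: accuracy of using this cue alone, guessing on non-discriminating pairs.
    success = d_rate * validity + (1 - d_rate) * 0.5
    return validity, d_rate, success

# A perfectly valid cue that rarely discriminates ...
rare_but_valid = [(1, 0, "A"), (1, 1, "B"), (0, 0, "A"), (1, 1, "A"), (0, 0, "B")]
# ... versus a less valid cue that discriminates on every pair.
frequent = [(1, 0, "A"), (0, 1, "B"), (1, 0, "A"), (0, 1, "B"), (1, 0, "B")]
print(cue_statistics(rare_but_valid))  # (1.0, 0.2, 0.6)
print(cue_statistics(frequent))        # (0.8, 1.0, 0.8)
```

With these made-up numbers, a validity hierarchy would search the rare cue first (validity 1.0 versus 0.8), whereas a success hierarchy would search the frequent cue first (success 0.8 versus 0.6), mirroring the success-based ordering that participants' search tended to follow.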
We noted earlier that in experiments with explicit
costs participants might be deterred from further search
through and acquisition of information simply because of
high nominal costs. To overcome this possibility, Bröder
and colleagues kept the nominal search costs identical
in different conditions of their experiments but varied
the payoff functions to yield different expected payoffs
in different experimental conditions: some environments
were compensatory, meaning that the costs spent on addi-
tional cues were compensated by better accuracy and in-
creased payoff; and some were noncompensatory, so that
the costs for additional cues would in the long run exceed
their utility for making better decisions. The empirical
question of adaptivity was now whether people would be
able to figure out the appropriate strategies in the respec-
tive environments. The filled circles in Figure 1 sum-
marize the proportions of participants classified as using
TTB’s decision rule across a range of 11 experimental
conditions from several studies (Bröder, 2003; Bröder
[Figure 1: scatter plot. X-axis: ratio of expected payoffs TTB vs. FR (0.4–1.6); y-axis: percentage of TTB users (0–80).]
Figure 1: Adaptive strategy selection demonstrated by the
percentage of participants classified as TTB users in the
stock market game as a function of the expected payoff
of TTB relative to a compensatory strategy. Filled circles
are experimental conditions in which the task was new to
participants, and they show a clear adaptive trend. Open
squares depict the maladaptive routines after the environ-
mental payoff structure had changed (Bröder & Schiffer,
2006a), and the triangle shows the high cognitive load
condition of Bröder and Schiffer (2003a).
& Eichler, 2001; Bröder & Schiffer, 2003a; 2006a) as a
function of the expected payoff of TTB relative to that of
a compensatory strategy known as “Franklin’s rule” (FR)
which is a weighted additive rule. It is easy to see that
there is an adaptive trend (r = .83) which shows that the
majority of people tend to use appropriate strategies in
compensatory (left of “1”) and noncompensatory (right
of “1”) environments. However, adaptivity is not perfect:
in all conditions there is a substantial percentage of peo-
ple not using the appropriate strategy.
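The x-axis of Figure 1, the ratio of expected payoffs, can be estimated by simulation: sample pairs of cue patterns, score each strategy's choices against the environment's payoff scheme, and take the ratio. The sketch below does this under invented weight structures, rewards, and information costs; it illustrates the logic of compensatory versus noncompensatory environments and does not reproduce the payoff schemes of the original experiments.

```python
import random

random.seed(1)
# Invented weight structures, listed in descending order of importance.
COMPENSATORY    = [10, 9, 8, 7]  # lower cues can jointly overturn the top cue
NONCOMPENSATORY = [8, 4, 2, 1]   # the top cue outweighs all cues below it

def payoff_ratio(weights, n_pairs=50000, reward=1.0, cost_per_cue=0.02):
    """Estimate the expected payoff of TTB relative to Franklin's rule (FR)."""
    total = {"TTB": 0.0, "FR": 0.0}
    for _ in range(n_pairs):
        a = [random.randint(0, 1) for _ in weights]
        b = [random.randint(0, 1) for _ in weights]
        diff = sum(w * (x - y) for w, x, y in zip(weights, a, b))
        if diff == 0:
            continue  # neither option is objectively better; skip the pair
        correct = "A" if diff > 0 else "B"
        # FR buys all cues and, knowing the true weights, always chooses correctly here.
        total["FR"] += reward - len(weights) * cost_per_cue
        # TTB buys cues in order of importance until one discriminates.
        for bought, (x, y) in enumerate(zip(a, b), start=1):
            if x != y:
                choice = "A" if x > y else "B"
                break
        total["TTB"] += (reward if choice == correct else 0.0) - bought * cost_per_cue
    return total["TTB"] / total["FR"]

print(payoff_ratio(COMPENSATORY))     # below 1: FR's accuracy advantage pays off
print(payoff_ratio(NONCOMPENSATORY))  # above 1: TTB's frugality pays off
```

With these invented numbers the two ratios fall only slightly below and above 1, echoing the point that the payoff differences participants had to detect were often subtle.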
Hence, the results of both labs converge on similar con-
clusions: There is a certain degree of adaptivity in strat-
egy choice concerning search, stopping, and decision
rules. Participants are not merely deterred by costs,
but they seem able to figure out payoff structures (even if
differences are subtle — see Figure 1) and select the strat-
egy accordingly. However, there are large individual dif-
ferences in strategy selection. The attempt to find person-
ality dimensions as correlates of strategy preferences has
not been successful so far, even though we tried 15 plau-
sible dimensions (see Bröder, in press, for an overview).
However, it is still an open question whether the differ-
ent strategy preferences diagnosed in a one-shot assess-
ment of an experiment will turn out to be stable across
tasks and situations. If not, then states rather than traits
should be investigated as variables causing the individ-
ual differences, for example mind-sets or spillover effects
from routines established in similar tasks.
Recently, Bröder and Schiffer (2006a) reported results
which qualify the optimistic notion of adaptivity docu-
mented by the filled circles in Figure 1. Three of the open
squares in the figure do not fit into the picture. These
experimental conditions have in common that the pay-
off structure of the environment had changed after par-
ticipants had become accustomed to a different structure be-
fore. That is, the low percentage of TTB users in the
noncompensatory environment reflects the fact that this
group had been exposed to a compensatory payoff struc-
ture before. Obviously, most participants adhered to a de-
cision strategy established as a routine before. These mal-
adaptive routine effects were only marginally reduced by
a hint about the change or even by a switch to a similar but
different task. This observation contrasts with most par-
ticipants’ obvious ability to adapt flexibly to a new task.
We conclude that different mechanisms for strategy se-
lection may be at work when people are confronted with
a new task than when they routinely use a strategy. Iner-
tia effects like these are predicted by Rieskamp’s (2006)
reinforcement learning model.
One additional observation was made repeatedly in the
stock market paradigm: There was an initial preference
for compensatory decision making and deep information
search (Bröder, 2000; 2003; Newell & Shanks, 2003;
Newell, Weston & Shanks, 2003; Rieskamp & Otto,
2006). Compensatory strategies were even somewhat
more subject to maladaptive routines than TTB (Bröder
& Schiffer, 2006a). We conjecture that participants feel
on the “safe” side if they use all information, and they
have to learn actively whether information can safely be
ignored. Many learn to adapt their stopping and/or deci-
sion rule; others keep on buying information even when
it is of no use (Newell, Weston & Shanks, 2003).
To summarize the adaptivity results: The toolbox idea
is corroborated in principle because many participants
adapt to payoff schemes. This supplements Payne et
al.’s (1993) work which showed that strategy selection is
contingent on task demands in the domain of preferen-
tial choices. In addition to the formal similarity between
multi-attribute preferential choice and multiple-cue prob-
abilistic inferences, these empirical similarities support
the idea of similar cognitive processes (or at least simi-
lar principles) in both domains. Note, however, that the
observation that people appear to choose among heuris-
tics of varying complexity could also be reinterpreted
as a threshold adjustment in an evidence accumulation
metaphor (e.g., Lee & Cummins, 2004; Newell, 2005).
Evidence accumulation models assume individual deci-
sion thresholds of evidence. Information search contin-
ues until a threshold in favour of one option has been
Judgment and Decision Making, Vol. 3, No. 3, March 2008 Common beliefs versus empirical results 209
crossed and a decision is made. Thresholds can be set
on a continuum from strict to lenient. Lenient criteria im-
ply fast and frugal information searches, whereas strict
criteria demand more information before making a deci-
sion. Hence, “strategies” like TTB or WADD can also
be viewed as endpoints of a continuum that defines one
general process of decision making. Rather than select-
ing strategies, the decision maker might adjust thresh-
olds. At the moment, data do not allow for a clear de-
cision between the model classes because apparent strat-
egy switches can be reinterpreted as criterion shifts or
vice versa (see Hausmann & Läge, 2008). Large indi-
vidual differences in adaptivity remain, and the general
preference for compensatory decision making observed in studies
on TTB and the RH casts doubt on the assumption that
simple heuristics are the default mode of probabilistic in-
ferences — at least in tasks with cue information that is
easily accessible. Furthermore, recent work suggests that
a “unified model” which treats TTB and more compen-
satory strategies as special cases of the same sequential
sampling process provides an interpretable account of in-
dividual differences in participants’ judgments. Although
such a threshold model is more complex than “parameter-
free” models like TTB, it is preferred to simpler models
on the grounds of model-fit criteria that penalize complexity
(e.g., minimum description length; Newell, Collins, & Lee, 2007).
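A minimal sketch of the threshold idea, in the spirit of Lee and Cummins (2004): evidence accumulates cue by cue, here weighted by the log odds of each cue's validity (one common formalisation, not necessarily the authors' exact one), and search stops once the running total crosses an adjustable threshold. All cue names, validities, and values are invented.

```python
from math import log

def accumulate(cues_a, cues_b, validities, threshold):
    """Sequential evidence accumulation with an adjustable stopping threshold."""
    evidence = 0.0
    inspected = 0
    # Cues are inspected in descending order of validity.
    for cue in sorted(validities, key=validities.get, reverse=True):
        inspected += 1
        a, b = cues_a[cue], cues_b[cue]
        if a != b:
            v = validities[cue]
            # A discriminating cue adds log odds of evidence toward the option it favours.
            evidence += log(v / (1 - v)) * (1 if a > b else -1)
        if abs(evidence) >= threshold:
            break  # stopping rule: enough evidence for one option
    choice = "A" if evidence > 0 else "B" if evidence < 0 else "guess"
    return choice, inspected

validities = {"c1": 0.80, "c2": 0.70, "c3": 0.65, "c4": 0.55}
opt_a = {"c1": 1, "c2": 0, "c3": 0, "c4": 0}
opt_b = {"c1": 0, "c2": 1, "c3": 1, "c4": 1}
# A lenient threshold stops after the first discriminating cue, mimicking TTB ...
print(accumulate(opt_a, opt_b, validities, threshold=0.5))  # ('A', 1)
# ... a strict threshold exhausts all cues and lets them compensate, mimicking WADD.
print(accumulate(opt_a, opt_b, validities, threshold=5.0))  # ('B', 4)
```

On this reading, what looks like a switch from a compensatory strategy to TTB is simply a lowering of the threshold.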
4 Capacity limitations, automaticity, and metacognition
In accordance with the multiple-strategy assumption in
decision research, Beach and Mitchell (1978) formulated
an early attempt to define criteria that might govern strat-
egy selection. In their contingency model “strategy se-
lection is viewed as a compromise between the press for
more decision accuracy as the demands of the decision
task increase and the decision maker’s resistance to the
expenditure of his or her personal resources” (Beach &
Mitchell, 1978, p. 447). They classified compensatory
strategies as “analytic” and noncompensatory ones as less
analytic and assumed that the “use of a less analytic strat-
egy requires, on the average, less expenditure of personal
resources than does use of a more analytic strategy” (p.
448). This intuitively plausible assumption has guided
a significant part of research, for example Payne et al.’s
(1993) systematic analysis of adaptive decision making.
Christensen-Szalanski (1978; 1980) as well as Chu and
Spires (2003) supported the assumption by showing that
it fits people’s intuitions. Payne et al. (1993) extended
and specified the model further by deriving a measure for
the cognitive costs caused by strategies: They counted
the elementary information processing steps necessary to
perform a decision rule and proposed that “the cogni-
tive effort needed to reach a decision using a particular
strategy is a function of the number and type of opera-
tors (productions) used by that strategy, with relative ef-
fort levels of various strategies contingent on task envi-
ronments” (Payne et al., 1993, p. 14). Both Beach and
Mitchell (1978) and Payne et al. (1993) admitted that the
exact nature of the deliberation process is unknown and
subject to further research, and the latter authors specu-
lated about different degrees of sophistication of this pro-
cess. This reasoning about the apparent costs of compen-
satory strategies is explicitly incorporated in the adaptive
toolbox metaphor and its rhetoric in which compensatory
strategies are associated with theories that “assume the
human mind has essentially unlimited demonic or super-
natural reasoning power” (Gigerenzer & Todd, 1999, p.
7). This image is contrasted against the fast and frugal
heuristics.
The emphasis on the execution costs of various deci-
sion strategies promoted by the contingency model and
the adaptive toolbox leads to a simple and straightforward
prediction: These relative costs should decrease with in-
creased cognitive capacity. Or in other words, greater
cognitive capacity should reduce the pressure to use sim-
plifying strategies like TTB (e.g., Beach & Mitchell,
1978; pp. 445–446). To our great surprise, in a first
study on that topic, our results were opposite to this pre-
diction and suggested a re-evaluation of the contingency
model. In the study by Bröder and Eichler (2001), par-
ticipants invested in the stock market game and subse-
quently completed an intelligence test. After classifying
participants’ decision strategies, results showed that TTB
users were slightly more intelligent than compensatory
decision makers! This was opposite to the expectation
from the contingency logic, which predicts that simpler strate-
gies will be associated with less capacity. Only after a
post-hoc analysis of the game’s payoff structure did we
realize that there had been a relatively subtle (10%) advan-
tage in the expected payoff of TTB as compared to the
compensatory strategy WADD in this task. In two sub-
sequent experiments, we replicated the small, but consis-
tent superiority of TTB users with respect to intelligence
in environments with noncompensatory payoff structures
(Bröder, 2003). This suggests that cognitive capacity —
as indexed by intelligence — is not consumed by strat-
egy execution, but rather by strategy selection. Since in-
telligence can be related to many other causal variables,
we also manipulated cognitive capacity experimentally in
a subsequent experiment by imposing a very attention-
demanding secondary task on half of the participants dur-
ing their decisions (they had to count the occurrences of
the number “nine” in a stream of digits and were probed
at random intervals; Bröder & Schiffer, 2003a). In the
environment used, there was a very subtle payoff advan-
tage for TTB, and results showed 60% TTB users in the
condition without cognitive load, whereas only 26% used
TTB in the condition with heavy cognitive load. The
others were classified as using compensatory strategies.
This is again contrary to the expectation of the contin-
gency model and again supports another conclusion: At
least in our paradigm, the costs of strategy execution
do not seem to differ much between TTB and compen-
satory strategies, and participants were able to use com-
pensatory strategies even under conditions of heavy cog-
nitive load. Rather, the cognitive load impaired partici-
pants’ ability to figure out the payoff structure of the en-
vironment and to choose the appropriate heuristic.
This interpretation is also compatible with results re-
ported by Bröder and Schiffer (2006a) demonstrating
massive routine effects in the use of decision strategies.
Routine effects have been known as “Einstellung” effects
for a long time in the psychology of thinking (Luchins &
Luchins, 1959). Although Betsch and co-workers have
demonstrated routines in repeated decisions before (see
Betsch and Haberstroh, 2005, for a review), these demon-
strations concerned the choice of routine options rather
than strategies. Bröder and Schiffer (2006a) based their
research on these observations, but they demonstrated
that routines are also retained at the level of strategies,
even in a changing environment where they become mal-
adaptive (but see Rieskamp, 2008, for an alternative in-
terpretation). The combination of quick adaptation to
new environments but slow adaptation to changing en-
vironments suggests that strategy execution can become
routinized. Strategy selection, on the other hand, may re-
quire a costly re-examination of the environment in order
to adjust the strategy accordingly. This selection process
cannot become routinized; it always requires delib-
erate processing. The apparently routinized strategy ex-
ecution was reflected in the time needed for each deci-
sion, which was much shorter for later trials in the task
than for the first 10 to 20 trials. In the first phase of the
experiments, most participants adaptively chose the ap-
propriate strategy. When the environment changed after
80 trials, the reaction times did not increase again (even
after a hint about the change), and the result was a mal-
adaptive trend to stick to one’s established strategy. This
stickiness was even more pronounced for compensatory
strategies. We hypothesize that the meta-decision about how to
decide was only executed at the beginning of the exper-
iment (consuming time and capacity), whereas the exe-
cution was routinized after a few trials, probably without
consuming cognitive capacity further. This routinization,
however, happens at the expense of flexibility (Schneider
& Shiffrin, 1977).
Hence, the contingency model’s and the toolbox’s
rhetorical emphasis on the processing costs of strategies and
heuristics may be mistaken, since the actual capacity-
consuming process is apparently the meta-decision rule
that selects strategies. But note that these conclusions
may be valid only for situations as complex as our exper-
iments, in which we used at most three options with
up to six cues. Furthermore, the cues used in
our experiments were almost exclusively binary, and it is
conceivable that multi-valued cues are harder to process.
Perhaps, processing capacity limits were not reached, and
strategy execution costs may become a severe factor only
with very complex decision situations.
However, one further result, reported by Bröder and
Schiffer (2006b), qualifies this interpretation. In this
study, participants again
had to work on various capacity-demanding secondary
tasks during decision making. In contrast to the study
mentioned before, this boosted the use of the TTB heuris-
tic at the expense of compensatory strategies. The im-
portant difference here was that all cue information had
to be retrieved from memory rather than from the com-
puter screen. In a virtual criminal case, participants had
learned various details about suspects, which they later
used to judge how likely each suspect was to be the per-
petrator. For example, they learned about aspects of the
suspects’ clothing and later received information about
witness reports that established a clear cue validity hierar-
chy. In a series of paired comparisons, they had to decide
which suspect was more likely to be the perpetrator.
Earlier studies in this memory-search paradigm
had already shown that TTB is much more prevalent here
than in screen-based information presentation. This sug-
gests that memory retrieval is costly and promotes early
stopping rules in the same way as high explicit costs
promote early stopping of information search in screen-
based tasks (Bröder, 2000; 2003; Newell & Shanks,
2003). Furthermore, the costs of retrieval are apparently
less severe when information is stored in a pictorial rather
than a verbal format (Bröder & Schiffer, 2003b; 2006b)
which is compatible with knowledge from cognitive psy-
chology (Paivio, 1991). However, recent work compar-
ing judgments made on the basis of pictorial and verbal
information in screen-based tasks found no evidence for
a difference in TTB use as a function of format. It ap-
pears then that the format effect is dependent on inducing
memory retrieval costs (Newell, Collins, & Lee, 2007).
Bröder and Gaissmaier (2007) reanalyzed response
times from published studies and found evidence that
people who were classified as TTB users based on de-
cision outcomes apparently also used TTB’s stopping
rule: The response times increased monotonically with
the number of cues that had to be retrieved for perform-
ing a lexicographic strategy. Other explanations (simi-
larity of options, difficulty of decisions) accounted for
less variance in the decision times than the assumption
of this simple stopping rule. In one experiment, there
were apparently several participants with an even simpler
strategy called “Take The First” who retrieved cues in the
order of retrieval ease (as defined by the learning proce-
dure) and showed response times consistent with a stop-
ping rule terminating search after one discriminating cue
had been found.
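The logic of this response-time test can be sketched as follows: for each trial, compute how many cues a lexicographic strategy would have to retrieve before reaching the first discriminating one, and check whether this count predicts decision time. The response times below are fabricated for illustration; the published analyses compared competing regression models, which this sketch does not attempt to reproduce.

```python
def cues_needed(cues_a, cues_b, search_order):
    """Number of cues a lexicographic strategy retrieves before it can decide."""
    for n, cue in enumerate(search_order, start=1):
        if cues_a[cue] != cues_b[cue]:
            return n
    return len(search_order)  # no cue discriminates: all were retrieved

def pearson_r(xs, ys):
    """Plain Pearson correlation, written out for self-containedness."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

order = ["c1", "c2", "c3"]  # assumed retrieval hierarchy
trials = [  # (option A, option B, fabricated response time in ms)
    ({"c1": 1, "c2": 1, "c3": 0}, {"c1": 0, "c2": 1, "c3": 0}, 1400),
    ({"c1": 1, "c2": 0, "c3": 1}, {"c1": 1, "c2": 1, "c3": 1}, 2100),
    ({"c1": 0, "c2": 1, "c3": 1}, {"c1": 0, "c2": 1, "c3": 0}, 2600),
    ({"c1": 1, "c2": 1, "c3": 1}, {"c1": 0, "c2": 1, "c3": 1}, 1500),
]
depths = [cues_needed(a, b, order) for a, b, _ in trials]
times = [t for _, _, t in trials]
print(depths, round(pearson_r(depths, times), 2))  # [1, 2, 3, 1], r close to 1.0
```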
To conclude: If information is available on the screen
without burdening working memory too much, cogni-
tive processing costs for strategies are not a serious fac-
tor, the format of stimulus materials has little effect, and
cost differences between strategies like TTB and WADD
are negligible. Only if search costs are explicit are stricter
stopping (and decision) rules employed. Information in-
tegration also does not seem to be costly, a conclusion
demonstrated by the performance of participants in the
high cognitive load condition (counting nines) of Bröder
and Schiffer’s (2003a) experiment, described earlier, in
which 60% of participants probably used a compensatory
rule. Memory retrieval, on the other hand, appears to
cause cognitive costs and promotes early stopping rules,
but costs can be reduced by the use of integrated, pictorial
stimuli. Whereas the distinction between inferences from
givens and inferences from memory is clear for controlled
laboratory experiments, it may be less so in the applied
everyday context. Here, we will often confront situations
that involve both kinds of information retrieval. For in-
stance, consumer choices may depend on an attribute ma-
trix provided in a “Consumer report” magazine as well
as on facts we remember about the options. Actual
decisions thus probably involve a mixture of information
sources and, consequently, a mixture of different cognitive costs.
5 Conclusions
In this review, we focused on our own empirical work
that was stimulated by and took place within the adaptive
toolbox metaphor. Since we did not report on numer-
ous other studies conducted within this framework, it is
fair to conclude that the toolbox has been extremely fruit-
ful in reanimating the interest in adaptive multi-attribute
decision making, supplementing Payne et al.’s (1993)
work on preferences with work on inferences. Because
metaphors are not “correct” or “wrong” per se (they are
all wrong, as Ebbinghaus [1885] already noted), they
have to be evaluated by their fruitfulness. In this respect,
the toolbox fares quite well. Whether the box crammed
with disparate tools is a more adequate metaphor than
an “adjustable spanner” (Newell, 2005) remains to be
seen. However, the success of evidence accumulation
models in many other areas of cognition leads us to be
optimistic that they can perhaps also be fruitfully ap-
plied to the more “controlled” processes in the decision
making domain (Busemeyer & Townsend, 1993; Wall-
sten & Barton, 1982). Techniques for specifying and em-
pirically testing evidence accumulation thresholds are ar-
guably more advanced and established than are models of
“strategy selection” (e.g., Vickers, 1979). Thus, although
the two model classes may currently be difficult to dis-
tinguish at the data level, future investigations may deter-
mine the superior metaphor.
One main result emerging from the synopsis of the
work reported here is a fundamental re-evaluation of the
contingency model and its successors (Beach & Mitchell,
1978; Christensen-Szalanski, 1978; 1980; Chu & Spires,
2003; Payne et al., 1993). The widespread credo is
that compensatory strategies are cognitively more costly
than noncompensatory ones. On the other hand, they
are believed to be more accurate. Consequently, there
is a conflict demanding a compromise between the two.
However, the second conviction (higher accuracy) has
been called into question by the toolbox proponents who
showed via simulations that noncompensatory rules can
be as accurate as compensatory ones (Gigerenzer et al.,
1999). This clearly came as a surprise and has been repli-
cated and investigated more thoroughly since then (Hog-
arth & Karelaia, 2005, 2007). Interestingly, the toolbox
rhetoric on the other hand relies heavily on the assump-
tion that compensatory strategies are cognitively costly.
As all of our results suggest, this does not seem to be
the case. Compensatory strategies were performed under
high cognitive load and they were subject to “thought-
less” routines. Hence, multiple pieces of information can
be combined compensatorily without the “unlimited re-
sources” postulated for “rational demons” (see Gigeren-
zer & Todd, 1999). Whether this is done sequentially
in a simple random walk process (e.g., Lee & Cum-
mins, 2004) or by simultaneous constraint satisfaction
in a network model (e.g., Glöckner & Betsch, 2008) or
some other way remains open to question. What can be
costly is information search, where costs are either deter-
mined by extrinsic (time pressure) or intrinsic (memory
retrieval) factors. We do not want to suggest that the exe-
cution of compensatory strategies is never costly: Several
studies have shown that the order of presentation, the pre-
sentation format (numerical vs. verbal), or the similarity
of alternatives has a strong influence on the way people
assess information (e.g., Schkade & Kleinmuntz, 1994;
Stone & Schkade, 1991), presumably reflecting different
levels of processing ease. Furthermore, there will cer-
tainly be costs in very complex situations with many al-
ternatives and attributes. However, our results suggest
that the cognitive costs for strategy execution may have
been overestimated in relation to the costs for strategy se-
lection in moderately complex situations.
A closely related result is that enhanced capacity in-
creased the proportion of people using simple heuristics
— in environments in which they were appropriate! This
was true for intelligence (Bröder, 2003) as well as for free
working memory capacity (Bröder & Schiffer, 2003a).
Since these factors had no direct effects on strategy ex-
ecution but rather on the adaptivity of the strategy use,
we conclude that the decision how to decide (or which
strategy to select) is the most demanding task in a new
decision situation. Although there has been some specu-
lation about this deliberation process (Payne et al., 1993),
it has been neglected as a target of research. Probably,
the empirical investigation of the rules used is challeng-
ing enough, and researchers have avoided adding another
level of complexity. We argue that the selection process
is the crux of the matter since it consumes cognitive re-
sources. Without modeling this demanding process, any
theory of tool selection or threshold adjustment remains
incomplete.
References
Beach, L. R., & Mitchell, T. R. (1978). A contingency
model for the selection of decision strategies. Academy
of Management Review, 3, 439–449.
Betsch, T., & Haberstroh, S. (2005). Preface. In T. Betsch
& S. Haberstroh (Eds.), The routines of decision making
(pp. ix–xxv). Mahwah, NJ: Lawrence Erlbaum Asso-
ciates.
Bröder, A. (2000). Assessing the empirical validity of
the “Take The Best”-heuristic as a model of human
probabilistic inference. Journal of Experimental Psy-
chology: Learning, Memory, and Cognition, 26, 1332–
1346.
Bröder, A. (2003). Decision making with the “adaptive
toolbox”: Influence of environmental structure, intelli-
gence, and working memory load. Journal of Experi-
mental Psychology: Learning, Memory, and Cognition,
29, 611–625.
Bröder, A. (in press). The quest for take the best: Insights
and outlooks from experimental research. To appear in
P. Todd, G. Gigerenzer, & the ABC Research Group,
Ecological rationality: Intelligence in the world, New
York: Oxford University Press.
Bröder, A. & Eichler, A. (2001). Individuelle Unter-
schiede in bevorzugten Entscheidungsstrategien [In-
dividual differences in preferred decision strategies].
In A. C. Zimmer, K. Lange et al. (Eds.), Experi-
mentelle Psychologie im Spannungsfeld von Grundla-
genforschung und Anwendung (pp. 68–75) [CD-ROM].
Bröder, A. & Eichler, A. (2006). The use of recognition
and additional cues in inferences from memory. Acta
Psychologica, 121, 275–284.
Bröder, A. & Gaissmaier, W. (2007). Sequential process-
ing of cues in memory-based multi-attribute decisions.
Psychonomic Bulletin and Review, 14, 895–900.
Bröder, A. & Schiffer, S. (2003a). Bayesian strategy as-
sessment in multi-attribute decision research. Journal
of Behavioral Decision Making, 16, 193–213.
Bröder, A. & Schiffer, S. (2003b). “Take The Best” ver-
sus simultaneous feature matching: Probabilistic infer-
ences from memory and effects of representation for-
mat. Journal of Experimental Psychology: General,
132, 277–293.
Bröder, A. & Schiffer, S. (2006a). Adaptive flexibility
and maladaptive routines in selecting fast and frugal
decision strategies. Journal of Experimental Psychol-
ogy: Learning, Memory, & Cognition, 32, 904–918.
Bröder, A. & Schiffer, S. (2006b). Stimulus format and
working memory in fast and frugal strategy selection.
Journal of Behavioral Decision Making, 19, 361–380.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision
field theory: A dynamic-cognitive approach to deci-
sion making in an uncertain environment. Psychologi-
cal Review, 100, 432–459.
Chater, N., Oaksford, M., Nakisa, R., & Redington, M.
(2003). Fast, frugal, and rational: How rational norms
explain behavior. Organizational Behavior and Hu-
man Decision Processes, 90, 63–86.
Christensen-Szalanski, J. J. J. (1978). Problem solving
strategies: a selection mechanism, some implications
and some data. Organizational Behavior and Human
Performance, 22, 307–323.
Christensen-Szalanski, J. J. J. (1980). A further exami-
nation of the selection of problem solving strategies:
the effects of deadlines and analytic aptitudes. Organi-
zational Behavior and Human Performance, 25, 107–
122.
Chu, P. C., & Spires, E. E. (2003). Perceptions of accu-
racy and effort of decision strategies. Organizational
Behavior and Human Decision Processes, 91, 203–
214.
Cosmides, L., & Tooby, J. (1994). Beyond intuition and
instinct blindness: Toward an evolutionarily rigorous
cognitive science. Cognition, 50, 41–77.
Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999).
How good are simple heuristics? In G. Gigerenzer &
P. M. Todd & The ABC Research Group (Eds), Simple
heuristics that make us smart (pp. 97–118). Oxford:
Oxford University Press.
Ebbinghaus, H. (1885). Über das Gedächtnis: Unter-
suchungen zur experimentellen Psychologie [On mem-
ory: Investigations in experimental psychology].
Leipzig: Duncker & Humblot. [Reprint 1966, Amster-
dam: E. J. Bonset.]
Gigerenzer, G. & Goldstein, D. G. (1996). Reasoning the
fast and frugal way: Models of bounded rationality.
Psychological Review, 103, 650–669.
Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal
heuristics: the adaptive toolbox. In G. Gigerenzer, P.
M. Todd & the ABC Research Group, Simple heuris-
tics that make us smart (pp. 3–34). New York: Oxford
University Press.
Gigerenzer, G., Todd, P. M., & the ABC Research Group
(1999). Simple heuristics that make us smart. Oxford:
Oxford University Press.
Glöckner, A., & Betsch, T. (2008). Modelling option and
strategy choices with connectionist networks: Towards
an integrative model of automatic and controlled deci-
sion making. Judgment and Decision Making, 3, 215–
228.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of
ecological rationality: The recognition heuristic. Psy-
chological Review, 109, 75–90.
Hausmann, D. & Läge, D. (2008). Sequential evidence
accumulation in decision making: The individual de-
sired level of confidence can explain the extent of in-
formation acquisition. Judgment and Decision Mak-
ing, 3, 229–243.
Hogarth, R. M., & Karelaia, N. (2005). Ignoring informa-
tion in binary choice with continuous variables: When
is less “more”? Journal of Mathematical Psychology,
49, 115–124.
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and
linear models of judgment: Matching rules and envi-
ronments. Psychological Review, 114, 733–758.
Juslin, P., & Persson, M. (2002). PROBabilities from
EXemplars (PROBEX): A “lazy” algorithm for prob-
abilistic inference from generic knowledge. Cognitive
Science, 26, 563–607.
Lee, M. D., & Cummins, T. D. R. (2004). Evidence ac-
cumulation in decision making: Unifying the “take the
best” and the “rational” models. Psychonomic Bulletin
and Review, 11, 343–352.
Luchins, A. S. & Luchins, E. H. (1959). Rigidity of be-
havior: A variational approach to the effect of Einstel-
lung. Eugene: University of Oregon Press.
Martignon, L., & Hoffrage, U. (1999). Why does one-
reason decision making work? A case study in eco-
logical rationality. In G. Gigerenzer, P. M. Todd &
The ABC Research Group (Eds), Simple heuristics that
make us smart (pp. 119–140). Oxford: Oxford Univer-
sity Press.
Newell, B. R. (2005). Re-visions of rationality? Trends
in Cognitive Sciences, 9, 11–15.
Newell, B. R. & Bröder, A. (2008). Cognitive processes,
models and metaphors in decision research. Judgment
and Decision Making, 3, 195–204.
Newell, B. R., Collins, P., & Lee, M. D. (2007). Adjusting
the spanner: Testing an evidence accumulation model
of decision making. In D. McNamara and G. Trafton
(Eds.), Proceedings of the 29th Annual Conference of
the Cognitive Science Society (pp. 533–538). Austin,
TX: Cognitive Science Society.
Newell, B. R., & Fernandez, D. (2006). On the binary
quality of recognition and the inconsequentiality of
further knowledge: Two critical tests of the recognition
heuristic. Journal of Behavioral Decision Making, 19,
333–346.
Newell, B. R., Rakow, T., Weston, N. J., & Shanks, D.
R. (2004). Search strategies in decision-making: The
success of success. Journal of Behavioral Decision
Making, 17, 117–137.
Newell, B. R., & Shanks, D. R. (2003). Take-the-best or
look at the rest? Factors influencing ‘one-reason’ de-
cision making. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 29, 53–65.
Newell, B. R., & Shanks, D. R. (2004). On the role of
recognition in decision making. Journal of Experi-
mental Psychology: Learning, Memory & Cognition,
30, 923–935.
Newell, B. R. & Shanks, D.R. (2007). Perspectives on
the tools of decision making. In Max Roberts (Ed.)
Integrating the mind (pp. 131–151). Hove, UK: Psy-
chology Press.
Newell, B. R., Weston, N. J., & Shanks, D. R. (2003).
Empirical tests of a fast and frugal heuristic: Not ev-
eryone “takes-the-best”. Organizational Behavior and
Human Decision Processes, 91, 82–96.
Over, D. E. (2003). From massive modularity to
metarepresentation: The evolution of higher cogni-
tion. In D. E. Over (Ed.), Evolution and the psychol-
ogy of thinking: The debate (pp. 121–144). Hove: Psy-
chology Press.
Pachur, T., Bröder, A., & Marewski, J. (in press). The
recognition heuristic in memory-based inference: Is
recognition a non-compensatory cue? Journal of Be-
havioral Decision Making.
Paivio, A. (1991). Dual coding theory: Retrospect and cur-
rent status. Canadian Journal of Psychology, 45, 255–
287.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988).
Adaptive strategy selection in decision making. Jour-
nal of Experimental Psychology: Learning, Memory,
& Cognition, 14, 534–552.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993).
The adaptive decision maker. Cambridge: Cambridge
University Press.
Pohl, R. F. (2006). Empirical tests of the recognition
heuristic. Journal of Behavioral Decision Making, 19,
251–271.
Rakow, T., Newell, B. R., Fayers, K., & Hersby, M.
(2005). Evaluating three criteria for establishing cue-
search hierarchies in inferential judgment. Journal of
Experimental Psychology: Learning, Memory & Cog-
nition, 31, 1088–1104.
Richter, T. & Späth, P. (2006). Recognition is used as
one cue among others in judgment and decision mak-
ing. Journal of Experimental Psychology: Learning,
Memory & Cognition, 32, 150–162.
Rieskamp, J. (2006). Perspectives of probabilistic infer-
ences: Reinforcement learning and an adaptive net-
work compared. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 32, 1355–1370.
Rieskamp, J. (2008). The importance of learning when
making inferences. Judgment and Decision Making, 3,
261–277.
Rieskamp, J., & Hoffrage, U. (1999). When do peo-
ple use simple heuristics and how can we tell? In G.
Gigerenzer, P. M. Todd & the ABC Research Group,
Simple heuristics that make us smart (pp. 141–167).
New York: Oxford University Press.
Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of
how people learn to select strategies. Journal of Ex-
perimental Psychology: General, 135, 207–236.
Schkade, D. A., & Kleinmuntz, D. N. (1994). Informa-
tion displays and choice processes: Differential effects
of organization, form, and sequence. Organizational
Behavior and Human Decision Processes, 57, 319–
337.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and
automatic human information processing: I. Detection,
search, and attention. Psychological Review, 84, 1–66.
Stone, D. N., & Schkade, D. A. (1991). Numeric and
linguistic information representation in multiattribute
choice. Organizational Behavior and Human Decision
Processes, 49, 42–59.
Vickers, D. (1979). Decision processes in visual percep-
tion. New York: Academic Press.
Wallsten, T. S., & Barton, C. (1982). Processing prob-
abilistic multidimensional information for decisions.
Journal of Experimental Psychology: Learning, Mem-
ory, and Cognition, 8, 361–384.