This work is scheduled to appear in
Journal of Experimental Psychology: Learning, Memory, and Cognition
© 2010 American Psychological Association
This article may not exactly replicate the final version published in the APA journal. It is not
the copy of record.
Fluent, fast, and frugal? A formal model evaluation of the interplay between memory,
fluency, and comparative judgments
Benjamin E. Hilbig
University of Mannheim
and Max-Planck Institute for Research on Collective Goods
Edgar Erdfelder & Rüdiger F. Pohl
University of Mannheim
in press
Journal of Experimental Psychology: Learning, Memory, and Cognition
ABSTRACT
A new process model of the interplay between memory and judgment processes was
recently suggested, assuming that retrieval fluency – that is, the speed with which objects are
recognized – will determine inferences concerning such objects in a single-cue fashion. This
aspect of the fluency heuristic, an extension of the recognition heuristic, has remained largely
untested due to methodological difficulties. To overcome the latter, we propose a
measurement model from the class of multinomial processing tree models which can estimate
true single-cue reliance on recognition and retrieval fluency. We applied this model to
aggregate and individual data from a probabilistic inference experiment and considered both
goodness of fit and model complexity to evaluate different hypotheses. The results were
relatively clear-cut, revealing that the fluency heuristic is an unlikely candidate for describing
comparative judgments concerning recognized objects. These findings are discussed in light
of a broader theoretical view on the interplay of memory and judgment processes.
INTRODUCTION
Research on how people reason, make judgments, or come to decisions has often
considered the role of memory in shaping these processes. As Weber and Johnson (2009)
pointed out in their comprehensive review, models of judgment and decision making that
explicitly refer to memory have indeed gained growing popularity. For example, Dougherty,
Gettys, and Ogden’s (1999) MINERVA-DM model explains probability judgments through
the matching processes of a global memory model. Models focusing on information sampling
(e.g. Busemeyer & Pleskac, 2009; Busemeyer & Townsend, 1993; J. G. Johnson &
Busemeyer, 2005) also explicitly refer to memory as one possible source of the evidence that
is accumulated before a choice is made. Also, models of how (Reyna, 2004) or in which order
(E. J. Johnson, Häubl, & Keinan, 2007; Weber et al., 2007) information is retrieved from
memory have been proposed as accounts of phenomena in judgment and decision making.
One particular view of the interplay between memory and judgments is taken in the
fast-and-frugal heuristics research program (e.g. Gigerenzer, 2004). Here, emphasis is placed
on how the output of memory is used in specific heuristic judgment rules. The focus of the
current investigation is the so-called fluency heuristic (FH; Schooler & Hertwig, 2005), a
relatively new fast-and-frugal judgment strategy and an extension of the recognition heuristic
(RH; Goldstein & Gigerenzer, 2002). Consider the task of judging which of two objects has
the higher criterion value (e.g., which of two cities is more populous). The recognition
heuristic posits that if exactly one of these objects is recognized, the recognized object should
be inferred to have the higher criterion value. However, if both objects are recognized, the RH
cannot be applied; consequently, the decision maker must resort to another strategy1 – the FH
as suggested by Schooler and Hertwig (2005): If both to-be-compared objects are recognized,
one should compare the retrieval times for the two object names and choose the object
recognized faster, given that the difference in retrieval time between the two exceeds a certain
threshold value. The FH thus bases probabilistic inferences on retrieval fluency.
With good reason, fluency – or the ease with which information can be retrieved or
processed – is one of the most prominent concepts across various subfields in cognitive
psychology. Specifically, fluency is a factor of interest in memory research, judgment and
decision making, social cognition, and many more (e.g., Alter & Oppenheimer, 2008; Reber,
Schwarz, & Winkielman, 2004; Whittlesea & Leboe, 2003). For example, it has been shown
that fluency drives judgments of truth (Reber & Schwarz, 1999; Unkelbach, 2007), liking
(Reber, Winkielman, & Schwarz, 1998), and others. Given the importance of fluency
(Oppenheimer, 2008), it thus seems reasonable to propose a heuristic using this cue for
judgments. However, it should be noted that fluency has been conceptualized in different
ways and various effects of fluency have been reported (for an overview see Alter &
Oppenheimer, 2009). The theory we are concerned with herein, Schooler and Hertwig's
(2005) FH theory, pertains to one particular understanding of what might be called fluency.
We will elaborate on differences in how fluency is conceptualized in the discussion because
these differences provide a backdrop for the interpretation of our findings. For the time being,
however, it suffices to bear in mind that the FH hinges on fluency in terms of the speed with
which objects are judged to be recognized.
To allow for many correct judgments, both the RH and FH necessitate an
environmental correlation between the cue (recognition or fluency) and the to-be-inferred
criterion (e.g. city population). This cue-criterion association has been termed the "ecological
validity" (Gigerenzer & Goldstein, 1996), signifying the proportion of correct inferences
possible from consistently following a cue's prediction, given that it discriminates between
options. Indeed, there are quite a number of judgment domains in which both recognition and
fluency comprise substantial ecological validity, that is, predict certain criteria: For example,
recognition is positively correlated with the population of cities, the length of rivers, the
height of mountains, the fame of NHL players, the outcome of elections, and several more
(Marewski, Gaissmaier, Schooler, Goldstein, & Gigerenzer, 2010; Pohl, 2006; Snook &
Cullen, 2006). Fluency has so far been shown to predict city population, the success of music
artists, and the revenues of companies (Hertwig, Herzog, Schooler, & Reimer, 2008; Hilbig,
2010b). These findings are plausible since, for instance, larger cities are more likely to appear
in the news and are therefore more commonly and speedily recognized (Hertwig et al., 2008).
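As an illustration of the definition given above, the ecological validity of the recognition cue could be computed from pair data roughly as sketched below; the data structure and field names are hypothetical, and only pairs in which recognition discriminates enter the computation.

```python
# Sketch: ecological validity of recognition, i.e., the proportion of correct
# inferences from always choosing the recognized object, computed only over
# pairs in which exactly one object is recognized. Field names are assumed.
def recognition_validity(pairs):
    correct = discriminating = 0
    for p in pairs:   # each pair: recognition judgments and true criterion values
        if p['rec_a'] == p['rec_b']:
            continue  # cue does not discriminate (both or neither recognized)
        discriminating += 1
        # correct if the recognized object really has the larger criterion value
        correct += (p['crit_a'] > p['crit_b']) == p['rec_a']
    return correct / discriminating if discriminating else float('nan')
```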
As Shah and Oppenheimer (2008) have pointed out, both the RH and FH are
quintessential instances of using easy-to-access information to make inferences and can thus
be considered mechanisms of effort reduction. In addition, both the RH and FH are “fast and
frugal” because they are proposed as instances of one-reason decision making. That is, “like
the recognition heuristic, the fluency heuristic relies on only one consideration to make a
choice” (Schooler & Hertwig, 2005, p. 612, emphasis added). In other words, the RH theory
assumes that the recognized of two objects is chosen irrespective of any further knowledge or
information. By implication, recognition is thus relied on in a noncompensatory fashion
(Goldstein & Gigerenzer, 2002; Pachur, Bröder, & Marewski, 2008). Likewise, the FH theory
posits that – once fluency discriminates between two recognized objects – the decision is
determined and no further evidence for (or against) either of the objects is considered. So, in
our view, the truly bold assertion here is that these cues alone determine choice behavior
(Gigerenzer & Goldstein, 1996; Goldstein & Gigerenzer, 2002). In the remainder of this
article, we will focus on this striking aspect.
With respect to the RH, the claim of one-reason decision making has been challenged
in previous research (e.g. Bröder & Eichler, 2006; Newell & Fernandez, 2006; Oppenheimer,
2003; Richter & Späth, 2006; for an overview see Hilbig, 2010b). In sum, most findings
indicate that the recognition cue is not generally considered in an isolated or
noncompensatory fashion, though it does describe choice behavior in a substantial proportion
of cases (Hilbig, Erdfelder, & Pohl, 2010). Similarly, the majority of decision makers is
typically found not to rely on the RH consistently (Glöckner & Bröder, in press; Hilbig &
Richter, in press), though there always is a minority of participants who seem to consistently
follow the recognition cue in isolation (Hilbig & Pohl, 2008; Pachur et al., 2008). By
comparison, very little is known about the potential use of the FH, that is, whether and how
many decision makers actually rely on retrieval fluency alone when comparing recognized
objects.
A first empirical investigation of the FH stems from Hertwig et al. (2008) who
measured fluency through the latencies of participants' recognition judgments. They reported
(a) that retrieval fluency of recognized objects is an ecologically valid and thus useful cue in
several judgment domains, (b) that people could use the FH as they can reliably identify the
objects they recognize more speedily, and (c) that actual choices are often (> 60%) in line
with the prediction of the FH, that is, more speedily recognized objects were more likely to be
chosen. Also, (d) the degree to which choices were in line with the predictions of the FH
increased with larger differences in fluency. Finally, (e) Hertwig et al. (2008) reported that
experimentally induced increases in fluency also affect choices. Specifically, the authors
increased the fluency of a subset of objects by asking participants to perform a syllable
counting task prior to making judgments between pairs of objects. Objects previously
processed in the counting task had a slightly increased probability (.55) of being judged to
have the larger criterion value.
However, whereas these findings clearly demonstrate that fluency affects judgments
(in line with the vast literature on fluency to which we pointed above), none provide distinct
evidence for use of the FH in terms of reliance on the fluency cue in isolation. Most
importantly, observed choices in line with a heuristic’s prediction cannot imply that this
heuristic was actually used (e.g., Fiedler, 2010; Hilbig & Pohl, 2008): Further knowledge or
information – which argued for the object eventually chosen – may well have been
considered. So, the confound between the cue in question (here: fluency) and further pieces of
evidence renders the proportion of observed choices in line with a strategy (also denoted the
adherence or accordance rate) uninformative. It is quite plausible that objects recognized more
fluently are also those for which further knowledge is more readily accessible and also more
valid. Thus, judgments in line with the FH may just as well reflect an effect of retrieving and
integrating more positive evidence arguing for the more fluently recognized object, rather
than reliance on a one-cue heuristic.
The methodological caveats inherent in adherence rates can be illustrated in several
ways: Firstly, simulations have shown that the adherence rate to the RH consistently remains
at about .60 even if no single simulated decision maker ever relied on recognition in isolation
(Hilbig, 2010a). Secondly, empirically observable adherence rates have been shown to drop
from above .90 to below .20 when all possible positively confounded pieces of further
information are controlled for (Hilbig, 2008b). Note, however, that these findings pertain to a
heuristic which is not based on memory processes. Nonetheless, Bröder and Eichler (2006)
reported corresponding findings for the RH, corroborating that high adherence will often be
driven by confounded pieces of evidence. Finally, above chance-level adherence rates can be
demonstrated for practically any single-cue heuristic which relies on an ecologically valid
cue, even if the heuristic is essentially a hoax and most certainly never truly used (Hilbig,
2010b). As an aside, these facts also show why comparative model tests based on adherence rates (e.g., Marewski et al., 2010) must be interpreted very carefully.
For these reasons, Hertwig et al.’s (2008) findings cannot confirm that the FH was
used in the sense of one-reason reliance on fluency. To be fair, Hertwig et al. (2008) are
appropriately careful in discussing their results and do not claim or suggest that these support
such a conclusion. The question thus remains largely open. In a first attempt to test the notion
of single-cue reliance on retrieval fluency, Hilbig (2010b) analyzed corresponding data with
the discrimination index (DI; Hilbig & Pohl, 2008). The DI is an individual measure,
computed as the difference in adherence rates for cases in which a heuristic yields a factually
correct versus false judgment. Stated differently, it denotes the probability of following a
cue’s prediction conditional on whether this leads to correct or false judgments. Since true
users of the FH consider one cue (fluency) only, they cannot discriminate whether or not the
prediction of this cue is valid. Consequently, their adherence rates must be equally large in both of these cases and they should therefore score DI ≈ 0. However, a majority of
participants (86%) showed DI scores reliably different from zero which is sufficient to
conclude that these decision makers did not pervasively use the FH (Hilbig, 2010b).
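In code, the DI could be computed per participant roughly as follows; the trial representation is assumed, and the key point is that a true single-cue user should show adherence rates that do not differ between the two conditions (DI ≈ 0).

```python
# Sketch of the discrimination index (DI; Hilbig & Pohl, 2008): the difference
# in adherence rates between trials where the cue's prediction is factually
# correct and trials where it is false. Trial field names are hypothetical.
def discrimination_index(trials):
    adh_correct = [t['adhered'] for t in trials if t['cue_correct']]
    adh_false   = [t['adhered'] for t in trials if not t['cue_correct']]
    mean = lambda xs: sum(xs) / len(xs) if xs else float('nan')
    return mean(adh_correct) - mean(adh_false)
```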
These findings notwithstanding, the DI bears several limitations (Hilbig, 2010a) since
it is mainly an ad-hoc proxy. Most importantly, it does not provide any estimate of the
probability of FH-use. Also, it cannot provide information about FH-use of those participants
who score DI ≈ 0, since such a DI score is merely necessary but not sufficient for FH-use.
Finally, the DI does not allow for formal model comparisons based on goodness-of-fit and
model complexity as criteria. We thus intended to test the FH by extending a recently
developed formal measurement model of the RH (Hilbig, Erdfelder et al., 2010) to also
estimate the probability of FH-use. We will introduce this extended measurement model in
what follows.
The r-s-model
As hinted previously, the methodological pitfall of adherence rates described above
clearly also pertains to the RH: Judging the recognized of two objects to have the higher
criterion value could – but need not necessarily – be based on consideration of the recognition
cue alone. Therefore, Hilbig, Erdfelder et al. (2010) proposed a formal measurement model,
named r-model, to assess true RH-use. In a nutshell, this multinomial processing tree (MPT)
model (e.g. Batchelder & Riefer, 1999; Erdfelder et al., 2009) is tailored to the typical
paradigm for investigating the RH: Participants are presented with pairs of objects and asked
to judge which object yields the higher criterion value in the given comparison. Furthermore,
participants provide recognition judgments for each single object, that is, they state whether
they have encountered an object’s name previously or not. Based on these data, the r-model
subdivides paired-comparison judgments into three choice scenarios or ‘cases’: (i) Both
options or objects recognized (knowledge case), (ii) neither object recognized (guessing case),
or (iii) exactly one object recognized (recognition case) (cf. Goldstein & Gigerenzer, 2002).
Empirically observed comparative judgments in each of these cases are categorized as correct
or false judgments with respect to the true criterion and, in recognition cases, whether the
recognized object was chosen, that is, whether a given choice followed the recognition cue
(adherence to the RH).
This total of eight observable outcome categories is explained through four latent
parameters, namely the recognition validity (parameter a), the knowledge validity (b), the
probability of a correct guess (g), and, centrally, the probability of using the RH, that is,
following the recognition cue in isolation (r). The main assumption is that whenever exactly
one object is recognized, a decision maker can either use the RH (probability r) or consider
any additional information or knowledge beyond recognition (1 – r). In the former case, one
will adhere to the RH and achieve a correct judgment with probability a which denotes the
recognition validity. In the latter case, however, a correct judgment will be achieved only if
one integrates valid knowledge into one’s judgment (with probability b). Importantly, this can
also imply choice of the unrecognized object if it factually comprises the larger criterion value
(which occurs with probability 1 – a).
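Read as equations, the recognition-case tree of the r-model implies the following category probabilities; the sketch below is our rendering of the verbal description just given, not the authors' code.

```python
# Predicted probabilities of the four observable outcomes in recognition cases
# (exactly one object recognized) under the r-model: r = probability of RH-use,
# a = recognition validity, b = knowledge validity. The four values sum to 1.
def r_model_recognition_case(r, a, b):
    return {
        ('adheres to RH', 'correct'):        r * a + (1 - r) * b * a,
        ('adheres to RH', 'false'):          r * (1 - a) + (1 - r) * (1 - b) * (1 - a),
        ('chooses unrecognized', 'correct'): (1 - r) * b * (1 - a),
        ('chooses unrecognized', 'false'):   (1 - r) * (1 - b) * a,
    }
```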
In a series of experiments, the r-model was shown to fit empirical data well. More
importantly, the psychological meaning of the central model parameter r was validated
experimentally, for example by showing that it was sensitive to a manipulation instructing
participants to follow the recognition cue in isolation (Hilbig, Erdfelder et al., 2010). Finally,
recent simulations showed that – unlike any other measure of RH-use available so far – the r-
model provides reliable and uncontaminated estimates of RH-use which are adequately robust
even under conditions hampering correct identification of judgment strategies (Hilbig, 2010a).
Thus, given the model’s usefulness for measuring single-cue reliance on recognition, it
seemed obvious to extend it to also estimate FH-use and thereby test the FH.
Specifically, as described above, the FH refers to cases in which both objects are
recognized (Schooler & Hertwig, 2005). Thus, to incorporate FH-use into the r-model, the
knowledge cases in the r-model must be subdivided into those in which retrieval fluency
discriminates between the objects (whenever fluency differences are above-threshold) and
those in which it does not. Following previous work on the FH (Schooler & Hertwig, 2005),
we initially assume that a difference threshold of 100ms applies to the FH. This assumption is
well-justified by the findings that differences in retrieval fluency above this threshold (i) are
reliably identified by participants and (ii) bear above-chance-level ecological validity
(Hertwig et al., 2008). However, we will later relax this assumption and test how well the FH
accounts for choice behavior if different threshold values are implemented.
As implied by the threshold, some cases will allow for application of the FH whereas
others will not. The two topmost trees in Figure 1 refer exactly to these two types of
knowledge cases in the extended r-model, denoted r-s-model henceforth. If both objects are
recognized but the difference in retrieval fluency is below the threshold (a fluency-
homogeneous case), a correct choice will occur with the probability of retrieving valid
knowledge, b1 (and a false choice with probability 1 – b1). If, by contrast, fluency does
discriminate between the two recognized objects (second tree, representing a fluency-
heterogeneous case), one can rely on fluency alone, that is, use the FH, which occurs with
probability s (for speed-based judgment). Here, the model functions in exactly the same way
as the r-model does whenever exactly one object is recognized: If one uses the FH and thus
follows fluency in isolation, one will make a correct choice with probability c (denoting the
fluency validity) or a false choice with probability 1 – c. If, by contrast, the FH is not used
(with probability 1 – s), performance will depend on the validity of the additional information
or knowledge considered: If it is valid (b2) one will adhere to the fluency cue whenever the
latter implies a correct judgment (c) but refrain from doing so otherwise (1 – c). Vice versa,
given invalid knowledge (1 – b2), one will erroneously fail to adhere to the fluency cue even
though it implies a correct judgment (c) or adhere to the fluency cue even though this choice
is, in effect, false (1 – c). The third tree in Figure 1 refers to recognition cases and mirrors the
r-model (cf. Hilbig, Erdfelder et al., 2010) with the exception that a third knowledge validity
parameter (b3) is implemented. Finally, the fourth tree refers to guessing cases.
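The two knowledge-case trees can be summarized by the following category probabilities, conditional on a knowledge case; again, this is our reading of Figure 1 and the verbal description, not the authors' code.

```python
# Sketch of the two knowledge-case trees of the r-s-model (both objects
# recognized). p = proportion of fluency-homogeneous cases, s = probability of
# FH-use, c = fluency validity, b1/b2 = knowledge validities. The probabilities
# are conditional on a knowledge case and jointly sum to 1.
def rs_model_knowledge_cases(p, s, c, b1, b2):
    homogeneous = {                  # fluency difference below threshold
        'correct': p * b1,
        'false':   p * (1 - b1),
    }
    heterogeneous = {                # fluency difference above threshold
        ('adheres to FH', 'correct'):         (1 - p) * (s * c + (1 - s) * b2 * c),
        ('adheres to FH', 'false'):           (1 - p) * (s * (1 - c) + (1 - s) * (1 - b2) * (1 - c)),
        ('chooses slower object', 'correct'): (1 - p) * (1 - s) * b2 * (1 - c),
        ('chooses slower object', 'false'):   (1 - p) * (1 - s) * (1 - b2) * c,
    }
    return homogeneous, heterogeneous
```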
Note that the r-s-model, as outlined so far, includes three knowledge validity
parameters (b1, b2, b3) for different subtypes of object pairs (cf. Figure 1). The r-model,
however, uses a single b parameter only. This discrepancy arises because the r-s-model,
unlike the r-model, further subdivides the knowledge cases into fluency-heterogeneous and
fluency-homogeneous cases. It is theoretically reasonable to assume that the knowledge
validity differs between the former and the latter cases. Hence, two different knowledge
validities b1 and b2 are required for fluency-homogeneous and fluency-heterogeneous
knowledge cases, respectively.
However, given that the r-model has been well-validated as a measurement tool
(Hilbig, 2010a; Hilbig, Erdfelder et al., 2010), we consider it vital that the r-s-model is a true
extension of the r-model. So, in light of the necessity for three b parameters in total, how can
we ensure that the r-s-model reduces to the r-model for s = 0? For an answer, it is necessary to
analyze how the knowledge validities b1 and b2 relate to the knowledge validity parameter of
the r-model (called b3 in the r-s-model) when (i) s = 0 and (ii) knowledge cases are aggregated
across fluency-heterogeneous and fluency-homogeneous pairs. As shown in Appendix A, the
solution to this problem is
b3 = p · b1 + (1 – p) · b2 (1)
where p denotes the proportion of fluency-homogeneous cases out of all knowledge cases. In
other words, the parameters r, a, b3, and g of the r-s-model with s = 0 are identical to the
parameters r, a, b, and g of the r-model, respectively, if and only if the linear constraint (1)
holds for the knowledge validity parameters of the r-s-model. Given how important it is that the r-s-model be a true extension of the r-model, we propose to apply the r-s-model in combination
with the linear constraint (1) only. For the sake of brevity, we refer to the r-s-model including
constraint (1) simply as ‘the r-s-model’.
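To see informally why (1) is the appropriate aggregation rule (the full derivation is given in Appendix A), note that with s = 0 a correct judgment occurs with probability b2 · c + b2 · (1 – c) = b2 in a fluency-heterogeneous knowledge case and with probability b1 in a fluency-homogeneous one. Weighting the two subtypes by their relative proportions 1 – p and p, the overall probability of a correct judgment in knowledge cases is p · b1 + (1 – p) · b2, which is exactly the role played by the single knowledge validity b (labeled b3 here) in the r-model.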
The r-s-model comprises a total of 12 observable outcome categories of which nine are
free. These are explained through eight free parameters (r, s, p, a, b1, b2, c, g) 2 such that the
χ²-goodness-of-fit test of the r-s-model has 9 – 8 = 1 degree of freedom (df). Consequently,
the r-s-model can be evaluated in much the same way as the r-model. First, we can test the fit
of the r-s-model statistically, for example, using the likelihood-ratio goodness-of-fit statistic
G² with df = 1. Second, given that the r-s-model holds, we can compare its fit with the fit of several nested submodels using the χ²-difference test statistic ΔG². Examples include
submodels asserting (i) r = 1 (i.e., the RH is used without exception), (ii) s = 1 (i.e., the FH is
used without exception), or (iii) r = s = 1 (i.e., both heuristics are used without exception).
Third, note that the submodels of the r-s-model including constraints (ii) and (iii) involve a
deterministic interpretation of the FH as proposed by Schooler and Hertwig (2005).
Alternatively, one could propose a more plausible probabilistic interpretation in which the
choice predicted by a heuristic is switched into the alternative response with some error
probability f – in the sense of a naïve error theory (Rieskamp, 2008). Hilbig, Erdfelder et al.
(2010) previously explored the performance of a similar probabilistic extension of the RH.
We will complement these analyses by extending the r-s-model with r = s = 1 in an analogous
way. Fourth, provided that the number of choices per participant is sufficiently large, analyses
can be performed and parameter estimates sought at the single-participant level.
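For readers unfamiliar with these statistics, a generic sketch follows; it is not tied to any particular MPT software, and the observed and expected category frequencies would come from the data and the fitted model, respectively.

```python
import math

# Likelihood-ratio goodness-of-fit statistic: G² = 2 * sum(obs * ln(obs / exp)),
# summed over all outcome categories (categories with zero observations drop out).
def g_squared(observed, expected):
    return 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

# Nested-model comparison: the G² difference between a restricted model (e.g.,
# with s fixed to 1) and the full r-s-model is asymptotically chi-square
# distributed, with df equal to the number of parameters fixed.
def delta_g_squared(g2_restricted, g2_full):
    return g2_restricted - g2_full
```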
As identifiability is required for the existence of unique parameter estimates (Erdfelder
et al., 2009), we assessed identifiability using different methods (Moshagen, 2010;
Schmittmann, Dolan, Raijmakers, & Batchelder, 2010). As indicated by full column ranks of
the Jacobian matrix, local identifiability was given for each of the data sets analyzed in the
present paper. We checked global identifiability using the simulated identifiability method
suggested by Rouder and Batchelder (1998). Specifically, for 1000 model-consistent data sets
generated by random parameter vectors, the estimated parameters exactly matched the values
of the parameters initially used to generate the frequencies (0 out of 8000 possible deviations,
given a tolerance of 1.0E-4). Hence, identifiability of the r-s-model can safely be assumed.
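Schematically, the simulated-identifiability check can be thought of as follows; predict_frequencies and fit_model are hypothetical placeholders for the r-s-model equations and an MPT estimation routine.

```python
import random

# Sketch of the simulated identifiability method (Rouder & Batchelder, 1998):
# generate model-consistent data from random parameter vectors, re-estimate the
# parameters, and count deviations beyond a small tolerance.
def simulated_identifiability(predict_frequencies, fit_model,
                              n_sets=1000, n_params=8, tol=1e-4):
    deviations = 0
    for _ in range(n_sets):
        true_params = [random.random() for _ in range(n_params)]  # r, s, p, a, b1, b2, c, g
        freqs = predict_frequencies(true_params)   # model-implied category frequencies
        est_params = fit_model(freqs)
        deviations += sum(abs(t - e) > tol for t, e in zip(true_params, est_params))
    return deviations   # 0 deviations indicates identifiability
```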
EXPERIMENT
Materials, procedure, and participants
The current experiment closely mirrored those previously conducted to investigate the
FH (Hertwig et al., 2008). Specifically, participants were given two tasks: a judgment-task in
which they repeatedly inferred which of two cities was more populous and a recognition-task
in which they were asked to indicate of which cities they had heard before. As materials we
used the 14 most populous cities from two domains, namely Austria and Poland (excluding,
for each, the most populous city to reduce the danger of conclusive criterion knowledge, cf.
Hilbig, Pohl, & Bröder, 2009; Pachur et al., 2008). In each domain, the 14 cities were
exhaustively paired, resulting in 91 paired-comparisons which were presented to participants
one after the other in random order. The order in which participants performed the 91 city-
size judgments in the two domains was counterbalanced. In the recognition-task, participants
were shown the 14 cities from each domain (again, sequentially and in random order) and
asked to indicate whether they had heard of the city before. For this recognition-judgment,
response latencies were recorded, thus serving as a proxy of retrieval fluency (cf. Hertwig et
al., 2008). A total of 66 (54 female) undergraduate students of the University of Mannheim,
aged 18 to 46 years (M = 22, SD = 5.3), participated in the experiment in exchange for course
credit or a flat fee cash payment.
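As a side note on the design, exhaustively pairing 14 cities yields 14 · 13 / 2 = 91 comparisons per domain, for example:

```python
from itertools import combinations

cities = [f"city_{i}" for i in range(1, 15)]   # placeholder names for the 14 cities
pairs = list(combinations(cities, 2))          # every city paired with every other
assert len(pairs) == 91
```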
Aggregate-level results
Data were analyzed across the two domains, that is, based on 182 choices per
participant. Initially, the fluency-threshold was set to 100ms as originally proposed by
Schooler and Hertwig (2005). First, we assessed the operational statistics typically reported,
viz. the mean recognition validity (M = .83, SD = .07) and fluency validity (M = .59, SD =
.19) as well as the mean adherence rate to the predictions of the RH (M = .89, SD = .08) and
FH (M = .63, SD = .15). As these numbers indicate, the data were highly comparable to the
results previously reported by Hertwig et al. (2008). The fluency validity was greater than
chance, t(65) = 3.8, p < .001, Cohen’s d = 0.5, as was the mean adherence rate to the FH, t(65)
= 6.9, p < .001, Cohen’s d = 0.85. So, fluency was an ecologically valid cue and participants’
choices were consistent with its predictions in a substantial proportion of cases.
To apply the r-s-model, we considered the overall frequency of the twelve observable
outcome categories in the r-s-model (66 · 182 = 12,012 choices in total, see Appendix B),
computed from participants’ recognition judgments, corresponding recognition latencies, and
choices in the judgment-task. Based on these frequencies, we estimated the parameters and
determined the overall fit of the r-s-model using standard software for MPT models
(Moshagen, 2010). The model fit the data well (G²(1) = 1.5, p = .22), despite the large number
of cases in total and thus high statistical power. Parameter estimates can be found in Table 1.
Firstly, the estimates for the recognition and fluency validities (parameters a and c) almost
exactly mirrored the corresponding operational statistics reported above which corroborates
the estimates obtained from the r-s-model. Moreover, the estimated probability of RH-use
(parameter r) resembled previously reported findings (Hilbig, Erdfelder et al., 2010). As such,
the proportion of cases in which participants relied on recognition in isolation was substantial,
though significantly below the mean adherence rate to the RH (G² = 302, p < .001 when
fixing r = .89) and thus, by implication, smaller than 1.
Most importantly for the current research question, the estimated proportion of cases
in which participants relied on retrieval fluency in isolation (parameter s) was rather small
(.23, see Table 1). As such, it turned out significantly below the mean adherence rate to the
FH, G² = 503, p < .001 when fixing s = .63. By implication, a deterministic understanding of
the FH (s = 1) must thus also be rejected. So, on the aggregate level, use of the FH appeared
to be quite unlikely, accounting for less than one fourth of choices in cases in which it could
have been applied. Also, the results once more confirm that adherence rates will severely
overestimate the probability of using a strategy, as non-use of the FH was more than twice as
likely when estimated through the r-s-model which unconfounds the contributions of retrieval
fluency and further knowledge or other information.
However, one may argue that a deterministic understanding of the RH or FH is not
very reasonable. Possibly, the core assumptions of either or both of these heuristics hold, if
strategy execution errors are considered. The solution is to add an error parameter f to every
terminal branch of the r-s-model, denoting the probability of switching one’s choice. First,
when adding the f parameter to the r-s-model, the overall fit remained unchanged and f was estimated to be zero. So, as was originally the case for the r-model (Hilbig, Erdfelder et al., 2010),
the r-s-model did not necessitate an error parameter to account for the data. Next, different
nested submodels were compared to the r-s-model including the error parameter. Specifically,
after adding the free error parameter f, we implemented (i) r = 1 (probabilistic RH), (ii) s = 1
(probabilistic FH), and (iii) r = s = 1 (probabilistic RH and FH). Analyses revealed that each
of these models fit significantly worse than the original r-s-model (all G² > 94, all p < .001),
despite substantial estimated error rates (.10, .18, and .18 for the three submodels,
respectively). So, probabilistic versions of the RH and FH must be rejected at the aggregate
level, too.
Implementation of different fluency thresholds
So far, the results contradict the claim that fluency is relied on in isolation whenever
the retrieval times of to-be-compared objects differ by 100ms or more. However, the poor
performance of the FH may well be due to this threshold of 100ms. We therefore sought to
test the explanatory power of the FH when implementing different thresholds. Specifically,
we analyzed the aggregate data with the original r-s-model (without an error parameter),
conditional upon 11 fluency-threshold values starting with 0ms and increasing in steps of
100ms to 1000ms. In particular, we focused on the s parameter which denotes the probability
of using the FH. However, the different thresholds will necessarily also affect p, that is, the
proportion of fluency-homogeneous cases: The higher the threshold, the less often fluency
will discriminate. So, the proportion of cases in which the FH could be used (1 – p) will
decline.
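Procedurally, this analysis amounts to the loop sketched below; build_frequencies and fit_rs_model are hypothetical placeholders for categorizing the raw data under a given threshold and for the MPT fitting routine.

```python
# Sketch of the threshold analysis: re-fit the r-s-model for thresholds from
# 0 to 1000 ms in 100 ms steps and track 1 - p, s, and their product, i.e.,
# the share of all knowledge cases explained by the FH.
def threshold_analysis(raw_data, build_frequencies, fit_rs_model):
    results = []
    for threshold_ms in range(0, 1001, 100):
        freqs = build_frequencies(raw_data, threshold_ms)
        est = fit_rs_model(freqs)                 # expected to return estimates incl. 'p' and 's'
        explained = (1 - est['p']) * est['s']
        results.append((threshold_ms, 1 - est['p'], est['s'], explained))
    return results
```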
The results are shown in Figure 2 which plots the proportion of fluency-heterogeneous
knowledge cases (1 – p) and the estimated probability of using the FH (s) against the
threshold implemented. Additionally, the product of both ((1 – p) · s) which denotes the
proportion of all knowledge cases explained by the FH is displayed. As can be seen, use of the
FH remained unlikely across all threshold levels, with a maximum of s = .31 when
implementing a threshold of 600ms. However, since the proportion of fluency-heterogeneous
cases was already reduced to 1 – p = .22 at this level, the FH explained only 7% of all
knowledge cases. At most, namely at the 0ms-threshold level, the FH explained 19% of all
knowledge cases. Overall, these analyses reveal that the critical results reported above are
robust and do not depend on the fluency threshold implemented.
Individual-level results
It is a well-documented issue in the behavioral sciences that aggregate data analyses
may lead to different results and conclusions than a focus on the level of individuals (Estes &
Maddox, 2005). Aggregate analyses, on the one hand, may yield biases because aggregate
results do not necessarily mirror typical individual tendencies; findings obtained from single
individuals, on the other hand, may be less reliable and distorted by biased parameter
estimates, especially due to the substantially smaller number of observations. Both early
seminal work (e.g. Estes, 1956; Sidman, 1952) and more recent discussions (Chechile, 2009;
Cohen, Sanborn, & Shiffrin, 2008) essentially imply that – while neither approach is per se or
generally superior – converging evidence from both aggregate and individual-level analyses
warrants particular confidence in the findings. Indeed, given evidence for clear individual
differences in judgment strategies (e.g. Hilbig, 2008a), analyses of individual data have been
called for repeatedly when investigating fast and frugal heuristics (Gigerenzer & Brighton,
2009; Marewski et al., 2010).
So, to test whether the above findings hold on the level of individuals, we next applied
the r-s-model (with the original 100ms threshold) to the 182 choices of each single
participant3. Figure 3 shows individual estimates of the probability of FH-use (parameter s) as
compared to the individual adherence rate to the FH. As can be seen, the estimated probability
of using the FH was well below the adherence rate for every single participant. Indeed, the s
parameter estimate of only one single participant was larger than .75. Ten participants (15%)
appeared to have used the FH with a probability above .50. The remaining 55 participants
(85%) used the FH in less than half of all possible cases, with 20 participants (31%) even
showing s parameter estimates below .10.
To gain further insight concerning use of the FH, we next conducted formal model
comparisons based on individual data. Specifically, we tested three competing hypotheses,
representing the assumptions that the FH is (I) always used, (II) sometimes used, or (III) never
used. The former (I) represents the deterministic FH theory at the single-participant level and
can be implemented as a submodel of the r-s-model by fixing s = 1. Variant II yields a more
lenient theoretical view, implying that some choices may reflect reliance on fluency in
isolation whereas others will not. To implement this assumption, no parameter restrictions in
the r-s-model are necessary. Of course, this more lenient view comes at the cost of increased
model complexity (one additional free parameter, namely s). Finally (III), one can assume that
the fluency cue is never actually considered in isolation; this assumption is implemented in
the r-s-model by fixing s = 0 which comprises the same number of free parameters as variant
(I).
In order to compare all three variants in terms of goodness-of-fit – while penalizing
models with more free parameters – we relied on the Bayesian Information Criterion (BIC;
e.g. Myung, 2000; Raftery, 1995). BIC was computed separately for each model variant and
participant. From the corresponding BIC scores, we then approximated the Bayesian posterior
probability of each hypothesis (i.e. model variant) given the data (Wagenmakers, 2007; see
also Wasserman, 2000): Assuming equal priors4, the posterior probability of a model Hi out of
k different models is given by
Pr_BIC(Hi | D) = exp[–½ · BIC(Hi)] / Σj exp[–½ · BIC(Hj)], (2)
where the sum runs over all k candidate models Hj.
This bears the additional advantage of being able to quantify the superiority of a
model, whereas mere BIC comparisons are limited to “more or less” conclusions. In
accordance with Wagenmakers (2007, Table 3), we considered a posterior probability above
.50 as weak evidence, above .75 as positive evidence, and above .95 as strong evidence. The
results are displayed in Table 2.
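Computationally, Equation (2) reduces to a softmax over –BIC/2; a sketch follows, assuming equal priors and that the BIC values have been obtained by fitting each model variant to a participant's data.

```python
import math

# Approximate posterior probabilities Pr(Hi | D) from BIC values (Equation 2).
# Subtracting the minimum BIC before exponentiating leaves the ratios unchanged
# and avoids numerical underflow.
def bic_posteriors(bic_values):
    weights = [math.exp(-0.5 * (b - min(bic_values))) for b in bic_values]
    total = sum(weights)
    return [w / total for w in weights]

# Example usage for variants I (s = 1), II (s free), and III (s = 0):
# posterior_I, posterior_II, posterior_III = bic_posteriors([bic_I, bic_II, bic_III])
```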
As can be seen in Table 2, the assumption that the FH is always used was superior for
only one single participant. Indeed, the Bayesian posterior probability of the first model
variant was smaller than .0001 for all other participants. The hypothesis that the FH was
sometimes used (model variant II) was supported for 23% of our sample, yielding strong
evidence for a total of 4 participants (6%). On the other hand, the data of 49 participants
(75%) favored the hypothesis that the FH was never used (model variant III) yielding positive
evidence as implied by a posterior probability ≥ .75 for 37 participants (57%). So, for a clear
majority of decision makers, the assumption that retrieval fluency is never considered in
isolation explained the data best.
DISCUSSION
The concept of fluency has received much attention in cognitive and experimental
research in the past (Alter & Oppenheimer, 2009; Oppenheimer, 2008). While fluency comes
in different shades and tastes, its importance is quite undisputed. By contrast, it is a matter of
dispute – or, at the very least, an open question – whether Schooler and Hertwig’s (2005)
fluency heuristic is a viable model of comparative judgments. The main idea of the FH theory
is that in comparing two recognized objects with respect to some criterion, decision makers
should rely on the speed of their recognition judgment in making such an inference.
Like the RH (Goldstein & Gigerenzer, 2002), of which the FH is an extension, it is
implied to function in a noncompensatory fashion, given that the cue in question (recognition
or, if both objects are recognized, fluency) discriminates between the objects. In other words,
both heuristics predict that one cue alone is considered in isolation while all further
information is ignored (Schooler & Hertwig, 2005). Whereas many studies have addressed
this claim with respect to the RH (for an overview, see Hilbig, 2010b) – and mostly rejected it
in its strong form – very little is known about use of the FH in the sense of considering
differences in fluency between recognized objects in isolation. The empirical finding that
participants’ choices are often in line with the FH (Hertwig et al., 2008) is, unfortunately,
uninformative for this particular question due to the confound between the cue in question and
an unknown number of further cues decision makers may have considered.
In the current work, we set out to assess use of the FH by extending a formal
measurement model of the RH (Hilbig, Erdfelder et al., 2010). Two arguments favored this
step: First, the FH was proposed to explain judgments in those cases in which the RH makes
no prediction (whenever both objects are recognized). It can thus be considered an add-on to
the RH. Secondly, the multinomial processing tree model proposed by Hilbig, Erdfelder et al.
(2010) to measure RH-use has been well validated and shown to provide robustly unbiased
estimates of the probability with which the recognition cue is considered in isolation (Hilbig,
2010a). Herein, we implemented an extended measurement model for the RH and FH, named
r-s-model, which comprises a parameter (s) representing the probability of FH-use
unperturbed by the potential consideration of further knowledge or information.
On the aggregate level, when analyzing the data with the r-s-model, we obtained a
much lower estimated probability of true FH-use (23%), as compared to what was implied by
the adherence rate (63%) also reported in previous studies (Hertwig et al., 2008). Both a
deterministic and a probabilistic version of the FH failed to account for the data and the main
result of infrequent FH-use was robust when implementing different fluency thresholds, that
is, minimal differences in retrieval fluency for applying the FH.
To rule out that conclusions are limited to the aggregate (Cohen et al., 2008; Estes &
Maddox, 2005), we additionally determined the superior model at the level of individual
participants. We found that the data of less than a third of our sample was best explained by
the hypothesis that the FH is sometimes used. By contrast, the hypothesis implying that the
FH is never used was superior in accounting for choice behavior of most participants (75% of
our sample). This is particularly noteworthy, since the latter hypothesis, too, is deterministic
in nature. In essence, the current results most strongly favor the conclusion that fluency is
very rarely considered in isolation.
Before we turn to theoretical conclusions, the main methodological implications of our
modeling approach and findings should be sketched. Specifically, the results reported above
add to the growing body of literature which stresses (i) the importance of specifying and
testing heuristics – or, indeed, any theory – formally (e.g., Farrell & Lewandowsky, 2010;
Gigerenzer, 2009; Marewski et al., 2010) and (ii) the severe problems of considering
adherence rates as a measure for the use of judgment strategies (Hilbig, 2010b). At the same
time, these two desiderata are directly linked: Even if models are formally specified and
precise, their evaluation also affords precise measurement (Hilbig, 2010a). Optimally, there
should be a formal link between the model and observable data and an appropriate error
theory (Bröder & Schiffer, 2003). Simply looking at the adherence rate to a particular model
is inappropriate, as several – often contradictory – models will explain data to the same
degree (for similar arguments, see Roberts & Pashler, 2000). Single models should be
evaluated by means of testing critical hypotheses (Fiedler, 2010), that is, deriving predictions
which are specific for – and optimally unique to – the model in question. This approach also
involves constructing materials and paradigms to allow for conclusions about the model in
question (Bröder & Eichler, 2006; Glöckner & Betsch, 2008a). In testing competing models,
one should focus on competing hypotheses which discriminate between models (Glöckner &
Herbold, in press; Hilbig & Pohl, 2009) and/or consider process measures beyond choice data
(Glöckner, 2009; Jekel, Nicklisch, & Glöckner, 2010; E. J. Johnson, Schulte-Mecklenbeck, &
Willemsen, 2008).
On the theoretical level, our results are clear-cut. If the assertion that the FH is among
the ‘well-studied heuristics for which there is evidence that they are in the adaptive toolbox of
humans’ (Gigerenzer & Brighton, 2009, p. 130) refers, as we understand it, to reliance on
retrieval fluency in isolation, a more careful conclusion seems in order. Some fast-and-frugal
heuristics indeed describe behavior adequately at times (Hilbig, 2010b). The FH, however,
should not be counted among that number. Not only in absolute terms but especially
compared to the RH – which again accounted for a substantial proportion of choices, thus
replicating previous results – the FH performed rather dismally.
Clearly, this calls for an explanation. We propose that one viable account lies in the
way that the interplay of memory processes and judgment strategies is conceptualized in the
fast and frugal heuristics approach. More specifically, the RH and FH are at odds with the
literature on fluency and its role in recognition memory (Dougherty, Franco-Watkins, &
Thomas, 2008; but see also Gigerenzer, Hoffrage, & Goldstein, 2008). We discuss these
inconsistencies in what follows, aiming for an integration of our current findings with the
broader literature on recognition memory and its role in judgment and decision making.
One line of research that has emphasized the importance of fluency for at least two decades is the area of recognition memory (Jacoby, 1991; Whittlesea, 1993). Specifically, it
can be considered a well-established finding that fluency influences judgments of previous
encounter, that is, recognition judgments (Westerman, Miller, & Lloyd, 2003; Whittlesea,
1993). Stated briefly, fluency is assumed to be experienced as a heightened sense of
familiarity which, in turn, can be used for a recognition judgment in the absence of actual
recall (Jacoby & Dallas, 1981). So, in terms of dual-process theories of memory, this path
from fluency – via subjective familiarity – is typically considered the alternative route to
recognition as opposed to conscious recollection (Jacoby, 1991). In essence, fluency is
attributed to prior experience and thereby increases the likelihood of a positive recognition
judgment (Whittlesea & Leboe, 2003). As such, recognition and fluency are inherently
intertwined and would therefore exert their potential influence on judgments and decisions in
unison.
As introduced above, a strikingly different view is held in the fast and frugal heuristics
program which emphasizes a “toolbox” of separate and distinct judgment strategies which
come with their own special set of triggering conditions (Gigerenzer, 2004). Here, recognition
and fluency are treated separately, each forming the basic cue for a distinct judgment heuristic
(Schooler & Hertwig, 2005). Specifically, the recognition heuristic proposes to infer the
recognized of two objects to have the higher criterion value, given that exactly one object is
recognized (Goldstein & Gigerenzer, 2002) and independent of how or why this particular
object was (judged to be) recognized. The fluency heuristic, by contrast, is applied if and only
if both objects are recognized and thus conditional upon recognition. One argument put forth
to support this separation of the RH and FH has been that “[h]aving two distinct rules
improves the overall efficiency of the system because information is processed only as much
as is necessary to make a decision” (Schooler & Hertwig, 2005, p. 626)5.
However, theories of recognition memory assert that recognition and fluency actually
share what could be called a continuous familiarity variable as a common denominator
(Westerman et al., 2003; Whittlesea & Leboe, 2003). So, comparative judgments could
simply be performed based on this familiarity (Dougherty et al., 2008) without the need for
two separate heuristics (Shah & Oppenheimer, 2008). Similarly, one could assume that
fluency should impact how familiar objects are and, by extension, with which certainty they
are recognized. Weighted by this degree of certainty, recognition could then be considered as
a non-binary cue (Erdfelder, Küpper-Tetzel, & Mattern, in press). In any case, the prediction
would be that recognition is considered depending on the speed of recognition (as a proxy for
fluency). This is exactly what has been found empirically: Newell and Fernandez (2006)
showed that more speedily recognized objects were more likely to be chosen over
unrecognized ones than were slowly recognized objects. As such, fluency apparently influenced the
degree to which decision makers relied on the recognition cue (see also Hertwig et al., 2008).
To make these findings compatible with the RH, it has been proposed that reliance on
recognition depends on underlying memory activation (Marewski et al., 2010), thus giving up
the binary understanding of recognition implemented in the original RH. However, this
strongly undermines the necessity of an additional and separate FH mechanism (Erdfelder et
al., in press). In particular, given that decision makers rely on fluency (or, simply, the
underlying degree of familiarity) when making their recognition judgments, why should they
consider fluency as a cue in the choice task after formation of and conditional upon a positive
recognition judgment?
Note that this question also relates to a discussion concerning the superiority of the RH
over the FH in terms of ecological validity. Hertwig et al. (2008) explained this superiority
effect by referring to the range of the underlying memory activation of different objects. The
RH makes use of the entire range of possible activations (an unrecognized object may have no
activation at all and a recognized one may yield very high activation); the FH, by contrast,
hinges on a much smaller range since it operates conditional upon recognition and thus
necessitates at least some intermediate degree of activation. On average, differences between
objects in terms of underlying memory strengths must thus be smaller for the FH than for the
RH. While Hertwig et al. (2008) discussed this as an account for the RH's superiority in terms
of ecological validity, it also mirrors our conjecture concerning the need for the FH. If
judgments are based on underlying memory strengths – for instance in the form of familiarity
(Dougherty et al., 2008) or the degree of recognition certainty (Erdfelder et al., in press) –
reliance on fluency should not be limited to both options being recognized.
In sum, our findings nourish doubts about the adequacy of considering recognition
and fluency separately as two cues used in two distinct heuristics. As previous investigations
have shown, the degree to which decision makers rely on recognition is quite strongly
influenced by retrieval fluency (Hertwig et al., 2008; Newell & Fernandez, 2006). Here, we
are convinced, fluency comes into play. This is in line with the vast literature on fluency and
recognition memory that treats these cues as inherently intertwined. However, once the
recognition cue is weighted by the fluency of recognition, there is no need for an additional
FH-mechanism, conditional upon recognition. The current results support this view as the
hypothesis that retrieval fluency is relied on in isolation and conditional upon recognition was
rejected.
REFERENCES
Alter, A. L., & Oppenheimer, D. M. (2008). Effects of fluency on psychological distance and
mental construal (or why New York is a large city, but New York is a civilized
jungle). Psychological Science, 19, 161-167.
Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a
metacognitive nation. Personality and Social Psychology Review, 13, 219-235.
Batchelder, W. H., & Riefer, D. M. (1999). Theoretical and empirical review of multinomial
process tree modeling. Psychonomic Bulletin & Review, 6, 57-86.
Bröder, A., & Eichler, A. (2006). The use of recognition information and additional cues in
inferences from memory. Acta Psychologica, 121, 275-284.
Bröder, A., & Newell, B. R. (2008). Challenging some common beliefs: Empirical work
within the adaptive toolbox metaphor. Judgment and Decision Making, 3, 205-214.
Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attribute decision
making. Journal of Behavioral Decision Making, 16, 193-213.
Busemeyer, J. R., & Pleskac, T. J. (2009). Theoretical tools for understanding and aiding
dynamic decision making. Journal of Mathematical Psychology, 53, 126-138.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive
approach to decision making in an uncertain environment. Psychological Review, 100,
432-459.
Chechile, R. A. (2009). Pooling data versus averaging model fits for some prototypical
multinomial processing tree models. Journal of Mathematical Psychology, 53, 562-
576.
Cohen, A. L., Sanborn, A. N., & Shiffrin, R. M. (2008). Model evaluation using grouped or
individual data. Psychonomic Bulletin & Review, 15, 692-712.
Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility
of the theory of Probabilistic Mental Models and the Fast and Frugal Heuristics.
Psychological Review, 115, 199-213.
Dougherty, M. R., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory process
model for judgments of likelihood. Psychological Review, 106, 180-209.
Erdfelder, E., Auer, T.-S., Hilbig, B. E., Aßfalg, A., Moshagen, M., & Nadarevic, L. (2009).
Multinomial processing tree models: A review of the literature. Zeitschrift für
Psychologie - Journal of Psychology, 217, 108-124.
Erdfelder, E., Küpper-Tetzel, C. E., & Mattern, S. (in press). Threshold models of recognition
and the recognition heuristic. Judgment and Decision Making.
Estes, W. K. (1956). The problem of inference from curves based on group data.
Psychological Bulletin, 53, 134-140.
Estes, W. K., & Maddox, W. T. (2005). Risks of drawing inferences about cognitive
processes from model fits to individual versus average performance. Psychonomic
Bulletin & Review, 12, 403-408.
Farrell, S., & Lewandowsky, S. (2010). Computational models as aids to better reasoning in psychology. Current Directions in Psychological Science, 19, 329-335.
Fiedler, K. (2010). How to study cognitive decision algorithms: The case of the priority
heuristic. Judgment and Decision Making, 5, 21-32.
Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In D.J.
Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making
(pp. 62-88). Malden, MA: Blackwell Publishing.
Gigerenzer, G. (2009). Surrogates for theory. APS Observer, 22, 21-23.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better
inferences. Topics in Cognitive Science, 1, 107-143.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of
bounded rationality. Psychological Review, 103, 650-669.
Gigerenzer, G., Hoffrage, U., & Goldstein, D. G. (2008). Fast and frugal heuristics are
plausible models of cognition: Reply to Dougherty, Franco-Watkins, and Thomas
(2008). Psychological Review, 115, 230-237.
Glöckner, A. (2008). How evolution outwits bounded rationality: The efficient interaction of
automatic and deliberate processes in decision making and implications for
institutions. In C. Engel & W. Singer (Eds.), Better Than Conscious? Implications for
Performance and Institutional Analysis. Strüngmann Forum Report 1. (pp. 259-284).
Cambridge, MA: MIT Press.
Glöckner, A. (2009). Investigating intuitive and deliberate processes statistically: The
multiple-measure maximum likelihood strategy classification method. Judgment and
Decision Making, 4, 186–199.
Glöckner, A., & Betsch, T. (2008a). Do people make decisions under risk based on
ignorance? An empirical test of the priority heuristic against cumulative prospect
theory. Organizational Behavior and Human Decision Processes, 107, 75-95.
Glöckner, A., & Betsch, T. (2008b). Multiple-reason decision making based on automatic
processing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 34,
1055-1075.
Glöckner, A., & Betsch, T. (2010). Accounting for critical evidence while being precise and
avoiding the strategy selection problem in a parallel constraint satisfaction approach –
a reply to Marewski (2010). Journal of Behavioral Decision Making, 23, 468–472.
Glöckner, A., Betsch, T., & Schindler, N. (2010). Coherence shifts in probabilistic inference
tasks. Journal of Behavioral Decision Making, 23, 439-462.
Glöckner, A., & Bröder, A. (in press). Processing of recognition information and additional
cues: A model-based analysis of choice, confidence, and response time. Judgment and
Decision Making.
Glöckner, A., & Herbold, A.-K. (in press). An eye-tracking study on information processing
in risky decisions: Evidence for compensatory strategies based on automatic
processes. Journal of Behavioral Decision Making.
Glöckner, A., & Witteman, C. (2010). Beyond dual-process models: A categorization of
processes underlying intuitive judgment and decision making. Thinking & Reasoning,
16, 1-25.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition
heuristic. Psychological Review, 109, 75-90.
Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model
of how the mind exploits a by-product of information retrieval. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 34, 1191-1206.
Hilbig, B. E. (2008a). Individual differences in fast-and-frugal decision making: Neuroticism
and the recognition heuristic. Journal of Research in Personality, 42, 1641-1645.
Hilbig, B. E. (2008b). One-reason decision making in risky choice? A closer look at the
priority heuristic. Judgment and Decision Making, 3, 457-462.
Hilbig, B. E. (2010a). Precise models deserve precise measures: a methodological dissection.
Judgment and Decision Making, 5, 272-284.
Hilbig, B. E. (2010b). Reconsidering "evidence" for fast-and-frugal heuristics. Psychonomic
Bulletin & Review, 17, 923-930.
Hilbig, B. E., Erdfelder, E., & Pohl, R. F. (2010). One-reason decision-making unveiled: A
measurement model of the recognition heuristic. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 36, 123-134.
Hilbig, B. E., & Pohl, R. F. (2008). Recognizing users of the recognition heuristic.
Experimental Psychology, 55, 394-401.
Hilbig, B. E., & Pohl, R. F. (2009). Ignorance- versus evidence-based decision making: A
decision time analysis of the recognition heuristic. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 35, 1296-1305.
Hilbig, B. E., Pohl, R. F., & Bröder, A. (2009). Criterion knowledge: A moderator of using
the recognition heuristic? Journal of Behavioral Decision Making, 22, 510-522.
Hilbig, B. E., & Richter, T. (in press). Homo heuristicus outnumbered: Comment on
Gigerenzer and Brighton (2009). Topics in Cognitive Science.
Hilbig, B. E., Scholl, S. G., & Pohl, R. F. (2010). Think or blink – is the recognition heuristic
an "intuitive" strategy? Judgment and Decision Making, 5, 300-309.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from
intentional uses of memory. Journal of Memory and Language, 30, 513-541.
Jacoby, L. L., & Dallas, M. (1981). On the relationship between autobiographical memory
and perceptual learning. Journal of Experimental Psychology: General, 110, 306-340.
Jekel, M., Nicklisch, A., & Glöckner, A. (2010). Implementation of the Multiple-Measure
Maximum Likelihood strategy classification method in R: addendum to Glöckner
(2009) and practical guide for application. Judgment and Decision Making, 5, 54-63.
Johnson, E. J., Häubl, G., & Keinan, A. (2007). Aspects of endowment: A query theory of
value construction. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 33, 461-474.
Johnson, E. J., Schulte-Mecklenbeck, M., & Willemsen, M. C. (2008). Process models
deserve process data: Comment on Brandstätter, Gigerenzer, and Hertwig (2006).
Psychological Review, 115, 263-272.
Johnson, J. G., & Busemeyer, J. R. (2005). A dynamic, stochastic, computational model of
preference reversal phenomena. Psychological Review, 112, 841-861.
Lee, M. D., & Cummins, T. D. (2004). Evidence accumulation in decision making: Unifying
the "take the best" and the "rational" models. Psychonomic Bulletin & Review, 11,
343-352.
Marewski, J. N. (2010). On the theoretical precision and strategy selection problem of a
single-strategy approach: A comment on Glöckner, Betsch, and Schindler (2010).
Journal of Behavioral Decision Making, 23, 463-467.
Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2010).
From recognition to decisions: extending and testing recognition-based models for
multi-alternative inference. Psychonomic Bulletin & Review, 17, 287-309.
Moshagen, M. (2010). multiTree: A computer program for the analysis of multinomial
processing tree models. Behavior Research Methods, 42, 42-54.
Myung, I. J. (2000). The importance of complexity in model selection. Journal of
Mathematical Psychology, 44, 190-204.
Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the
inconsequentiality of further knowledge: Two critical tests of the recognition heuristic.
Journal of Behavioral Decision Making, 19, 333-346.
Newell, B. R., & Lee, M. D. (in press). The right tool for the job? Comparing an evidence
accumulation and a naïve strategy selection model of decision making. Journal of
Behavioral Decision Making.
Oppenheimer, D. M. (2003). Not so fast! (and not so frugal!): Rethinking the recognition
heuristic. Cognition, 90, B1-B9.
Oppenheimer, D. M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12, 237-
241.
Pachur, T., Bröder, A., & Marewski, J. N. (2008). The recognition heuristic in memory-based
inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision
Making, 21, 183-210.
Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral
Decision Making, 19, 251-271.
Raftery, A. E. (1995). Bayesian model selection in social research. Sociological Methodology,
25, 111-163.
Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth.
Consciousness and Cognition: An International Journal, 8, 338-342.
Reber, R., Schwarz, N., & Winkielman, P. (2004). Processing fluency and aesthetic pleasure:
Is beauty in the perceiver's processing experience? Personality and Social
Psychology Review, 8, 364-382.
Reber, R., Winkielman, P., & Schwarz, N. (1998). Effects of perceptual fluency on affective
judgments. Psychological Science, 9, 45-48.
Reyna, V. F. (2004). How people make decisions that involve risk: A dual-processes
approach. Current Directions in Psychological Science, 13, 60-66.
Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and
decision making. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 32, 150-162.
Rieskamp, J. (2008). The probabilistic nature of preferential choice. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 34, 1446-1465.
Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing.
Psychological Review, 107, 358-367.
Rouder, J. N., & Batchelder, W. H. (1998). Multinomial models for measuring storage and
retrieval processes in paired associate learning. In C. Dowling, F. Roberts, & P.
Theuns (Eds.), Recent progress in mathematical psychology (pp. 195-225). Mahwah,
NJ: Erlbaum.
Schmittmann, V. D., Dolan, C. V., Raijmakers, M. E. J., & Batchelder, W. H. (2010).
Parameter identification in multinomial processing tree models. Behavior Research
Methods, 42, 836-846.
Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological
Review, 112, 610-628.
Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction
framework. Psychological Bulletin, 134, 207-222.
Sidman, M. (1952). A note on functional relations obtained from group data. Psychological
Bulletin, 49, 263-269.
Snook, B., & Cullen, R. M. (2006). Recognizing National Hockey League greatness with an
ignorance-based heuristic. Canadian Journal of Experimental Psychology, 60, 33-43.
Unkelbach, C. (2007). Reversing the truth effect: Learning the interpretation of processing
fluency in judgments of truth. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 33, 219-230.
Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values.
Psychonomic Bulletin & Review, 14, 779-804.
Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of
Mathematical Psychology, 44, 92-107.
Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and decision making. Annual
Review of Psychology, 60, 53-85.
Weber, E. U., Johnson, E. J., Milch, K. F., Chang, H., Brodscholl, J. C., & Goldstein, D. G.
(2007). Asymmetric discounting in intertemporal choice: A query-theory account.
Psychological Science, 18, 516-523.
Westerman, D. L., Miller, J. K., & Lloyd, M. E. (2003). Change in perceptual form attenuates
the use of the fluency heuristic in recognition. Memory & Cognition, 31, 619-629.
Whittlesea, B. W. (1993). Illusions of familiarity. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 19, 1235-1253.
Whittlesea, B. W., & Leboe, J. P. (2003). Two fluency heuristics (and how to tell them apart).
Journal of Memory and Language, 49, 62-79.
APPENDIX A
Observation:
The model parameters r, a, b3, and g of the r-s-model with s = 0 are equal to the model
parameters r, a, b, and g of the r-model, respectively, if and only if the relation
b3 = p · b1 + (1-p) · b2 (A1)
holds for the parameters of the r-s-model.
Proof:
Note that for (a) recognition cases and (b) guessing cases the observation categories
and model equations of the r-model with parameters r, a, b, and g and the r-s-model with
parameters r, a, b3, and g, respectively, are equivalent. Hence, it is only necessary to analyze
what happens to the model equations of the r-s-model when the six observation categories of
knowledge cases to which the r-s-model refers are combined into the two observation
categories to which the r-model refers (i.e., ‘correct choice’ and ‘false choice’). Combining
the categories 1, 9, and 12 of the r-s-model (see Figure 1) into a single ‘correct-choice’
category essentially involves summation of the r-s-model equations for categories 1, 9, and
12, each one weighted with the probability of the tree from which it derives. If p and (1-p)
denote the probabilities of fluency-homogeneous and fluency-heterogeneous cases among all
knowledge cases, respectively, then we easily obtain
p(correct choice | knowledge case) (A2)
= p · b1 + (1-p) · s · c + (1-p) · (1-s) · b2 · c + (1-p) · (1-s) · b2 · (1-c)
= p · b1 + (1-p) · (s · c + (1-s) · b2)
For s = 0, Equation (A2) reduces to
p(correct choice | knowledge case and s = 0) (A3)
= p · b1 + (1-p) · b2.
Recall that the knowledge validity parameter b of the r-model is, by definition, equal
to p(correct choice | knowledge case). Hence, if we require the r-s-model parameters to satisfy
the constraint
b3 = p(correct choice | knowledge case and s = 0) = p · b1 + (1-p) · b2 (A4)
then the parameters r, a, b, and g of the r-model necessarily match the parameters r, a, b3, and
g of the r-s-model with s = 0, respectively. Conversely, whenever the parameters r, a, b, and g
of the r-model match the parameters r, a, b3, and g of the r-s-model with s = 0, then the r-s-
model parameters will satisfy the linear constraint (A1). This completes the proof.
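As an informal numerical check of the step from Equation (A2) to Equation (A3), the
following Python sketch evaluates the unsimplified tree expression for p(correct choice |
knowledge case) and confirms that it reduces to p · b1 + (1-p) · b2 when s = 0. The parameter
values are arbitrary illustrations, not the estimates reported in Table 1.

# Unsimplified knowledge-case probability of a correct choice, Equation (A2):
# p*b1 + (1-p)*s*c + (1-p)*(1-s)*b2*c + (1-p)*(1-s)*b2*(1-c)
def p_correct_knowledge(p, b1, b2, c, s):
    return (p * b1
            + (1 - p) * s * c
            + (1 - p) * (1 - s) * b2 * c
            + (1 - p) * (1 - s) * b2 * (1 - c))

# Arbitrary illustrative parameter values.
p, b1, b2, c = 0.30, 0.60, 0.70, 0.65

# Simplified form of (A2), and the s = 0 special case (A3).
assert abs(p_correct_knowledge(p, b1, b2, c, s=0.4)
           - (p * b1 + (1 - p) * (0.4 * c + 0.6 * b2))) < 1e-12
assert abs(p_correct_knowledge(p, b1, b2, c, s=0.0)
           - (p * b1 + (1 - p) * b2)) < 1e-12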
Computational aspects:
Standard computer programs for MPT models (for a brief review see Erdfelder et al.,
2009) allow for parameter fixations, equality constraints, and ordinal parameter constraints
only. Other types of parameter restrictions are not provided. Fortunately, however, the linear
constraint (A1) involves model parameters only. Hence, by replacing b3 with p · b1 + (1-p) · b2
in the model definition, an extended binary MPT model can easily be developed that includes
the constraint (A1). Readers interested in this extended model definition file (to be used in
combination with the multiTree software, Moshagen, 2010) should request a copy from the
authors.
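To illustrate the substitution described above, here is a minimal Python sketch in which b3
never appears as a free parameter: the recognition-case category probabilities are written with
p · b1 + (1-p) · b2 in place of b3. The equations assume the r-model structure for cases in
which only one object is recognized (cf. Hilbig, Erdfelder, & Pohl, 2010); they are intended as
an illustration of constraint (A1), not as a reproduction of the actual multiTree model file.

# Category probabilities for recognition cases (one object recognized),
# with the knowledge validity b3 replaced by the linear combination (A1).
def recognition_case_probs(r, a, p, b1, b2):
    b3 = p * b1 + (1 - p) * b2   # constraint (A1); b3 is no longer a free parameter
    return {
        "adherence, correct":     a * r + a * (1 - r) * b3,
        "adherence, false":       (1 - a) * r + (1 - a) * (1 - r) * (1 - b3),
        "non-adherence, false":   a * (1 - r) * (1 - b3),
        "non-adherence, correct": (1 - a) * (1 - r) * b3,
    }

# The four probabilities sum to 1 for any admissible parameter values;
# the values used here are the aggregate estimates from Table 1.
probs = recognition_case_probs(r=0.74, a=0.83, p=0.23, b1=0.59, b2=0.69)
assert abs(sum(probs.values()) - 1.0) < 1e-12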
APPENDIX B
Observation categories of the r-s-model and their total frequencies.

category #   r-s-model category meaning                                                    Total N
1            Fluency-homogeneous knowledge case, correct judgment                              430
2            Fluency-homogeneous knowledge case, false judgment                                298
3            Neither recognized, correct judgment                                            1,740
4            Neither recognized, false judgment                                              1,564
5            One recognized, adherence to RH, correct judgment                               4,242
6            One recognized, adherence to RH, false judgment                                   753
7            One recognized, non-adherence to RH, false judgment                               384
8            One recognized, non-adherence to RH, correct judgment                             168
9            Fluency-heterogeneous knowledge case, adherence to FH, correct judgment         1,115
10           Fluency-heterogeneous knowledge case, adherence to FH, false judgment             455
11           Fluency-heterogeneous knowledge case, non-adherence to FH, false judgment         361
12           Fluency-heterogeneous knowledge case, non-adherence to FH, correct judgment       502
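For readers who wish to tally their own data into these categories, the following Python sketch
shows one way a single paired-comparison trial could be assigned to the twelve categories. The
field names, and the rule that fluency homogeneity is defined by a threshold on the
recognition-latency difference (cf. the fluency threshold in Figure 2), are illustrative
assumptions rather than the original scoring code.

def categorize_trial(rec_a, rec_b, rt_a, rt_b, chose_a, a_is_larger, threshold=100):
    """Return the r-s-model category number (1-12) for one trial."""
    correct = (chose_a == a_is_larger)
    if not rec_a and not rec_b:                     # guessing cases
        return 3 if correct else 4
    if rec_a != rec_b:                              # recognition cases
        adhered_to_rh = (chose_a == rec_a)          # chose the recognized object
        if adhered_to_rh:
            return 5 if correct else 6
        return 8 if correct else 7
    # knowledge cases: both objects recognized
    if abs(rt_a - rt_b) <= threshold:               # fluency-homogeneous
        return 1 if correct else 2
    adhered_to_fh = (chose_a == (rt_a < rt_b))      # chose the faster-recognized object
    if adhered_to_fh:
        return 9 if correct else 10
    return 12 if correct else 11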
AUTHOR NOTE
Benjamin E. Hilbig, Psychology III, University of Mannheim, Germany and Max
Planck Institute for Research on Collective Goods, Bonn, Germany. Edgar Erdfelder,
Psychology III, University of Mannheim, Germany. Rüdiger F. Pohl, Psychology III,
University of Mannheim, Germany. Manuscript preparation was supported by grant ER
224/2-1 from the Deutsche Forschungsgemeinschaft. We thank Tina Tanz for assistance in
data collection.
Correspondence concerning this article should be addressed to Benjamin E. Hilbig,
Psychology III, University of Mannheim, D-68131 Mannheim, Germany. Email:
hilbig@psychologie.uni-mannheim.de.
TABLES
Table 1. Parameters of the r-s-model, their psychological meaning, and parameter estimates
(standard error of each estimate in parentheses) based on aggregated data (across all choices
and participants).
Parameter Psychological meaning Estimate (SE)
a recognition validity .83 (.01)
b1 knowledge validity, fluency-homogeneous knowledge cases .59 (.02)
b2 knowledge validity, fluency-heterogeneous knowledge cases .69 (.01)
b3 knowledge validity, given that only one object is recognized .67*
c fluency validity .61 (.01)
g correct guessing (if neither object is recognized) .53 (.01)
p proportion of fluency-homogeneous knowledge cases .23 (.01)
r RH-use (considering the recognition cue in isolation) .74 (.01)
s FH-use (considering retrieval fluency in isolation) .23 (.02)
Note. * This value is derived analytically from b3 = p · b1 + (1 – p) · b2 (see Appendix A) and
therefore has no standard error.
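With the rounded estimates from Table 1, the derived value of b3 can be reproduced directly;
a brief check in Python:

p, b1, b2 = 0.23, 0.59, 0.69
b3 = p * b1 + (1 - p) * b2   # 0.23 * 0.59 + 0.77 * 0.69 = 0.667
print(round(b3, 2))          # 0.67, matching the value reported in Table 1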
Table 2. Model variants and corresponding number of participants providing weak, positive,
or strong evidence.

                          Number of participants
Model variant             weak evidence    positive evidence    strong evidence    Sum (%)
I (s = 1)                             0                    1                  0     1 (2%)
II (s unconstrained)                  5                    6                  4    15 (23%)
III (s = 0)                          12                   37                  0    49 (75%)
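Table 2 classifies participants by the strength of evidence favoring each model variant. As a
hedged illustration only: if the weak/positive/strong labels follow the BIC-difference
conventions of Raftery (1995), the classification could be sketched as follows in Python; the
cut-offs and the helper functions are assumptions, not a description of the actual analysis code.

import math

def bic(log_likelihood, n_free_parameters, n_observations):
    # Bayesian Information Criterion; smaller values indicate a better
    # trade-off between goodness of fit and model complexity.
    return -2.0 * log_likelihood + n_free_parameters * math.log(n_observations)

def evidence_label(delta_bic):
    # Assumed mapping of the BIC difference between the best and the
    # second-best model variant to evidence categories (Raftery, 1995);
    # Raftery's additional "very strong" category (> 10) is collapsed
    # into "strong" here because Table 2 uses three labels.
    if delta_bic < 2:
        return "weak"
    if delta_bic < 6:
        return "positive"
    return "strong"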
FIGURES
Figure 1. Processing tree representation of the r-s-model. The parameters denote recognition
validity (a), fluency validity (c), knowledge validities (b1, b2, b3), the probability of a valid
guess (g), and the probabilities of using the FH (s) and the RH (r). Boxes displayed with
rounded corners signify latent states.
[Figure 2 omitted here: line plot with Probability (0.00-1.00) on the y-axis and Fluency
threshold (0-1,000) on the x-axis; plotted series: FH-use, fluency-heterogeneous cases, and
knowledge cases explained by the FH.]
Figure 2. Estimate of the s parameter (probability of FH-use, solid black) with error bars
indicating one standard error of the estimate, 1 – p (proportion of fluency-heterogeneous
knowledge cases, solid grey), and the product of both (proportion of knowledge cases
explained by the FH, dashed black), plotted against the fluency threshold implemented.
[Figure 3 omitted here: bar chart with Implied probability of FH-use (0.00-1.00) on the y-axis
and individual participants on the x-axis; bars: adherence rate and s parameter.]
Figure 3. Individual probability of using the FH as implied by the adherence rate (light bars)
versus the estimate of the s parameter (dark bars), ordered by the latter.
FOOTNOTES
1 Note that the idea of having to switch from one strategy to another is inherent in the
fast-and-frugal heuristics approach and has not gone without criticism. Some therefore consider
alternative ’single-tool’ approaches to be more appropriate (Glöckner, Betsch, & Schindler,
2010; Lee & Cummins, 2004; Newell & Lee, in press). However, this debate is clearly
unresolved (Glöckner & Betsch, 2010; Marewski, 2010).
2 Note that the linear parameter constraint (1) effectively eliminates parameter b3 from
the set of to-be-estimated parameters because it can be replaced by a linear combination of b1
and b2. However, to facilitate interpretation, we present both the r-s-model and the model-
based results including parameter b3.
3 One participant had to be excluded from the analyses because he or she had too few
fluency-heterogeneous knowledge cases (individual parameter p = .98). Note, however, that a
non-parametric bootstrap with 1,000 samples for the data of this participant (Moshagen, 2010)
revealed a mean s parameter estimate of .06 and a median of zero. As such, excluding this
participant is certainly not a disadvantage for the FH.
4 Such comparisons hinge on the assumption that one of the models under
consideration is, in fact, correct. However, making this assumption is common practice (Bröder
& Schiffer, 2003; Glöckner, 2009) and, in the current case, clearly reasonable, as the three
model variants reflect all possible hypotheses, viz., that the FH is always, sometimes, or never
used.
5 Note that such arguments necessitate certain assumptions about information
processing costs (Bröder & Newell, 2008). Also, they ignore the well-documented capacity
for “intuitive” information processing (Glöckner, 2008; Glöckner & Betsch, 2008b; Hilbig,
Scholl, & Pohl, 2010) and thus automatic information integration as assumed in several
models of judgments and decision making (for a recent overview see Glöckner & Witteman,
2010).