Figure 1 - uploaded by Timothy Pleskac
The ecological rationality of the recognition heuristic. In some domains, recognition of objects can be correlated with an unknown target variable (e.g., city population). This is because judges experience objects via mediators in the environment (e.g., newspapers), and the mediators reflect the target variable (e.g., more populous cities tend to be in the news more often). From "Models of Ecological Rationality: The Recognition Heuristic," by D. G. Goldstein and G. Gigerenzer, 2002, Psychological Review, 109, p. 78. Copyright 2002 by the American Psychological Association. Adapted with permission.
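The causal chain in this caption (target variable → mediator → recognition) lends itself to a small simulation. The Python sketch below is illustrative only: the population distribution, the mentions-per-population rate, and the exponential recognition function are assumptions, not taken from Goldstein and Gigerenzer (2002).

```python
import math
import random

random.seed(1)

# Illustrative environment: 50 cities with log-normally distributed populations.
cities = [{"pop": random.lognormvariate(11, 1)} for _ in range(50)]

for city in cities:
    # Mediator: newspaper mentions scale with population, plus noise.
    city["mentions"] = city["pop"] * random.uniform(0.5, 1.5) / 1000
    # Recognition: the more mentions, the likelier a judge recognizes the city.
    city["recognized"] = random.random() < 1 - math.exp(-city["mentions"] / 100)

# Recognition validity: among pairs where exactly one city is recognized,
# how often is the recognized city also the more populous one?
correct = total = 0
for i in range(len(cities)):
    for j in range(i + 1, len(cities)):
        a, b = cities[i], cities[j]
        if a["recognized"] != b["recognized"]:
            total += 1
            rec, unrec = (a, b) if a["recognized"] else (b, a)
            correct += rec["pop"] > unrec["pop"]

validity = correct / total
print(f"recognition validity across {total} discriminating pairs: {validity:.2f}")
```

Because the mediator tracks the target variable, the simulated recognition validity comes out well above chance, which is the sense in which relying on recognition is ecologically rational in such environments.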

Source publication
Article
Full-text available
The recognition heuristic uses a recognition decision to make an inference about an unknown variable in the world. Theories of recognition memory typically use a signal detection framework to predict this binary recognition decision. In this article, I integrate the recognition heuristic with signal detection theory to formally investigate how judg...
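The integration the abstract describes can be sketched with a minimal equal-variance signal detection model feeding a recognition-heuristic choice rule. The parameter values (d′, criterion) and function names below are illustrative assumptions, not the article's actual model:

```python
import random

random.seed(2)

D_PRIME = 1.5     # separation between old- and new-item familiarity means
CRITERION = 0.75  # familiarity above this yields a "recognized" judgment

def familiarity(is_old: bool) -> float:
    """Equal-variance SDT: old items are shifted up by d' on the familiarity axis."""
    return random.gauss(D_PRIME if is_old else 0.0, 1.0)

def recognized(is_old: bool) -> bool:
    """Binary recognition decision: compare familiarity against the criterion."""
    return familiarity(is_old) > CRITERION

# Hit and false-alarm rates implied by the binary recognition decision.
n = 10_000
hits = sum(recognized(True) for _ in range(n)) / n
fas = sum(recognized(False) for _ in range(n)) / n

# Recognition heuristic on a pair of objects: if exactly one is recognized,
# infer that the recognized object scores higher on the target variable.
def rh_choice(rec_a: bool, rec_b: bool) -> str:
    if rec_a and not rec_b:
        return "choose A"
    if rec_b and not rec_a:
        return "choose B"
    return "guess or use further knowledge"

print(f"hit rate ~ {hits:.2f}, false-alarm rate ~ {fas:.2f}")
```

On this sketch, shifting the criterion trades hits against false alarms, which in turn changes how often the heuristic can be applied and how accurate it is when applied.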

Similar publications

Article
Full-text available
Four experiments used signal detection analyses to assess recognition memory for lists of words consisting of differing numbers of exemplars from different semantic categories. The results showed that recognition memory performance, measured by d(a), (a) increased as category length (CL, the number of study-list items selected from the same semanti...
Article
Full-text available
We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to...
Article
Full-text available
The effects of aging on performance were examined in signal detection, letter discrimination, brightness discrimination, and recognition memory, with each subject tested on all four tasks. Ratcliff's (1978) diffusion model was fit to the data for each subject for each task, and it provided a good account of accuracy and the distributions of correct...
Article
Full-text available
The current study compared 3 models of recognition memory in their ability to generalize across yes/no and 2-alternative forced-choice (2AFC) testing. The unequal-variance signal-detection model assumes a continuous memory strength process. The dual-process signal-detection model adds a thresholdlike recollection process to a continuous familiarity...
Article
Full-text available
In a recognition memory test, subjects may be asked to decide whether a test item is old or new (item recognition) or to decide among alternative sources from which it might have been drawn for study (source recognition). Confidence-rating-based receiver operating characteristic (ROC) curves for these superficially similar tasks are quite different...

Citations

... This focus on cues has lent itself to applying (or as we argue, misapplying) a whole host of assumptions and methods from psychophysics and statistics to understand and study rationality. The methods used to highlight the idea of humans as intuitive statisticians include various approaches such as random sampling, signal detection, stimulus thresholds, lens model statistics, just-noticeable-differences, Neyman-Pearson statistics, representative design, and Bayesian inference (e.g., Dhami et al., 2004; Hogarth, 2005; Pleskac, 2007; Karelaia and Hogarth, 2008; Hertwig and Pleskac, 2010; Todd and Gigerenzer, 2012; Luan et al., 2014; Gershman et al., 2015; Gigerenzer and Marewski, 2015; Feldman, 2017; Rahnev and Denison, 2018; Szollosi and Newell, 2020). ...
... The ecological rationality literature has developed a growing statistical toolbox of heuristics. This statistical toolbox now includes tools such as random sampling, signal detection, stimulus thresholds, lens model statistics, just-noticeable-differences, Neyman-Pearson statistics, representative design, and Bayesian inference (e.g., Dhami et al., 2004; Hogarth, 2005; Pleskac, 2007; Karelaia and Hogarth, 2008; Hertwig and Pleskac, 2010; Todd and Gigerenzer, 2012; Luan et al., 2014; Pleskac and Hertwig, 2014; Gershman et al., 2015; Gigerenzer and Marewski, 2015; Feldman, 2017; Rahnev and Denison, 2018; Szollosi and Newell, 2020). And these statistical tools can be mapped directly onto various named heuristics (see Todd and Gigerenzer, 2012). ...
Article
Full-text available
In this paper we contrast bounded and ecological rationality with a proposed alternative, generative rationality. Ecological approaches to rationality build on the idea of humans as “intuitive statisticians” while we argue for a more generative conception of humans as “probing organisms.” We first highlight how ecological rationality’s focus on cues and statistics is problematic for two reasons: (a) the problem of cue salience, and (b) the problem of cue uncertainty. We highlight these problems by revisiting the statistical and cue-based logic that underlies ecological rationality, which originate from the misapplication of concepts in psychophysics (e.g., signal detection, just-noticeable-differences). We then work through the most popular experimental task in the ecological rationality literature—the city size task—to illustrate how psychophysical assumptions have informally been linked to ecological rationality. After highlighting these problems, we contrast ecological rationality with a proposed alternative, generative rationality. Generative rationality builds on biology—in contrast to ecological rationality’s focus on statistics. We argue that in uncertain environments cues are rarely given or available for statistical processing. Therefore we focus on the psychogenesis of awareness rather than psychophysics of cues. For any agent or organism, environments “teem” with indefinite cues, meanings and potential objects, the salience or relevance of which is scarcely obvious based on their statistical or physical properties. We focus on organism-specificity and the organism-directed probing that shapes awareness and perception. Cues in teeming environments are noticed when they serve as cues-for-something, requiring what might be called a “cue-to-clue” transformation. 
In this sense, awareness toward a cue or cues is actively “grown.” We thus argue that perception might more productively be seen as the presentation of cues and objects rather than their representation. This generative approach not only applies to relatively mundane organism (including human) interactions with their environments—as well as organism-object relationships and their embodied nature—but also has significant implications for understanding the emergence of novelty in economic settings. We conclude with a discussion of how our arguments link with—but modify—Herbert Simon’s popular “scissors” metaphor, as it applies to bounded rationality and its implications for decision making in uncertain, teeming environments.
... This condition says that the reason for the less-is-more effect is that intuition is more accurate than analysis. Katsikopoulos (2010) labelled this the "accurate-heuristics explanation" and showed that it does not work when memory is imperfect (Pleskac, 2007); additionally, Smithson (2010) showed that the explanation fails if α and β are not constant, even if memory were perfect. A subtler explanation is needed, and this is provided by Proposition 2.1. ...
Article
Full-text available
Firefighters, emergency paramedics, and airplane pilots are able to make correct judgments and choices in challenging situations of scarce information and time pressure. Experts often attribute such successes to intuition and report that they avoid analysis. Similarly, laypeople can effortlessly perform tasks that confuse machine algorithms. OR should ideally respect human intuition while supporting and improving it with analytical modelling. We utilise research on intuitive decision making from psychology to build a model of mixing intuition and analysis over a set of interrelated tasks, where the choice of intuition or analysis in one task affects the choice in other tasks. In this model, people may use any analytical method, such as multi-attribute utility, or a single-cue heuristic, such as availability or recognition. The article makes two contributions. First, we study the model and derive a necessary and sufficient condition for the optimality of using a positive proportion of intuition (i.e., for some tasks): Intuition is more frequently accurate than analysis to a larger extent than analysis is more frequently accurate than guessing. Second, we apply the model to synthetic data and also natural data from a forecasting competition for a Wimbledon tennis tournament and a King's Fund study on how patients choose a London hospital: The optimal proportion of intuition is estimated to range from 25% to 53%. The accuracy benefit of using the optimal mix over analysis alone is estimated between 3% and 27%. Such improvements would be impactful over large numbers of choices as in public health.
Preprint
In this paper we contrast bounded and ecological rationality with a proposed alternative, generative rationality. Ecological approaches to rationality build on the idea of humans as “intuitive statisticians” while we argue for a more generative conception of humans as “probing organisms.” We first highlight how ecological rationality’s focus on cues and statistics is problematic for two reasons: (a) the problem of cue salience, and (b) the problem of cue novelty in teeming environments. We highlight these problems by revisiting the statistical and cue-based logic that underlies ecological rationality, by discussing its origins in the field of psychophysics (e.g., signal detection, just-noticeable-differences). We work through the most popular experiment in the ecological rationality literature—the city-size task—to illustrate how psychophysical assumptions have been linked to ecological rationality. After highlighting these problems, we contrast ecological rationality with a proposed alternative, generative rationality. Generative rationality builds on biology, in contrast to ecological rationality’s focus on statistics. We argue that in uncertain environments cues are rarely given and available for statistical processing. Environments “teem” with indefinite cues, meanings and potential objects, the salience or relevance of which is scarcely obvious based on their statistical or physical properties. We focus on organism-specificity and organism-directed probing that shapes perception and judgment. Generative rationality departs from existing bounded and ecological approaches in that cue salience is given by top-down factors rather than the bottom-up, statistical or physical properties. A central premise of generative rationality is that cues in teeming environments are noticed or recognized when they serve as cues-for-something, requiring what might be called a “cue-to-clue” transformation. 
Awareness toward relevant cues needs to be actively cultivated or “grown.” Thus we argue that perception might more productively be seen as the presentation of cues and objects rather than their representation. The generative approach not only applies to seemingly mundane organism (including human) interactions with their environments—as well as organism-object relationships and their embodied nature—but also has significant implications for understanding the emergence of novelty in economic and other uncertain settings. We conclude with a discussion of how our arguments link with—but modify—Herbert Simon’s popular “scissor” metaphor, as it applies to bounded rationality and its implications for decision making in uncertain, teeming environments.
... The ecological rationality literature has developed a growing statistical toolbox of heuristics. This statistical toolbox now includes tools such as random sampling, signal detection, stimulus thresholds, lens model statistics, just-noticeable-differences, Neyman-Pearson statistics, representative design, and Bayesian inference (e.g., Dhami et al., 2004; Feldman, 2016; Gershman et al., 2015; Gigerenzer and Marewski, 2015; Hertwig and Pleskac, 2010; Hogarth, 2005; Karelaia and Hogarth, 2008; Luan et al., 2014; Rahnev and Denison, 2018; Pleskac, 2007; Pleskac and Hertwig, 2014; Szollosi and Newell, 2019; Todd et al., 2012). And these statistical tools can be mapped directly onto various named heuristics (see Gigerenzer and Todd, 2012). ...
... According to signal detection models, any given stimulus can be thought of as positioned along a continuum of familiarity or strength of evidence, where previously encountered items are positioned higher than lures or distractors not previously encountered (Green and Swets 1966; Palmer and Brewer 2012; Pleskac 2007; Snodgrass, Volvovitz, and Walfish 1972; Wixted 2007). These models suggest that people base their identification judgments on some internal criterion, or threshold, for the strength of evidence they require (i.e., the degree of perceived match). ...
Article
Full-text available
Consumers often try to visually identify a previously encountered product among a sequence of similar items, guided only by their memory and a few general search terms. What determines their success at correctly identifying the target product in such “product lineups”? The current research finds that the longer consumers search sequentially, the more conservative and—ironically—inaccurate judges they become. Consequently, the more consumers search, the more likely they are to erroneously reject the correct target when it finally appears in the lineup. This happens because each time consumers evaluate a similar item in the lineup, and determine that it is not the option for which they have been looking, they draw an implicit inference that the correct target should feel more familiar than the similar items rejected up to that point. This causes the subjective feeling of familiarity consumers expect to experience with the true target to progressively escalate, making them more conservative but also less accurate judges. The findings have practical implications for consumers and marketers, and make theoretical contributions to research on inference-making, online search, and product recognition.
... Heuristics exploit existing cognitive capacities, such as memory. The recognition heuristic, for instance, has been implemented in the ACT-R model of memory (Schooler & Hertwig, 2005) and, alternatively, in a signal detection model of recognition memory (Pleskac, 2007). ...
Article
Full-text available
Unlike behaviorism, cognitive psychology relies on mental concepts to explain behavior. Yet mental processes are not directly observable and multiple explanations are possible, which poses a challenge for finding a useful framework. In this article, I distinguish three new frameworks for explanations that emerged after the cognitive revolution. The first is called tools‐to‐theories: Psychologists' new tools for data analysis, such as computers and statistics, are turned into theories of mind. The second proposes as‐if theories: Expected utility theory and Bayesian statistics are turned into theories of mind, describing an optimal solution of a problem but not its psychological process. The third studies the adaptive toolbox (formal models of heuristics) that describes mental processes in situations of uncertainty where an optimal solution is unknown. Depending on which framework researchers choose, they will model behavior in either situations of risk or of uncertainty, and construct models of cognitive processes or not. The frameworks also determine what questions are asked and what kind of data are generated. What all three frameworks have in common, however, is a clear preference for formal models rather than explanations by general dichotomies or mere verbal concepts. The frameworks have considerable potential to inform each other and to generate points of integration.
... The recognition heuristic operates on input from memory (recognition), and rather than making auxiliary assumptions about memory, the memory processes on which this (and other memory-based decision mechanisms) operate ought to be modeled. A number of researchers have pointed out that need and/or striven to meet it (Castela & Erdfelder, 2017; Dougherty et al., 1999; Dougherty et al., 2008; Heck & Erdfelder, 2017; Marewski & Schooler, 2011; Pleskac, 2007; Schooler & Hertwig, 2005; Tomlinson et al., 2011). At the same time, rather than loosely motivating model predictions from a posited "adaptive toolbox rhetoric" and claims about "flagship heuristic[s]," I would find it theoretically and methodologically more convincing to derive predictions from cognitive theory, and here, one can envision several different, cognitively grounded models that will make different response time predictions and that can be tested competitively (e.g., see Heck & Erdfelder, 2017; or see Marewski & Mehlhorn, 2011, for 39 different models). ...
Article
Organisms must be capable of adapting to environmental task demands. Which cognitive processes best model the ways in which adaptation is achieved? People can behave adaptively, so many frameworks assume, because they can draw from a repertoire of decision strategies, with each strategy particularly fitting to certain environmental demands. In contrast to that multi-mechanism assumption, competing approaches posit a single decision mechanism. The juxtaposition of such single-mechanism and multi-mechanism approaches has fuelled not only much theory-building, empirical research, and methodological developments, but also many controversies. This special issue on “Strategy Selection: A Theoretical and Methodological Challenge” sheds a spotlight on those developments. The contribution of this introductory article is twofold. First, we offer a documentation of the controversy, including an outline of competing approaches. Second, this special issue and this introductory article represent adversarial collaborations among the three of us: we have modeled adaptive decision making in different ways in the past. Together, we now work on resolving the controversy and point to five guiding principles that might help to improve our models for predicting adaptive behavior. Copyright © 2018 John Wiley & Sons, Ltd.
... However, one key concept of the RH has often been neglected: recognition. While literally at the core of the heuristic, only a modest amount of research has focused on understanding the role of recognition in the use of the RH (e.g., Erdfelder, Küpper-Tetzel, & Mattern, 2011; Pachur & Hertwig, 2006; Pleskac, 2007; Castela, Kellen, Erdfelder, & Hilbig, 2014; Castela & Erdfelder, 2017). Notably, Erdfelder et al. proposed a framework that extends the RH by accommodating the role of recognition memory, the memory state heuristic (MSH). ...
Article
Full-text available
According to the recognition heuristic (RH), for decision domains where recognition is a valid predictor of a choice criterion, recognition alone is used to make inferences whenever one object is recognized and the other is not, irrespective of further knowledge. Erdfelder, Küpper-Tetzel, and Mattern (2011) questioned whether the recognition judgment itself affects decisions or rather the memory strength underlying it. Specifically, they proposed to extend the RH to the memory state heuristic (MSH), which assumes a third memory state of uncertainty in addition to recognition certainty and rejection certainty. While the MSH already gathered significant support, one of its basic and more counterintuitive predictions has not been tested so far: In guessing pairs (none of the objects recognized), the object more slowly judged as unrecognized should be preferred, since it is more likely to be in a higher memory state. In this paper, we test this prediction along with other recognition latency predictions of the MSH, thereby adding to the body of research supporting the MSH. © 2017, Society for Judgment and Decision making. All rights reserved.
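The MSH's three-state logic described in this abstract can be sketched by placing two thresholds on a familiarity continuum. The threshold values, names, and ordering rule below are illustrative assumptions, not taken from Erdfelder, Küpper-Tetzel, and Mattern (2011):

```python
def memory_state(familiarity: float, low: float = -0.5, high: float = 1.0) -> str:
    """Map a familiarity value to one of the MSH's three memory states.

    Two thresholds partition the continuum: above `high` is recognition
    certainty, below `low` is rejection certainty, and the region in
    between is the uncertainty state. Threshold values are illustrative.
    """
    if familiarity > high:
        return "recognition certainty"
    if familiarity < low:
        return "rejection certainty"
    return "uncertainty"

# MSH choice rule for a pair: prefer the object in the higher memory state.
ORDER = {"rejection certainty": 0, "uncertainty": 1, "recognition certainty": 2}

def msh_choice(fam_a: float, fam_b: float) -> str:
    state_a, state_b = memory_state(fam_a), memory_state(fam_b)
    if ORDER[state_a] > ORDER[state_b]:
        return "choose A"
    if ORDER[state_b] > ORDER[state_a]:
        return "choose B"
    return "guess"
```

On this reading, the guessing-pair prediction follows naturally: an object judged "unrecognized" only slowly is more likely to lie near the lower threshold, and hence in the uncertainty state, than one rejected quickly, so it should be preferred when neither object is recognized.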