Article

Bayesian Strategy Assessment in Multi-attribute Decision Making


Abstract

Behavioral decision research on multi-attribute decision making is plagued by the problem of drawing inferences about cognitive strategies from behavioral data. This bridging problem has been tackled by a range of methodological approaches, namely Structural Modeling (SM), Process Tracing (PT), and comparative model fitting. Whereas SM and PT have been criticized for a number of reasons, the comparative fitting approach has some theoretical advantages as long as the formal relation between theories and data is specified. A Bayesian method is developed that is able to assess whether an empirical data vector was most likely generated by a ‘Take The Best’ heuristic (Gigerenzer et al., 1991), by an equal weight rule, or by a compensatory strategy. Equations are derived for the two- and three-alternative cases, respectively, and a simulation study supports the method’s validity. The classification also showed convergent validity with Process Tracing measures in an experiment. Potential extensions of the general approach to other applications in behavioral decision research are discussed. Copyright © 2003 John Wiley & Sons, Ltd.
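To make the core of this classification logic concrete, the following minimal sketch (written for this summary; it is not the authors' original code and does not reproduce their closed-form two- and three-alternative equations) classifies a vector of binary choices by comparing marginal likelihoods under each strategy's deterministic predictions, assuming a constant application error eps with a uniform prior on [0, 0.5]. The strategy names, item-type predictions, and equal strategy priors are illustrative assumptions.

```python
# A minimal sketch of Bayesian strategy classification in the spirit of
# Bröder & Schiffer (2003): each candidate strategy predicts one option per
# item type; observed choices follow the prediction except for a constant
# application error eps, integrated out under a uniform prior on [0, 0.5].
import numpy as np

def marginal_likelihood(matches: np.ndarray, n_grid: int = 1000) -> float:
    """Average eps^k * (1-eps)^(n-k) over a grid on [0, 0.5] (uniform prior)."""
    n = matches.size
    k = int(n - matches.sum())        # choices deviating from the prediction
    eps = np.linspace(1e-6, 0.5, n_grid)
    like = eps**k * (1.0 - eps)**(n - k)
    return float(like.mean())         # grid approximation of the integral

def classify(choices: np.ndarray, predictions: dict) -> dict:
    """Posterior probability of each strategy, assuming equal strategy priors."""
    ml = {s: marginal_likelihood(choices == pred) for s, pred in predictions.items()}
    total = sum(ml.values())
    return {s: v / total for s, v in ml.items()}

# Hypothetical example: 8 item types, options coded 0/1.
predictions = {
    "TTB":  np.array([1, 1, 0, 1, 0, 0, 1, 0]),   # follow the best discriminating cue
    "EQW":  np.array([1, 0, 0, 1, 1, 0, 1, 0]),   # count positive cues per option
    "WADD": np.array([1, 0, 1, 1, 1, 0, 0, 0]),   # validity-weighted sum
}
observed = np.array([1, 1, 0, 1, 0, 0, 1, 1])
print(classify(observed, predictions))            # posterior mass favors TTB here
```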


... However, even under such conditions, choice data will often be inconclusive, since single-mechanism and multi-strategy accounts can perfectly mimic each other in terms of choice predictions (Hilbig & Pohl, 2009). Only if the predictions of competing approaches are unconfounded can any insight be gained from choice data (Bröder & Schiffer, 2003a; Hilbig, 2010; Moshagen & Hilbig, 2011). To overcome these limitations inherent in considering choice data alone, the following empirical investigations take into account multiple dependent measures in a simultaneous maximum likelihood estimation including choices, response time, and confidence ratings (Glöckner, 2009; Jekel, Nicklisch, & Glöckner, 2010), further flanked by information search data and tests of cross-prediction. ...
... Note: A and B represent the choice options in each type of decision task, and cues are sorted in order of decreasing validity. The MM-ML has been shown to be an unbiased method for identifying individual decision strategies and is an extension of the choice-based strategy classification method proposed by Bröder and Schiffer (2003a). To sketch the advantages, MM-ML is (a) more efficient than the previously used method, (b) unbiased, and (c) able to reliably differentiate between strategies that make the same choice predictions, given that effects on decision times and confidence are large (d > 1). ...
... In this case, SSL is identical to the application of TTB or WADD, respectively, with a strategy-application error e (Bröder and Schiffer, 2003a). In our analysis we did not penalize SSL for the superfluous parameters and used only one free parameter for calculating BICs in these conditions. ...
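For readers unfamiliar with the BIC computation mentioned here, the following sketch shows how a deterministic strategy with a single strategy-application error e can be scored: the maximum-likelihood estimate is e_hat = k/n, where k of the n choices deviate from the strategy's predictions, and exactly one free parameter enters the penalty term. Variable names and the example counts are illustrative assumptions, not taken from the cited analysis.

```python
# A hedged sketch of a BIC computation for a deterministic strategy that is
# executed with a single constant strategy-application error e.
import numpy as np

def bic_one_error_param(k: int, n: int) -> float:
    """BIC with e estimated by maximum likelihood and one free parameter."""
    e_hat = min(max(k / n, 1e-9), 1 - 1e-9)       # clip to avoid log(0)
    log_lik = k * np.log(e_hat) + (n - k) * np.log(1 - e_hat)
    return -2 * log_lik + 1 * np.log(n)           # penalty: one parameter (e)

print(bic_one_error_param(k=2, n=24))             # hypothetical: 2 errors in 24 trials
```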
Article
Recent studies have proposed the use of “fast and frugal” strategies as viable alternatives to support decision processes in cases where time or other operational constraints preclude the application of standard decision-analytic methods. While a growing body of evidence shows that such procedures can be highly accurate, limited research has evaluated how well decision-makers can execute the prescriptive recommendations of aids based on such strategies in practice. Drawing on the behavioural, neuropsychological and decision-analytic literatures, we propose that an alignment between individual, model and task features will influence the effectiveness with which decision-makers can execute strategies that draw on prescriptive psychological heuristics – “fast and frugal” or otherwise. Our findings suggest that strategy execution is highly sensitive to task characteristics; however, the effects of the number of alternatives and attributes on individuals’ ability to deploy a given strategy differ in magnitude and direction depending on which decision strategy is prescribed. A more compensatory decision style positively affected overall task performance. Subjects’ ability to regulate inhibitory control was found to positively affect non-compensatory strategy execution, while having no discernible bearing on comparable compensatory tasks. Our findings reinforce that synergies between individual, model and task features, rather than any single aspect of the prescriptive model, are instrumental in driving task performance in aided MCDM contexts. We discuss these findings in light of calls from OR scholars for the development of decision aids that draw on prescriptive “fast and frugal” principles.
... When it comes to knowledge about the structure of the environment, a crucial input for integrative and lexicographic strategies, respectively, simulation studies directly endowed the models with the required information (e.g., learning optimal cue weights or orders from data) and did not specify the exact learning process (e.g., Gigerenzer & Goldstein, 1996; Hogarth & Karelaia, 2005a, 2007; Şimşek & Buckmann, 2015). In a similar vein, most empirical studies explicitly provided information about cue weights or order (e.g., Bröder, 2000; Bröder & Schiffer, 2003a; Mata, Schooler, & Rieskamp, 2007; Newell, Weston, & Shanks, 2003; Rieskamp, 2006; Rieskamp & Hoffrage, 2008; Rieskamp & Otto, 2006). There are only a few studies that did not explicitly provide information about the cue weights or order. In these studies participants had to learn cue weights or order through either (i) feedback about whether their choices were correct or not (Lee & Cummins, 2004; Newell, Rakow, Weston, & Shanks, 2004; Pachur & Olsson, 2012; Rakow, Newell, Fayers, & Hersby, 2005), or (ii) feedback about both the choice correctness and the criterion values of the chosen option (Bröder & Schiffer, 2006a; Rakow, Hinvest, Jackson, & Palmer, 2004). ...
... Further, Kelley and Busemeyer (2008) and Speekenbrink and Shanks (2010) examined how people learn functions when they are presented with a single stimulus per trial and have access to multiple continuous-valued cues that they can use to predict a continuous criterion. In these tasks, the participants observed the criterion value after each prediction, not unlike choice experiments where participants receive feedback about the criterion value of the chosen option (as in, e.g., Bröder & Schiffer, 2003a, 2006a; Hogarth & Karelaia, 2007). In our study, the LMS model learns cue weights based on observed cue and criterion values, and cue order can be obtained as a byproduct of learning cue weights. ...
... Here we built on previous theoretical and empirical work on the LMS model. Following function learning studies that examined the mapping between multiple cues and a continuous response (Kelley & Busemeyer, 2008; Speekenbrink & Shanks, 2010), participants in our experiment received the criterion value of the chosen option as feedback in a choice task (same as in Bröder & Schiffer, 2003a, 2006a; Hogarth & Karelaia, 2007; Rakow et al., 2004). The choice task was followed by an estimation task in which participants predicted the criterion value of new options that they had not seen during the choice task and did not receive feedback after their predictions. ...
Article
Full-text available
Choosing between options characterized by multiple cues can be a daunting task. People may integrate all information at hand or just use lexicographic strategies that ignore most of it. Notably, integrative strategies require knowing exact cue weights, whereas lexicographic heuristics can operate by merely knowing the importance order of cues. Here we study how using integrative or lexicographic strategies interacts with learning about cues. In our choice-learning-estimation paradigm people first make choices, learning about cues from the experienced qualities of chosen options, and then estimate qualities of new options. We developed delta-elimination (DE), a new lexicographic strategy that generalizes previous heuristics to any type of environment, and compared it to the integrative weighted-additive (WADD) strategy. Our results show that participants learned cue weights, regardless of whether the DE strategy or the WADD strategy described their choices the best. Still, there was an interaction between the adopted strategy and the cue weight learning process: the DE users learned cue weights slower than the WADD users. This work advances the study of lexicographic choice strategies, both empirically and theoretically, and deepens our understanding of strategy selection, in particular the interaction between the strategy used and learning the structure of the environment. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
... use to make their choices? To shed more light on this issue, we assessed choice strategies by applying an outcome-based method for classification (Bröder & Schiffer, 2003). ...
... Our classification method followed a technique suggested by Bröder and Schiffer (2003). For each individual, we determined the likelihood of the observed choice pattern for the strategies under consideration. ...
... Adding chance level to this value (0.33 + 0.20 = 0.53) equals the rate of selecting the HVI response category in Table 3 (2.65 / 5 = 0.53). (Bröder & Schiffer, 2003; Lang & Betsch, 2018). Inspection of the results in Table 4 reveals that the strategy use is dependent on the age of the participant, χ²(8) = 58.89, ...
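The constant-error likelihood underlying this outcome-based classification can be stated compactly; the notation below is chosen here for illustration and follows the usual formulation:

```latex
% Likelihood of an observed choice vector under strategy S with a constant
% application error \epsilon, where k of the n choices deviate from the
% strategy's deterministic predictions:
\[
  L(\mathrm{data} \mid S, \epsilon) \;=\; \epsilon^{k}\,(1-\epsilon)^{\,n-k},
  \qquad \hat{\epsilon} = k/n .
\]
```

The strategy with the highest (maximized or prior-weighted) likelihood is then taken as the best account of the individual's choice pattern.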
Article
When adapting to a risky world, decision makers must be capable of grasping the probabilistic nature of the environment. Developmental research on the ontogeny of the ability to use probabilities thus far has used different paradigms and revealed inconsistent findings. Merging two paradigms (choices based on probabilistic inferences; trust-in-informants decisions), we tested whether procedural differences are responsible for the inconsistent findings. 6-, 9-, and 22-year-olds (N = 108) learned the relative accuracy of three informants in a labelling task commonly used in trust-in-informants research. Informants differed with regard to their validity (rates of being correct = 0.17; 0.34; 0.83). Subsequently, participants were presented with an information board typically used in research on probabilistic-inference choices. Prior to choosing between two options, they were allowed to inspect informants’ predictions regarding choice outcomes. After choices, participants indicated which informant they would trust in other domains (e.g., biology, arts). A large contrast effect between trust decisions and choices indicates that the utilization of probabilities develops more slowly in probabilistic-inference choices than in trust decisions: 6-year-olds and a portion of 9-year-olds did not rely on the predictions of the high validity informant in their choices even though they trusted the informant in other domains. In adults, that pattern was reversed. We discuss our findings while considering the possibility that structural differences of the paradigms may impose different constraints on cognitive processing.
... These decision strategies describe both which information is used and how this information is then integrated to arrive at the decision (e.g., Gigerenzer, 2002; Glöckner & Betsch, 2008). These strategies are fit to the observed choices and then compared based on the goodness of fit (e.g., Bröder & Schiffer, 2003; Hilbig & Moshagen, 2014; Lee, 2016). However, this method assumes that the correct strategy is among the ones tested and therefore, if applied without further tests of the dominant strategy, can be seen as a form of confirmatory testing (Moshagen & Hilbig, 2011). ...
... To test how well we can retrieve a p-frame from simulated data, we used the same cue patterns as Bröder and Schiffer (2003) and Hilbig and Moshagen (2014, see Table 2). However, to also test for a possible side bias, each cue pattern was used twice, once in the original form and once in mirrored form. ...
... Recovery rates were only slightly influenced by the choice of the prior parameter. While the recovery rates are somewhat worse than those of methods used for strategy classification (e.g., Bröder & Schiffer, 2003; Hilbig & Moshagen, 2014), the number of possible classifications is also much larger. For example, Bröder and Schiffer (2003) used six strategies in their recovery simulation to achieve a recovery rate of at least 83 % (chance level: 16 %), whereas the 75 % recovery rate here was achieved using 20 different p-frames (chance level: 5 %). ...
Preprint
Full-text available
In research on decision making, experiments are often analyzed in terms of decision strategies. These decision strategies define both which information is used and how it is used. However, it is often desirable to identify the used information without any further assumptions about how it is used. We provide a mathematical framework that allows analyzing which information is used by identifying consistent patterns in the choice probabilities. This framework makes it possible to generate the most general model consistent with an information usage hypothesis and then to test this model against others. We test our approach in a recovery simulation to show that the used information can be reliably identified (AUC ≥ .90). In addition, to further verify correctness, we compare our approach with other approaches based on strategy fitting and show that both produce similar results.
... deterministic choice patterns across different contexts (e.g., different types of stimuli, items, conditions, measurement occasions, or pre-existing groups). For instance, a theory might provide a specific response pattern such as "participants prefer Option A over B in each of five choice scenarios" (Bröder & Schiffer, 2003). Often, however, theories predict more than one response pattern. ...
... In multinomial models, each predicted choice pattern can be represented by a vector of probabilities of either one (an option is deterministically chosen) or zero (an option is not chosen; Bröder & Schiffer, 2003). Figure 1 illustrates this for two independent binomial probabilities θ = (θ1, θ2) of preferring Option A over B in a control and an experimental condition, respectively. ...
... For instance, a model for D = 3 binomial probabilities must have three free parameters, as represented by a 3-dimensional polytope (cf. Figure 2). However, theories often predict that choice probabilities are identical across different item types, which leads to equality constraints of the form θij = θkl (e.g., Bröder & Schiffer, 2003). Formally, any set of linear equality constraints can be defined via a matrix C and a vector d, similar to the Ab-representation of inequalities (i.e., ...
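As a concrete illustration of these two constraint types (an assumed toy example, not taken from the preprint), the snippet below encodes one inequality in an Ab-representation and one equality in a matching Cd-representation for D = 3 binomial probabilities, and checks whether a candidate θ satisfies them.

```python
# Toy Ab- and Cd-representations of linear constraints on a probability
# vector theta = (theta_1, theta_2, theta_3); the specific theory encoded
# here is an illustrative assumption.
import numpy as np

# Hypothetical theory: theta_1 <= theta_2, and theta_2 = theta_3.
A = np.array([[1.0, -1.0, 0.0]])   # A @ theta <= b  encodes theta_1 - theta_2 <= 0
b = np.array([0.0])
C = np.array([[0.0, 1.0, -1.0]])   # C @ theta  = d  encodes theta_2 = theta_3
d = np.array([0.0])

def satisfies(theta: np.ndarray, tol: float = 1e-9) -> bool:
    """Check both the inequality and the equality constraints."""
    return bool(np.all(A @ theta <= b + tol) and np.all(np.abs(C @ theta - d) <= tol))

print(satisfies(np.array([0.2, 0.7, 0.7])))   # True
print(satisfies(np.array([0.9, 0.7, 0.7])))   # False: violates theta_1 <= theta_2
```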
Preprint
Full-text available
Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: (1) as the solution set to a system of linear inequalities, and (2) as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient C++ implementation of these techniques.
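The encompassing Bayes factor method summarized here has a simple Monte Carlo core: sample θ from the prior and from the posterior of the unconstrained (encompassing) model, then take the ratio of the proportions of samples that satisfy the constraints. The sketch below illustrates this for two binomial probabilities with an order constraint; the data, priors, and constraint are illustrative assumptions, and the multinomineq R package provides the full implementation.

```python
# Monte Carlo sketch of the encompassing Bayes factor: BF(constrained vs.
# encompassing) ~ P(constraint | posterior) / P(constraint | prior).
import numpy as np

rng = np.random.default_rng(1)

k = np.array([27, 14])            # hypothetical successes in two conditions
n = np.array([40, 40])            # hypothetical trials per condition

# Encompassing model: independent Beta(1, 1) priors on both probabilities;
# the Beta posterior is conjugate, so both can be sampled directly.
prior = rng.beta(1, 1, size=(100_000, 2))
posterior = rng.beta(1 + k, 1 + n - k, size=(100_000, 2))

constraint = lambda th: th[:, 0] >= th[:, 1]     # theory: theta_1 >= theta_2
bf_ce = constraint(posterior).mean() / constraint(prior).mean()
print(f"BF (constrained vs. encompassing) ~ {bf_ce:.2f}")
```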
... A maximum likelihood analysis of the individual choice patterns (Bröder & Schiffer, 2003; Wasserman, 2000) and an additional analysis of decision time predictions (cf. Bergert & Nosofsky, 2007) were used to identify choice strategies. In experiments with three and six cues (see Glöckner, 2007, for an overview), choice patterns suggested that the majority of participants used a weighted compensatory rule to integrate all cue values instead of a fast-and-frugal heuristic (Gigerenzer et al., 1999) such as Take the Best, Equal Weight or Random Choice. ...
... The patterns of information search actually used by individuals map onto a number of decision strategies described in the literature, such as LEX and WADD. There is also evidence indicating that choices correspond with distinct types of strategies (for classification methods based on a joint consideration of patterns of choices and/or process measures, see Bröder & Schiffer, 2003;Glöckner, 2006). Altogether, these findings seem to provide ample support for the notion that individuals apply different decision rules. ...
Article
Full-text available
We claim that understanding human decisions requires that both automatic and deliberate processes be considered. First, we sketch the qualitative differences between two hypothetical processing systems, an automatic and a deliberate system. Second, we show the potential that connectionism offers for modeling processes of decision making and discuss some empirical evidence. Specifically, we posit that the integration of information and the application of a selection rule are governed by the automatic system. The deliberate system is assumed to be responsible for information search, inferences and the modification of the network that the automatic processes act on. Third, we critically evaluate the multiple-strategy approach to decision making. We introduce the basic assumption of an integrative approach stating that individuals apply an all-purpose rule for decisions but use different strategies for information search. Fourth, we develop a connectionist framework that explains the interaction between automatic and deliberate processes and is able to account for choices both at the option and at the strategy level.
... They were compensated either with credit points or 10 Swiss francs; additionally, they could take part in a raffle that paid out 100 Swiss francs to each of five randomly drawn participants. For an alternative distinction of problem types that focuses on whether the different strategies predict the same or opposing decisions, see, for instance, Bröder and Schiffer (2003a), Hilbig and Moshagen (2014), Lee (2016), and Heck et al. (2017). There is evidence that the larger the difference in evidence between alternatives, the faster responses will be (e.g., Moyer & Landauer, 1967). ...
... Traditionally, in models of strategies for multi-attribute decision making, the probabilistic nature of people's choices is taken into account by assuming a constant execution error (e.g., Bröder & Schiffer, 2003a; Glöckner, 2009). However, it has also been suggested that, similar to risky choice, for instance (Rieskamp, 2008; see also Mosteller & Nogee, 1951), the probability of choosing an alternative depends on the relative amount of evidence for it. ...
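The contrast drawn here can be made explicit in a few lines: under a constant execution error the choice probability is flat regardless of how strong the evidence is, whereas an evidence-dependent rule makes choices more deterministic as the evidence margin grows. The logistic function below is used as one common choice of such a rule; the sensitivity parameter beta and the example values are illustrative assumptions.

```python
# Constant execution error vs. evidence-dependent choice probability.
import numpy as np

def p_choose_a_constant_error(predicts_a: bool, eps: float = 0.1) -> float:
    """Strategy executed with a fixed (trembling-hand) error eps."""
    return 1 - eps if predicts_a else eps

def p_choose_a_evidence(evidence_a: float, evidence_b: float, beta: float = 2.0) -> float:
    """Choice probability increases with the evidence difference (logistic rule)."""
    return 1.0 / (1.0 + np.exp(-beta * (evidence_a - evidence_b)))

print(p_choose_a_constant_error(True))   # 0.9, regardless of the evidence margin
print(p_choose_a_evidence(1.5, 0.5))     # ~0.88, rises with the margin
print(p_choose_a_evidence(3.0, 0.5))     # ~0.99
```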
Article
People deciding between options have at their disposal a toolbox containing both compensatory strategies, which take into account all available attributes of those options, and noncompensatory strategies, which consider only some of the attributes. It is commonly assumed that noncompensatory strategies play only a minor role in decisions from givens, where attribute information is openly presented, because all attributes can be processed automatically “at a glance.” Based on a literature review, however, I establish that previous studies on strategy selection in decisions from givens have yielded highly heterogeneous findings, including evidence of widespread use of noncompensatory strategies. Drawing on insights from visual attention research on subitizing, I argue that this heterogeneity might be due to differences across studies in the number of attributes and in whether the same or different symbols are used to represent high/low attribute values across attributes. I tested the impact of these factors in two experiments with decisions from givens in which both the number of attributes shown for each alternative and the coding of attribute values was manipulated. An analysis of participants’ strategy use with a Bayesian multimethod approach (taking into account both decisions and response-time patterns) showed that a noncompensatory strategy was more frequently selected in conditions with a higher number of attributes; the type of attribute coding scheme did not affect strategy selection. Using a compensatory strategy in the conditions with eight (vs. four) attributes was associated with rather long response times and a high rate of strategy execution errors. The results suggest that decisions from givens can incur cognitive costs that prohibit reliance on automatic compensatory decision making and that can favor the adaptive selection of a noncompensatory strategy.
... Inferring which strategies people use to make decisions, such as which used car to buy, is a problem that decision researchers have tried to solve for decades (e.g., Bröder & Schiffer, 2003; Glöckner, 2009; Hilbig & Moshagen, 2014; Lee et al., 2019). Before introducing the new method that we tested in this study, we review the main existing methods that have been applied to the problem of determining what strategies people are using. ...
... In comparative model fitting, a metric, such as maximum likelihood, is calculated to gauge how well a strategy describes choice outcomes. For example, Bröder and Schiffer (2003) compared an individual's choice outcomes with predictions made by several strategies and treated the strategy with the highest estimated likelihood as the one most likely to have produced the observed choice pattern. This method assumes that an individual uses only one strategy and applies that strategy with a constant error rate across different combinations of alternatives, which they referred to as item types. ...
Article
Full-text available
We propose a novel approach, which we call machine learning strategy identification (MLSI), to uncovering hidden decision strategies. In this approach, we first train machine learning models on choice and process data of one set of participants who are instructed to use particular strategies, and then use the trained models to identify the strategies employed by a new set of participants. Unlike most modeling approaches that need many trials to identify a participant’s strategy, MLSI can distinguish strategies on a trial-by-trial basis. We examined MLSI’s performance in three experiments. In Experiment I, we taught participants three different strategies in a paired-comparison decision task. The best machine learning model identified the strategies used by participants with an accuracy rate above 90%. In Experiment II, we compared MLSI with the multiple-measure maximum likelihood (MM-ML) method that is also capable of integrating multiple types of data in strategy identification, and found that MLSI had higher identification accuracy than MM-ML. In Experiment III, we provided feedback to participants who made decisions freely in a task environment that favors the non-compensatory strategy take-the-best. The trial-by-trial results of MLSI show that during the course of the experiment, most participants explored a range of strategies at the beginning, but eventually learned to use take-the-best. Overall, the results of our study demonstrate that MLSI can identify hidden strategies on a trial-by-trial basis and with a high level of accuracy that rivals the performance of other methods that require multiple trials for strategy identification.
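A toy version of the MLSI pipeline conveys the idea: train a classifier on trials generated under known strategies, then predict the strategy behind new trials. The features used below (choice, response time, proportion of information searched), the crude generative assumptions, and the logistic-regression classifier are illustrative stand-ins for the paper's richer choice and process data.

```python
# Toy sketch of machine learning strategy identification (MLSI): learn a
# mapping from trial features to strategy labels, then classify new trials.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(strategy: str, n: int) -> np.ndarray:
    """Crude feature generator: TTB searches little and answers fast."""
    if strategy == "TTB":
        rt = rng.normal(1.5, 0.3, n); searched = rng.normal(0.3, 0.1, n)
    else:  # WADD
        rt = rng.normal(3.0, 0.5, n); searched = rng.normal(0.9, 0.1, n)
    choice = rng.integers(0, 2, n)
    return np.column_stack([choice, rt, searched])

X = np.vstack([simulate("TTB", 500), simulate("WADD", 500)])
y = np.array(["TTB"] * 500 + ["WADD"] * 500)
clf = LogisticRegression(max_iter=1000).fit(X, y)

new_trial = np.array([[1, 1.6, 0.25]])        # fast, frugal search -> likely TTB
print(clf.predict(new_trial), clf.predict_proba(new_trial).round(2))
```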
... Descriptive approaches to decision-making vis-à-vis traditional normative models have already been discussed by scholars (5,21,22). Choice-based models, as a subset of descriptive approaches, measure the relationship between “attribute” values and “alternative” options of decision-making (23,24). The structural model describes the final outcome of the decision with no reference to the processes of decision-making. ...
... The structural model describes the final outcome of the decision with no reference to the processes of decision-making. Process tracing, however, is a different descriptive approach that emphasizes the process of decision-making by focusing on the quantity, type, time, and sequence of information acquisition (21,25). Process tracing models are well-suited for studying human decision-making and understanding cognitive processes from a psychological perspective (26)(27)(28). ...
Article
Full-text available
Background: Studies related to decision-making and choice preference in substance use behavior have less commonly focused on decision-making processes per se. Those processes include decision-making time, task-based complexity, and decision-making strategies. Objectives: The objectives of this study were to produce a culturally modified version of the Mouselab tool for measuring decision-making processes and to measure differences in decision-making processes between subjects with a positive and a negative history of substance use. Methods: Applying a snowball sampling method, two groups of individuals with a positive and a negative history of substance use were recruited. The case and control groups consisted of 17 males with mean ages of 35.94 (± 12) and 33.8 (± 8.83) years, respectively. The measurement tool was a modified version of the Mouselab computer game. Results: Using repeated-measures analysis of variance and t-tests for unpaired groups to compare the case and control groups, it was found that the group with a positive history of substance use took longer in the decision-making process (P = 0.029). The accuracy of choice, however, was not different between the groups (P = 0.172). Conclusions: Subjects with a positive history of substance use differed in two stages of the decision-making process that depend on the ecology and conditions of decision-making, namely, search for information and decision-making. Two other stages of the decision-making process that depend on individual cognitive and logical properties, i.e., stop search and choice, did not differ between subjects with a positive history of substance use and the control group. Although subjects with a positive history of substance use consumed more resources for decision-making, their accuracy of choice was not different from the control group, thereby ruling out a decision-making-related cognitive deficit.
... New approaches for comparative model testing such as maximum likelihood methods (Bröder & Schiffer, 2003) and their extensions (Moshagen & Hilbig, 2011), hierarchical Bayesian approaches (Scheibehenne et al., 2013; Scheibehenne & Pachur, 2014), multinomial approaches (Hilbig & Moshagen, 2014), normalized maximum likelihood approaches (Davis-Stober & Brown, 2011), approaches based on polytopes and flexible error specifications (Regenwetter et al., 2014), and approaches that take into account multiple dependent measures simultaneously (Glöckner, 2009; Heck & Erdfelder, 2016; Lee & Newell, 2011; Ravenzwaaij, Moore, Lee, & Newell, 2014) have been developed. At the same time, as mentioned earlier, models of decision making are increasingly developed within integrative cognitive theories, including cognitive architectures (e.g., Dimov & Link, 2017; Dougherty et al., 1999; Fechner et al., 2018; Marewski & Schooler, 2011; Schooler & Hertwig, 2005; Thomas et al., 2008; Thomson, Lebiere, Anderson, & Staszewski, 2015). ...
... Similarly, outcome-based classifications showed convergent validity with information search patterns in a Mouselab setup (e.g., Bröder, 2003). ...
Article
Organisms must be capable of adapting to environmental task demands. Which cognitive processes best model the ways in which adaptation is achieved? People can behave adaptively, so many frameworks assume, because they can draw from a repertoire of decision strategies, with each strategy particularly fitting to certain environmental demands. In contrast to that multi-mechanism assumption, competing approaches posit a single decision mechanism. The juxtaposition of such single-mechanism and multi-mechanism approaches has fuelled not only much theory-building, empirical research, and methodological developments, but also many controversies. This special issue on “Strategy Selection: A Theoretical and Methodological Challenge” sheds a spotlight on those developments. The contribution of this introductory article is twofold. First, we offer a documentation of the controversy, including an outline of competing approaches. Second, this special issue and this introductory article represent adversarial collaborations among the three of us: we have modeled adaptive decision making in different ways in the past. Together, we now work on resolving the controversy and point to five guiding principles that might help to improve our models for predicting adaptive behavior. Copyright © 2018 John Wiley & Sons, Ltd.
... In addition to group-level analysis, the inspection of individual strategies can reveal further within-age-group variability. Therefore, we analyzed individual choice behavior using an outcome-based strategy classification method to test for a variety of choice strategies in children (Bröder and Schiffer, 2003). In addition to the lexicographic rule (LEX), which predicts reliance on the high-validity cue's prediction only, we considered the weighted additive rule (WADD: integrating weighted predictions of all cues; e.g., Payne et al., 1988) and an option-based win-stay-lose-shift rule (WSLS) for the feedback conditions. ...
... To account for individual differences, we classified participants according to their choice behavior over all patterns using an outcome based strategy classification method (Bröder and Schiffer, 2003). Predictions were derived from five different choice models (LEX; WADD; WSLS; SW; LVC, see section "Choice Strategies in Children"). ...
Article
Full-text available
We investigated whether children prefer feedback over stated probabilistic information in decision making. 6-year-olds’, 9-year-olds’, and adults’ decision making was examined in an environment where probabilistic information about choice outcomes had to be actively searched (N = 166) or was available without search (N = 183). Probabilistic information was provided before choices as predictions of cues differing in validity. The presence of outcome feedback was varied. 6-year-olds, but not 9-year-olds, were over-responsive to negative outcomes, leading to choices biased by recent feedback. However, children did not systematically utilize feedback in choices. Irrespective of feedback, 6-year-olds fully and 9-year-olds partly neglected stated probabilistic information in their choices. When 6-year-olds chose systematically, they relied only on invalid information, which did not maximize outcomes. 9-year-olds still applied invalid choice rules, but also choice rules based on probability. Results suggest that the neglect of probabilities in complex decisions is robust, independent of feedback, and only starts to subside at elementary school age.
... The third example demonstrates the relevance of our approach for non-nested NML-stable models. The models stem from the field of judgment and decision making and describe the behavior of choosing one of two choice options to maximize a given criterion (i.e., decision strategies; Bröder and Schiffer, 2003a). Recently, Hilbig and Moshagen (in press) … Klauer and Wegener (1998). ...
Preprint
The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower bound N′ on the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.
... The weighting of the strategy's cost is controlled by the cost-weighting parameter δ; larger values of δ reflect a larger weight being given to strategy cost during strategy selection. The selected strategy is then executed with a trembling-hand error (expressing the proportion of cases in which an option other than that predicted by the strategy is chosen), to account for potential noise in the application of the strategy [46]. All model parameters (including the size and the composition of the strategy toolbox) were estimated from the data for each participant and are assumed to be constant across choice problems. ...
Article
Full-text available
Younger and older adults often differ in their risky choices. Theoretical frameworks on human aging point to various cognitive and motivational factors that might underlie these differences. Using a novel computational model based on the framework of resource rationality, we find that the two age groups rely on different strategies. Importantly, older adults did not use simpler strategies than younger adults, they did not select among fewer strategies, they did not make more errors, and they did not put more weight on cognitive costs. Instead, older adults selected strategies that had different risk propensities than those selected by younger adults. Our modeling approach suggests that age differences in risky choice are not necessarily a consequence of cognitive decline; instead, they may reflect motivational differences between age groups.
... Decision-makers may need to consider factors like pricing, quality, delivery time, dependability, and environmental sustainability when choosing a new supplier, for instance. To analyze and assess options based on these many features, MADM offers a systematic framework [7,8]. ...
... As the previous analysis indicates, almost no participants in our sample systematically followed the high-validity cue in their decisions. To investigate what other strategies, if any, participants used to make their choices, Betsch et al. (2020) applied an outcome-based method for strategy classification (Bröder and Schiffer, 2003). These analyses should be treated with caution, because the pattern types of the tasks were not designed for identifying strategies. ...
Article
Full-text available
A child’s world is full of cues that may help to learn about decision options by providing valuable predictions. However, not all cues are always equally valid. To enhance decision-making, one should use cue validities as weights in decision-making. Prior research showed children’s difficulty in doing so. In 2 conceptual replication studies, we investigated preschoolers’ competencies when they encounter a cue whose prediction is always correct. We assessed 5- to 6-year-olds’ cue evaluations and decision-making in an information board game. Participants faced 3 cues when repeatedly choosing between 2 locations to find treasures: a nonprobabilistic, high-validity cue that always provided correct predictions (p = 1) paired with 2 probabilistically correct (Study 1: p = .34, p = .17) or 2 nonprobabilistic, incorrect cues (Study 2: p = 0). Participants considered cue validities—albeit in a rudimentary form. In their cue evaluations, they preferred the high-validity cue, indicating their ability to understand and use cue validity for evaluations. However, in their decision-making, they did not prioritize the high-validity cue. Rather, they frequently searched and followed the predictions of less valid (Study 1) and incorrect cues (Study 2). Our studies strengthen the current state of decision research suggesting that the systematic use of cue validities in decision-making develops throughout childhood. Apparently, having appropriate cue evaluations that reflect cue validities is not sufficient for their use in decision-making. We discuss our findings while considering the importance of learning instances for the development of decision competencies.
... Finally, individuals can weight all cues differently, for instance according to their validity, and integrate them in a weighted additive manner (WADD, Payne, Bettman, & Johnson, 1988). Note that LEX and EQW are of course sub-models of WADD with specific restricted weights (Bröder & Schiffer, 2003a). For the sake of simplicity, we will nevertheless refer to them as separate "strategies" here. ...
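The sub-model relation noted here is easy to verify numerically: with non-compensatory weights (e.g., powers of two in validity order) a weighted additive sum reproduces the lexicographic choice, and with equal weights it reproduces EQW. The cue values and weights below are illustrative assumptions.

```python
# LEX and EQW expressed as WADD with restricted weight vectors.
import numpy as np

cues_a = np.array([1, 0, 1, 0])   # cue values for option A, sorted by validity
cues_b = np.array([0, 1, 1, 1])   # cue values for option B

w_wadd = np.array([0.80, 0.70, 0.60, 0.55])   # e.g., cue validities as weights
w_lex  = np.array([8.0, 4.0, 2.0, 1.0])       # each weight outweighs all below it
w_eqw  = np.array([1.0, 1.0, 1.0, 1.0])       # unit weights: simple cue counting

for name, w in [("WADD", w_wadd), ("LEX", w_lex), ("EQW", w_eqw)]:
    score_a, score_b = w @ cues_a, w @ cues_b
    print(name, "chooses", "A" if score_a > score_b else "B")
# WADD chooses B (compensation), LEX chooses A (the first cue decides),
# EQW chooses B (B has more positive cues).
```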
Article
Full-text available
In recent years, numerous studies comparing intuition and deliberation have been published. However, relatively little is known about the cognitive processes underlying the two decision modes. In two studies, we analyzed the effects of decision mode instructions on processes of information search and integration, using eye-tracking technology in a between-participants (Study 1) and a within-participants (Study 2) design. Our findings indicate that the instruction to deliberate does not necessarily lead to qualitatively different information processing compared to the instruction to decide intuitively. We found no difference in mean fixation duration and the distribution of short, medium and long fixations. Short fixations in particular prevailed under both decision mode instructions, while long fixations indicating a conscious and calculation-based information processing were rarely observed. Instruction-induced deliberation led to a higher number of fixations, a more complete information search and more repeated information inspections. We interpret our findings as support for the hypothesis that intuitive and deliberate decision modes share the same basic processes which are supplemented by additional operations in the deliberate decision mode.
... Moreover, research on MADM methods is increasingly challenged by the problem of drawing inferences from behavioral data on cognitive strategies. A Bayesian method was proposed in [60] that can assess whether an empirical data vector was most likely generated by a ‘Take The Best’ heuristic, an equal weight rule, or a compensatory strategy. That study also discussed potential extensions of the general method to other applications in behavioral decision research. ...
Article
Full-text available
In the last decades, the art and science of multi-attribute decision-making (MADM) have witnessed significant developments and have found applications in many active areas. A lot of research has demonstrated the ability of cognitive techniques in dealing with complex and uncertain decision information. The purpose of representing human cognition in the decision-making process encourages the integration of cognitive psychology and multi-attribute decision-making theory. Due to the emergence of research on cognitively inspired MADM methods, we make a comprehensive overview of published papers in this field and their applications. This paper has been grouped into five parts: we first conduct some statistical analyses of academic papers from two angles: the development trends and the distribution of related publications. To illustrate the basic process of cognitively inspired MADM methods, we present some underlying ideas and the systematic structure of this kind of method. Then, we make a review of cognitively inspired MADM methods from different perspectives. Applications of these methods are further reviewed. Finally, some challenges and future trends are summarized. This paper highlights the benefits of the synergistic approach that is developed based on cognitive techniques and MADM methods and identifies the frontiers in this field.
... As such, the following results should be considered with caution. We classified participants according to their choice behavior using an outcome-based strategy classification method (Bröder & Schiffer, 2003). We followed the procedure from prior work and classified the same set of strategies (see …). ...
Article
Full-text available
In a probabilistic inference task (three probabilistic cues predict outcomes for two options), we examined decisions from 233 children (5–6 vs. 9–10 years). Contiguity (low vs. high; i.e., position of probabilistic information far vs. close to options) and demand for selectivity (low vs. high; i.e., showing predictions of desired vs. desired and undesired outcomes) were varied as configural aspects of the presentation format. Probability utilization was measured by the frequency of following the predictions of the highest validity cue in choice. High contiguity and low demand for selectivity strongly and moderately increased probability utilization, respectively. Children are influenced by presentation format when using probabilities as decision weights. They benefit from perception-like presentations that present probabilities and options as compounds.
... Yet in several other domains of judgment and decision making, it has been found that decision makers do not always engage in exhaustive information search. For instance, there is evidence from computational modeling and process tracing approaches that people inspect only part of the available information in multiple-cue judgment (e.g., Pachur & Marinello, 2013), multi-attribute choice (Bröder & Schiffer, 2003; Payne et al., 1993; Russo & Dosher, 1983), risky choice (Brandstätter, Gigerenzer, & Hertwig, 2006; Pachur, Hertwig, Gigerenzer, & Brandstätter, 2013; Payne & Braunstein, 1978; Su et al., 2013), intertemporal choice (e.g., Dai, Pleskac, & Pachur, 2018), and estimation (Juslin & Persson, 2002). Moreover, for categorization, in which, as in social sampling, decisions are informed by exemplars stored in memory, it has been shown that models that assume consideration of a limited set of stored exemplars are better able to account for decisions than are unbounded models that require extensive similarity computations across all stored exemplars (De Schryver, Vandist, & Rosseel, 2009). ...
Article
Full-text available
The social environment provides a sampling space for making informed inferences about features of the world at large, such as the distribution of preferences, risks, or events. How do people search this sampling space and make inferences based on the instances sampled? Inspired by existing models of bounded rationality and in accord with research on the structure of social memory, we develop and test the social-circle model, a parameterized, probabilistic process account of how people make inferences about relative event frequencies. The model extends to social sampling the idea that cognitive search is both structured and limited; moreover, it captures individual differences in the order in which sampling spaces are probed, in difference thresholds, and in response error. Using a hierarchical Bayesian latent-mixture approach, we submit the model to a rigorous model comparison. In Study 1, a reanalysis of published data, the social-circle model outperformed both a model assuming exhaustive search and a simple heuristic assuming no individual differences in search or difference thresholds. Study 2 establishes the robustness of these findings in a different domain and in different age groups (adults and children). We find that children also consult their social memories for inferential purposes and rely on sequential and limited search. Finally, model and parameter recovery analyses (Study 3) demonstrate the ability of the social-circle model to recover the characteristics of the cognitive processes assumed to underlie social sampling. Our analyses establish that social sampling in both children and adults follows key principles of bounded rationality.
... We use the experimental structure of Bröder and Schiffer (2003), take the cue search patterns suggested by TTB and WA, and calculate the feature vector corresponding to each search pattern, denoting these as f_ttb and f_wa respectively. For a cue space with n_A = 4 and n_O = 3, we get ...
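The excerpt is cut off before the resulting vectors, so the following is only a hedged reconstruction of the idea, not the thesis' actual encoding: a search pattern is represented as a binary feature vector over attribute-option cells, with TTB inspecting attributes in validity order until one discriminates and WA inspecting all cells. The cue matrix, and the reading of n_A = 4 as attributes and n_O = 3 as options, are assumptions made here for illustration.

```python
# Hypothetical binary search-pattern feature vectors for TTB vs. WA.
import numpy as np

n_A, n_O = 4, 3                    # assumed: 4 attributes, 3 options
cues = np.array([[1, 0, 0],        # rows: attributes in validity order
                 [1, 1, 0],        # columns: options
                 [0, 1, 1],
                 [1, 0, 1]])

def f_wa() -> np.ndarray:
    """WA inspects every attribute of every option."""
    return np.ones(n_A * n_O)

def f_ttb() -> np.ndarray:
    """TTB inspects attributes row by row, stopping once a row discriminates."""
    f = np.zeros((n_A, n_O))
    for i in range(n_A):
        f[i, :] = 1
        if len(set(cues[i])) > 1:  # this attribute discriminates between options
            break
    return f.flatten()

print(f_ttb())   # only the first row is inspected here (it already discriminates)
print(f_wa())
```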
Thesis
Full-text available
Many experimental and statistical paradigms collect and analyze behavioral data under steady-state assumptions. Such paradigms may have low external validity, since real-world decisions are often made in situations when people are still learning and adapting, or their beliefs and preferences are in a state of flux, or where the decision environment is constantly changing. I focus on experimental and real-world paradigms that represent some form of adaptive behavior. Under such situations, factoring in structural adaptivity into cognitive modeling frameworks can improve their descriptive and predictive performance, and allow us to make superior inferences about the underlying latent cognitive processes. Towards this endeavor, first, a novel probabilistic framework is proposed for representation of heuristic strategies of information search and multi-attribute choice within a model of learning. Second, a novel adaptive reference point mechanism is introduced, and I show how this can be used in a variety of different tasks and applications, and be incorporated into cognitive frameworks to structurally capture the adaptive process. This mechanism provides superior predictive performance along with psychologically meaningful parameter inference in tasks ranging from signal detection, bandit problems, judgments about estimating true values, consumption behavior, probability tracking, and expectation formation. Third, I show how cognitive modeling can be applied to adaptive population behavior in the real world. Real world data is used to make inferences about the latent processes involved in aspects such as the changing nature of cycles of violence, and the impact of tax policies on changing consumption patterns. Incorporating cognitive modeling frameworks and psychological insights to constrain econometric models yields possibly simpler models, but with directly interpretable parameter inferences. Fourth, cognitive modeling approaches are used to design choice architecture frames to create behavioral nudges within a risk allocation paradigm, where people change their behavior based on representational, rather than meaningful changes in their environment.
... Heuristics are decision rules used to reduce the complexity of a task and make fast decisions. Different heuristic-based models are available that explain decision-making in humans (Bröder, 2000; Bröder & Schiffer, 2003; Dhami, 2003; Newell, Weston, & Shanks, 2003; Payne, Bettman, & Johnson, 1993; Payne, Bettman, & Luce, 1996; Rieskamp & Hoffrage, 1999; Schkade & Johnson, 1989). For example, the Maximax heuristic predicts that individuals will only gamble if at least one chance to win is at stake. ...
Article
In their natural environment, animals often make decisions crucial for survival, such as choosing the best patch or food, or the best partner to cooperate with. The choice can be compared to a gamble with an outcome that is predictable but not certain, such as rolling a die. In economics, such a situation is called a risky context. Several models show that although individuals can generally evaluate the odds of each potential outcome, they can be subject to errors of judgment or choose according to decision-making heuristics (simple decision rules). In non-human primates, similar errors of judgment have been reported, and we have recently shown that they also use a decisional heuristic when confronted with a risky choice in an exchange task. This suggests a common evolutionary origin of the mechanisms underlying decision-making under risk in primates. However, whether the same mechanisms are also present in more distantly related taxa needs to be further investigated. Other social species, like corvids, are renowned for their advanced cognitive skills and may show similar responses. Here, we analyse data on corvids (carrion crows, hooded crows, common ravens and rooks) tested in a risky exchange task comparable to the one used with non-human primates. We investigated whether corvids could exchange according to the odds of success or, alternatively, whether they used a heuristic similar to the one used by non-human primates. Instead, most corvids chose a course of action (either a low or high exchange rate) that remained constant throughout the study. In general, corvids' mean exchange rates were lower compared to non-human primates, indicating that they were either risk-averse or that they do not possess the cognitive capabilities to evaluate odds. Further studies are required to evaluate the flexibility of these birds' exchange abilities.
... Participants' fixations on the emptied displays were recorded first while they integrated memorized cues according to freely chosen strategies, and afterwards while they applied instructed decision strategies. The instructed strategies included a lexicographic strategy (take-the-best) and a compensatory strategy (equal weighting). An outcome-based Bayesian strategy classification method (Bröder and Schiffer, 2003) was used to infer the freely chosen strategies and to confirm the use of instructed strategies. Fixation patterns such as transition frequencies between decision alternatives, fixation durations on former cue locations, and the sequence of fixations on former cue locations were in line with memory search and cue processing as postulated for lexicographic and compensatory strategies. ...
... So far, the structural model has only been used to measure the relationship between the value of attributes and the final response, without reference to the process of decision making. However, process tracing is an alternative approach with more emphasis on the process of decision-making, focusing on the quantity, type, time, and sequence of information acquisition as well as the evaluation processes (15,16). To date, most studies address these aspects of decision-making in terms of tools and design; some tools measure the final result of decision-making and others trace the process. ...
Article
Full-text available
Objective: A prominent challenge in modeling choice is the specification of the underlying cognitive processes. Many cognitive-based models of decision-making draw substantially on algorithmic models of artificial intelligence and thus rely on the associated metaphors of this field. In contrast, the current study avoids metaphors and aims at a first-hand identification of the behavioral elements of a process of choice. Method: We designed a game in Mouselab resembling the real-world procedure of choosing a wife. 17 male subjects were exposed to cost-benefit decision criteria that closely mimic their respective societal conditions. Results: The quality-of-choice index was measured with respect to its sensitivity to the final outcomes as well as process tracing of decisions. The correlations between this index and individual components of process tracing are discussed in detail. The choice quality index can be configured as a function of expected value and utility. In our sample, the quality of choice, with an average of 75.98% (SD: ±12.67), suggests that subjects obtained close to 76% of their expected gains. Conclusion: The quality-of-choice index, therefore, may be used for comparison of different conditions where the variables of decision-making are altered. The analysis of results also reveals that the cost of incorrect choice is significantly correlated with expected value (0.596, sig = 0.012) but not with utility. This means that when subjects face higher costs prior to making a decision, there exists a corresponding higher expectation of gains, i.e., higher expected value.
... As an example of modeling long-term memory, the information gain in discriminating between power and exponential models of forgetting curves depends crucially on the spacing of the retention intervals (Myung and Pitt 2009;Navarro et al. 2004). As a second example in judgment and decision-making, consider the classification of participants as users of different decision-making strategies such as take-the-best or weightedadditive (Bröder and Schiffer 2003). In typical choice tasks for investigating strategy use across different environments, the presence or absence of item features (e.g., which of two choice options is recommended by different experts) must be carefully chosen to ensure that each strategy predicts a unique response pattern across items. ...
Article
Full-text available
To ensure robust scientific conclusions, cognitive modelers should optimize planned experimental designs a priori in order to maximize the expected information gain for answering the substantive question of interest. Both from the perspective of philosophy of science, but also within classical and Bayesian statistics, it is crucial to tailor empirical studies to the specific cognitive models under investigation before collecting any new data. In practice, methods such as design optimization, classical power analysis, and Bayesian design analysis provide indispensable tools for planning and designing informative experiments. Given that cognitive models provide precise predictions for future observations, we especially highlight the benefits of model-based Monte Carlo simulations to judge the expected information gain provided by different possible designs for cognitive modeling.
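For the strategy-classification example mentioned in the excerpt above, the expected information gain of a candidate design can be judged by a model-based Monte Carlo simulation of exactly the kind the abstract recommends: simulate choices under each strategy for the proposed item types, classify the simulated agents, and use the recovery rate to compare designs. The item-type predictions, error rate, and simple matching classifier below are illustrative assumptions.

```python
# Monte Carlo design evaluation: how well does a candidate item set allow
# recovering which strategy generated the choices?
import numpy as np

rng = np.random.default_rng(7)

# Predicted choices (0/1) of each strategy on a candidate 6-item-type design.
design = {
    "TTB":  np.array([1, 1, 0, 0, 1, 0]),
    "WADD": np.array([1, 0, 0, 1, 1, 1]),
}

def simulate_agent(strategy: str, eps: float = 0.15) -> np.ndarray:
    """Apply the strategy's predictions with a constant execution error eps."""
    pred = design[strategy]
    flip = rng.random(pred.size) < eps
    return np.where(flip, 1 - pred, pred)

def classify(choices: np.ndarray) -> str:
    """Assign the strategy whose predictions match the most choices."""
    return max(design, key=lambda s: (choices == design[s]).sum())

n_sim, hits = 2000, 0
for _ in range(n_sim):
    s = rng.choice(list(design))
    hits += classify(simulate_agent(s)) == s
print(f"recovery rate: {hits / n_sim:.2f}")   # higher = more informative design
```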
... Numerous experimental studies also show that people mostly apply these heuristics in the corresponding environments, especially under time pressure or when information is costly (Bröder & Schiffer, 2003a, 2003b; Newell & Shanks, 2003), but only mostly, not always. Relatively little is known so far about how people learn which strategies to use (see Rieskamp, 2008) and how they select these strategies depending on the respective environment (e.g., with regard to changes in cue correlations; Fasolo, Misuraca, & McClelland, 2003). ...
... More general methods for strategy inference were then developed. Bröder & Schiffer (2003a) developed an early Bayesian method for classifying strategy use. Lee (2004) developed a minimum description length approach using the normalized maximum likelihood (NML) criterion (Grünwald, 2007), which was later extended and applied by Newell and Lee (2011). Hilbig and Moshagen (2014) proposed a similar approach, also based on the NML. ...
Article
Full-text available
We develop models of strategy use in multiple-cue decision making that have a couple of key capabilities. The first is that they incorporate multiple sources of behavior, using the predictions that strategies make about processes and outcomes as a basis for inferring strategy use. The second is that they allow for people to change strategies multiple times over a sequence of decision trials. The models are implemented as generative probabilistic models, allowing for fully Bayesian inference about the nature of strategy use and the number of strategy switches. To demonstrate the approach and evaluate the models, we consider the standard take-the-best, weighted-additive, and tally strategies, as well as a guessing strategy, and apply them to previously published experimental data that involve decision, search, and verbal report data (Walsh & Gluck, 2016). We find strong evidence that many people switch strategies many times, especially when inference is based on all of the available behavioral data. Our results suggest there is interpretable complexity beneath people's use of simple strategies to make decisions.
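The core of such outcome-based classification can be sketched in a few lines (a simplified reading of the constant-application-error logic used in this literature, not the authors' code): each strategy predicts one choice per item, and the likelihood of a participant's choice vector is governed by the number of deviations from those predictions.

    import math

    # Log maximum likelihood of n_err deviations in n trials under a strategy
    # applied with a constant error eps, maximized at eps = n_err / n.
    def log_ml(n, n_err):
        eps = n_err / n
        if n_err in (0, n):                  # avoid log(0) at the boundary
            return n * math.log(max(eps, 1 - eps))
        return n_err * math.log(eps) + (n - n_err) * math.log(1 - eps)

    # Hypothetical data: deviations from each strategy's predictions in 40 trials.
    deviations = {"TTB": 3, "EQW": 15, "WADD": 12}
    scores = {s: log_ml(40, k) for s, k in deviations.items()}
    print(max(scores, key=scores.get))       # -> TTB

A full treatment would typically also restrict eps to values below .5 and handle strategies that make identical predictions.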
... Furthermore, similar to latent-class analysis, this approach infers the likelihood that each participant uses each of the assumed strategies. Thus, it can simultaneously infer the number and size of strategy groups, each individual participant's group membership, and the certainty of each participant's group membership (Bröder and Schiffer 2003; Lee 2016). In addition, latent-mixture models provide an estimate of how accurately participants follow their inferred strategy, by explicitly formalizing individual differences in error rate. ...
Article
Full-text available
Differential strategy use is a topic of intense investigation in developmental psychology. Questions under study are as follows: How do strategies change with age, how can individual differences in strategy use be explained, and which interventions promote shifts from suboptimal to optimal strategies? In order to detect such differential strategy use, developmental psychology currently relies on two approaches—the rule assessment methodology and latent class analysis—each having their own strengths and weaknesses. In order to optimize strategy detection, a new approach that combines the strengths of both existing methods and avoids their weaknesses was recently developed: model-based latent-mixture analysis using Bayesian inference. We performed a simulation study to test the ability of this new approach to detect differential strategy use. Next, we illustrate the benefits of this approach by a re-analysis of decision making data from 210 children and adolescents. We conclude that the new approach yields highly informative results, and provides an adequate account of the observed data. To facilitate the application of this new approach in other studies, we provide open access documented code, and a step-by-step tutorial of its usage.
... But they may call upon alternative decision-making processes from the broad category of heuristic models (Brandstätter et al., 2006). Human cognitive processes and inferences can be predicted by those models (Bröder, 2000; Bröder & Schiffer, 2003; Dhami, 2003; Newell, Weston, & Shanks, 2003; Payne, Bettman, & Johnson, 1993; Payne, Bettman, & Luce, 1996; Rieskamp & Hoffrage, 1999; Schkade & Johnson, 1989). Real choices between gambles often involve a class of heuristics called lexicographic rules (Gigerenzer, 2004; Loewenstein, Weber, Hsee, & Welch, 2001; Sunstein, 2003), which rank decision attributes according to the way they are examined by decision-makers. ...
Article
Full-text available
Many studies investigate the decisions made by animals by focusing on their attitudes toward risk, that is, risk-seeking, risk neutrality, or risk aversion. However, little attention has been paid to the extent to which individuals understand the different odds of outcomes. In a previous gambling task involving 18 different lotteries (Pelé, Broihanne, Thierry, Call, & Dufour, 2014), nonhuman primates used probabilities of gains and losses to make their decision. Although the use of complex mathematical calculation for decision-making seemed unlikely, we applied a gradual decrease in the chances to win throughout the experiment. This probably facilitated the extraction of information about odds. Here, we investigated whether individuals would still make efficient decisions if this facilitating factor was removed. To do so, we randomized the order of presentation of the 18 lotteries. Individuals from 4 ape and 2 monkey species were tested. Only capuchin monkeys differed from others, gambling even when there was nothing to win. Randomizing the lottery presentation order leads all species to predominantly use a maximax heuristic. Individuals gamble as soon as there is at least one chance to win more than they already possess, whatever the risk. Most species also gambled more as the frequency of larger rewards increased. These results suggest optimistic behavior. The maximax heuristic is sometimes observed in human managerial and financial decision-making, where risk is ignored for potential gains, however low they may be. This suggests a shared and strong propensity in primates to rely on heuristics whenever complexity in evaluation of outcome odds arises.
... Strategy assessment is the main way to obtain this kind of information. Open strategies can then be adapted in the future to various internal or external factors that are constantly changing (Bröder and Schiffer, 2003). In carrying out its operational strategy, management establishes a set of company values that serve as the guiding spirit of all activities and that must be adopted by all components within the organization. ...
... Given limited knowledge and time, these quick-and-dirty heuristics can provide satisfactory or superior results (Gigerenzer, Czerlinski, & Martignon, 2002; Gigerenzer & Goldstein, 1996). Additionally, it has been proposed that this perspective provides better descriptive models for how humans make decisions (Bröder & Schiffer, 2003; Dieckmann et al., 2009; Einhorn & Hogarth, 1981; Gigerenzer & Goldstein, 1996; Gigerenzer, Todd, & ABC Research Group, 1999; Hauser, 2014; Newell, Weston, & Shanks, 2003). Noncompensatory heuristics may be situation specific (Payne, 1982), and hybrid, two-stage compensatory and noncompensatory models have also been proposed (Elrod, Johnson, & White, 2004; Gilbride & Allenby, 2004). ...
Article
When making decisions where options involve multiple attributes, a person can choose to use a compensatory, utility maximizing strategy, which involves consideration and integration of all available attributes. Alternatively, a person can choose a noncompensatory strategy that extracts only the most important and reliable attributes. The present research examined whether other‐oriented decisions would involve greater reliance on a noncompensatory, lexicographic decision strategy than self‐oriented decisions. In three studies (Mturk workers and college students), the difference in other‐oriented versus self‐oriented decisions in a medical decision context was explained by a subsample of participants that chose the death minimizing operation on all 10 decisions (Study 1) and a subsample of participants who self‐reported that they used a strategy that minimized the chance of death on every decision (i.e., a lexicographic mortality heuristic; Study 2). In Study 2, tests of mediation found that self‐reported use of the mortality heuristic completely accounted for the self–other effect on decisions. In Study 3, participants were more likely to report prospectively that they would adopt the mortality heuristic when making decisions for others than for themselves, suggesting that participants were not mistakenly inferring a lexicographic decision strategy from their past behavior. The results suggest that self–other effects in multiattribute choice involve differential use of compensatory versus noncompensatory decision strategies and that beyond this group difference, individual differences in the use of these strategies also exist within self‐oriented and other‐oriented decisions.
... In judgment and decision making research, statistical models are used to classify decision makers according to decision-making strategies such as exemplar or prototype categorization (Scheibehenne & Pachur, 2015), transitive or intransitive preference (Cavagnaro & Davis-Stober, 2014), optimal exploitation/exploration or heuristic choice in bandit problems (Steyvers, Lee, & Wagenmakers, 2009), and reinforcement learning or heuristic search in the Iowa Gambling Task (Bishara et al., 2009; Worthy, Hawthorne, & Otto, 2013). In each of these examples, categorical classifications are based on either the model with the highest value of a selection statistic, such as the Akaike information criterion (AIC; Akaike, 1976), or on the parameter of a model with the highest maximized likelihood (e.g., Bröder & Schiffer, 2003; Glöckner, 2009). These statistics are inherently probabilistic because they are based on random samples, hence the classifications based on them are probabilistic as well. ...
Article
Full-text available
Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher’s exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher’s exact test. We provide code in an online supplement that allows researchers to apply the method to their own data.
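The probabilistic nature of such classifications can be illustrated with Akaike weights (an illustration with invented numbers, not taken from the article): differences in AIC translate into soft membership probabilities rather than deterministic labels.

    import math

    def akaike_weights(aics):
        # Convert AIC values into relative model probabilities (Akaike weights).
        best = min(aics.values())
        rel = {m: math.exp(-(a - best) / 2) for m, a in aics.items()}
        total = sum(rel.values())
        return {m: r / total for m, r in rel.items()}

    print(akaike_weights({"TTB": 102.4, "WADD": 104.0}))
    # -> about {'TTB': 0.69, 'WADD': 0.31}: an uncertain, not deterministic, label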
... Hausmann & Läge, 2008; Lee & Cummins, 2004; Newell et al., 2007; though see Lee, Newell & Vandekerckhove, 2014 for an exception). Conducting comparisons across environments with different cue structures is important, given earlier findings indicating that the environment does influence the decision strategies applied (Bröder, 2000, 2003; Bröder & Schiffer, 2003b; Rieskamp & Hoffrage, 1999; Rieskamp, 2006; Rieskamp & Otto, 2006) as well as thresholds in an accumulation process (Lee, Newell & Vandekerckhove, 2014; Simen, Cohen & Holmes, 2013). In particular, recent studies show that the dispersion of the cue validities influences the amount of information search, with high dispersion favoring less information search and low dispersion favoring more information search (Mata, Schooler, & Rieskamp, 2011). ...
Article
Full-text available
Past research indicates that individuals respond adaptively to contextual factors in multi-attribute choice tasks. Yet it remains unclear how this adaptation is cognitively governed. In this paper, empirically testable implementations of two prominent competing theoretical frameworks are developed and compared across two multi-attribute choice experiments: the Adaptive Toolbox framework, assuming discrete choice strategies, and the Adjustable Spanner framework, assuming one comprehensive adaptive strategy. Results from two experiments indicate that in the environments we tested, in which all cue information was presented openly, the Toolbox makes better predictions than the Adjustable Spanner both in- and out-of-sample. Follow-up simulation studies indicate that it is difficult to discriminate the models based on choice outcomes alone, but they allowed the identification of a small sub-set of cases where the predictions of the two models diverged. Our results suggest that people adapt their decision strategies by flexibly switching between using as little information as possible and using all of the available information.
... In the information-board paradigm, participants are presented with an information matrix that describes decision options, such as apartments, on different information dimensions, such as size, rent, and brightness of rooms. Prior to choice, individuals can search information by opening the hidden cells of the matrix to inspect the values (e.g., whether the size of a specific apartment is good or poor, Payne et al., 1988; for an overview and discussion, cf., e.g., Bröder & Schiffer, 2003). ...
Article
In many decision situations, individuals must actively search information before they can make a satisfying choice. In such instances, individuals must be aware of the fact that not all information may be equally relevant for the choice at hand – thus, individuals should weight information by its respective relevance. We compared children's and adults' decision making in a child-friendly decision game. For each decision, participants received information on the content of three piggy-banks on an information-board. In Experiment 1, we manipulated the weight-structure by presenting decisions with similarly relevant or differently relevant information. Results suggest that 8- to 9-year-olds do not adapt their search to the weight-structure. In contrast, 10- to 12-year-olds do consider relevance-weights. Still, 8- to 9-year-olds and 10- to 12-year-olds are unable to search for a good, adult-like information sample containing all relevant and no irrelevant information. Thus, children base their decisions on a biased information sample. In Experiment 2, we intensified the need to consider relevance-weights by introducing a search constraint. In doing so, we replicated the deficits of 8- to 9-year-olds and found adult-like behavior in 11- to 12-year-olds. Our findings suggest that although children understand that relevance may vary, they are not immediately able to effectively consider relevance-weights in their information search – this appears to be a skill that continues to develop throughout childhood. We will discuss the resulting implications for understanding children as decision makers as well as the general ability to perform structured information search.
... Beyond that, there are also plenty of other branches of psychological research that, in one way or another, can be conceptualized as function learning. For example, multi-attribute decision making (Bröder & Schiffer, 2003) requires participants to learn to associate different features with expected outcomes, pattern completion paradigms (Kanizsa et al., 1976) ask participants to extrapolate a pattern from seen points, and generalization across environments (Gershman & Niv, 2015) requires participants to extrapolate from features of the current environment to novel ones. That function learning is seen as a separate subfield of psychology is mostly due to historical developments and practical conventions. ...
Conference Paper
How do humans generalize from observed to unobserved data? How does generalization support inference, prediction, and decision making? I propose that a big part of human generalization can be explained by a powerful mechanism of function learning. I put forward and assess Gaussian Process regression as a model of human function learning that can unify several psychological theories of generalization. Across 14 experiments and using extensive computational modeling, I show that this model generates testable predictions about human preferences over different levels of complexity, provides a window into compositional inductive biases, and --combined with an optimistic yet efficient sampling strategy-- guides human decision making through complex spaces. Chapters 1 and 2 propose that, from a psychological and mathematical perspective, function learning and generalization are close kin. Chapter 3 derives and tests theoretical predictions of participants' preferences over differently complex functions. Chapter 4 develops a compositional theory of generalization and extensively probes this theory using 8 experimental paradigms. During the second half of the thesis, I investigate how function learning guides decision making in complex decision making tasks. In particular, Chapter 5 will look at how people search for rewards in various grid worlds where a spatial correlation of rewards provides a context supporting generalization and decision making. Chapter 6 gauges human behavior in contextual multi-armed bandit problems where a function maps features onto expected rewards. In both Chapter 5 and Chapter 6, I find that the vast majority of subjects are best predicted by a Gaussian Process function learning model combined with an upper confidence bound sampling strategy. Chapter 7 will formally assess the adaptiveness of human generalization in complex decision making tasks using mismatched Bayesian optimization simulations and finds that the empirically observed phenomenon of undergeneralization might rather be a feature than a bug of human behavior. Finally, I summarize the empirical and theoretical lessons learned and lay out a road-map for future research on generalization in Chapter 8.
... A second, quite different model of binary choice probabilities is the constant error model (see, e.g., Bröder & Schiffer, 2003; Carbone & Hey, 2000; Glöckner, 2009). It says that in any binary choice, there is some chance ϵ that a decision maker mistakenly chooses an option that she/he does not actually prefer. ...
Article
After more than 50 years of probabilistic choice modeling in economics, marketing, political science, psychology, and related disciplines, theoretical and computational advances give scholars access to a sophisticated array of modeling and inference resources. We review some important, but perhaps often overlooked, properties of major classes of probabilistic choice models. For within-respondent applications, we discuss which models require repeated choices by an individual to be independent and response probabilities to be stationary. We show how some model classes, but not others, are invariant over variable preferences, variable utilities, or variable choice probabilities. These models, but not others, accommodate pooling of responses or averaging of choice proportions within participant when underlying parameters vary across observations. These, but not others, permit pooling/averaging across respondents in the presence of individual differences. We also review the role of independence and stationarity in statistical inference, including for probabilistic choice models that, themselves, do not require those properties. Copyright © 2017 John Wiley & Sons, Ltd.
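In symbols, the constant error model sketched above can be written as follows (one common formulation; the notation is ours):

    P(\text{choose } a \mid a \succ b) = 1 - \epsilon, \qquad
    P(\text{choose } b \mid a \succ b) = \epsilon, \qquad 0 \le \epsilon < \tfrac{1}{2},

so that across $n$ independent choices with $k$ deviations from the preference order, the likelihood is $\binom{n}{k}\,\epsilon^{k}(1-\epsilon)^{n-k}$.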
Article
Purpose: Are there smart ways to find heuristics? What are the common principles behind heuristics? We propose an integrative definition of heuristics, based on insights that apply to all heuristics, and put forward meta-heuristics for discovering heuristics.
Design/methodology/approach: We employ Herbert Simon's metaphor that human behavior is shaped by the scissors of the mind and its environment. We present heuristics from different domains and multiple sources, including scholarly literature, practitioner-reports and ancient texts.
Findings: Heuristics are simple, actionable principles for behavior that can take different forms, including that of computational algorithms and qualitative rules-of-thumb, cast into proverbs or folk-wisdom. We introduce heuristics for tasks ranging from management to writing and warfare. We report 13 meta-heuristics for discovering new heuristics and identify four principles behind them and all other heuristics: those principles concern the (1) plurality, (2) correspondence, (3) connectedness of heuristics and environments and (4) the interdisciplinary nature of the scissors' blades with respect to research fields and methodology.
Originality/value: We take a fresh look at Simon's scissors-metaphor and employ it to derive an integrative perspective that includes a study of meta-heuristics.
Article
This work investigates a method to infer and classify decision strategies from human behavior, with the goal of improving human-agent team performance by providing AI-based decision support systems with knowledge about their human teammate. First, an experiment was designed to mimic a realistic emergency preparedness scenario in which the test participants were tasked with allocating resources into 1 of 100 possible locations based on a variety of dynamic visual heat maps. Simple participant behavioral data, such as the frequency and duration of information access, were recorded in real time for each participant. The data were examined using a partial least squares regression to identify the participants’ likely decision strategy, that is, which heat maps they relied upon the most. The behavioral data were then used to train a random forest classifier, which was shown to be highly accurate in classifying the decision strategy of new participants. This approach presents an opportunity to give AI systems the ability to accurately model the human decision-making process in real time, enabling the creation of proactive decision support systems and improving overall human-agent teaming.
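A compressed sketch of that kind of pipeline (invented feature names and toy labels; the study's actual features, preprocessing, and validation are richer) using scikit-learn:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))          # e.g., access frequency, dwell time, switches
    y = (X[:, 0] > 0.5).astype(int)   # toy labels: 0 = compensatory, 1 = frugal

    # Train on 150 participants, report held-out classification accuracy.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:150], y[:150])
    print(clf.score(X[150:], y[150:]))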
Article
Full-text available
An important way to develop models in psychology and cognitive science is to express them as computer programs. However, computational modeling is not an easy task. To address this issue, some have proposed using artificial-intelligence (AI) techniques, such as genetic programming (GP) to semiautomatically generate models. In this paper, we establish whether models used to generate data can be recovered when GP evolves models accounting for such data. As an example, we use an experiment from decision-making which addresses a central question in decision-making research, namely, to understand what strategy, or “policy,” agents adopt in order to make a choice. In decision-making, this often means understanding the policy that best explains the distribution of choices and/or reaction times of two-alternative forced-choice decisions. We generated data from three models using different psychologically plausible policies and then evaluated the ability and extent of GP to correctly identify the true generating model among the class of virtually infinite candidate models. Our results show that, regardless of the complexity of the policy, GP can correctly identify the true generating process. Given these results, we discuss implications for cognitive science research and computational scientific discovery as well as possible future applications.
Research Proposal
Strategic decision-making is associated with the past, present and future. The past and present can be analysed; the future, however, is unpredictable and uncertain and cannot necessarily be quantified as measurable risk. The aim of this research is to show how MNEs could construct Strategic Business Intelligence Competencies that will serve as a basis for achieving a progressive management approach and provide an informed strategic direction, with the purpose of anticipating and leading change rather than being led by change. The research will seek to delineate how firms could build their strategy and decision-making on strategic competitive intelligence rather than basic information and knowledge, how to take business intelligence beyond reporting, and how to use information as a strategic asset that enables firms to read the environment ahead of the competition, anticipate market shifts, control risk, tackle environmental uncertainties and create powerful competitive strategies to drive innovation and growth.
Thesis
In current decision research on probabilistic inference tasks, two competing paradigms have emerged. Multiple strategy models (MSM) assume that people hold different decision strategies in an adaptive toolbox and deploy them in an ecologically rational way, depending on environmental conditions. Evidence accumulation models (EAM), on the other hand, assume that people make decisions based on variable internal threshold values; several variables are discussed that could influence these thresholds, e.g., the confidence in a decision. Bröder et al. (2014) proposed the information intrusion paradigm, through which the two model classes can be tested empirically in probabilistic inference tasks without the need to specify concrete models: the stimulus environments are changed in such a way that the two paradigms make opposite predictions about measurable search, stopping, and decision behavior during information gathering. The Oil Drilling Task (Rieskamp & Otto, 2006) was adapted for this purpose. The first goal of this conceptual replication study is to replicate the findings of Söllner and Bröder (2016) that favor the EA models. The second goal is to manipulate the stimulus environment through information costs in order to generate and investigate adaptive use of decision strategies, implementing the variation in information costs within-subject instead of between-subject (Söllner & Bröder, 2016). The third goal is to extend the results of Söllner and Bröder (2016) by recording confidence in decisions as a further dependent variable, as suggested in the theoretical literature. The results indicate that participants systematically varied their search and stopping behavior in line with the predictions of the EA models and that the results of Söllner and Bröder (2016) could be replicated to a large extent. Individual threshold values could be estimated for a large part of the sample (75%). Adaptive use of different decision strategies could not be observed, owing to the low salience of the information costs. Nevertheless, the participants for whom a threshold value could be estimated adapted it to the cost conditions. In addition, confidence varied systematically with the validities and compatibilities of cues, so the assumption of variable threshold values regulated by changes in confidence appears plausible. These results indicate additional predictive power of the EA models, which is why a unified theory for probabilistic inference tasks should build on EA models in connection with confidence thresholds.
Article
Previous studies have shown that people often use heuristics in making inferences and that subjective memory experiences, such as recognition or familiarity of objects, can be valid cues for inferences. So far, many researchers have used the binary choice task in which two objects are presented as alternatives (e.g., “Which city has the larger population, city A or city B?”). However, objects can be presented not only as alternatives but also in a question (e.g., “Which country is city X in, country A or country B?”). In such a situation, people can make inferences based on the relationship between the object in the question and each object given as an alternative. In the present study, we call this type of task a “relationships‐comparison task.” We modeled the three inference strategies that people could apply to solve it (familiarity‐matching [FM; a new heuristic we propose in this study], familiarity heuristic [FH], and knowledge‐based inference [KI]) to examine people's inference processes. Through Studies 1, 2, and 3, we found that (a) people tended to rely on heuristics, and that FM (inferences based on similarity in familiarity between objects) well explained participants' inference patterns; (b) FM could work as an ecologically rational strategy for the relationships–comparison task since it could effectively reflect environmental structures, and that the use of FM could be highly replicable and robust; and (c) people could sometimes use a decision strategy like FM, even in their daily lives (consumer behaviors). The nature of the relationships–comparison task and human cognitive processes is discussed.
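The contrast between the first two strategies can be made explicit in code (our own reading of the abstract; the familiarity values and decision rules are illustrative only, and the knowledge-based strategy KI is omitted):

    def familiarity_matching(fx, fa, fb):
        # FM: choose the alternative whose familiarity is closest to the
        # familiarity of the object in the question.
        return "A" if abs(fx - fa) < abs(fx - fb) else "B"

    def familiarity_heuristic(fx, fa, fb):
        # FH: choose the more familiar alternative, ignoring the question object.
        return "A" if fa > fb else "B"

    print(familiarity_matching(0.4, 0.45, 0.9))   # -> A
    print(familiarity_heuristic(0.4, 0.45, 0.9))  # -> B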
Article
The combination of Web GIS (Geographic Information System) and Multicriteria Decision Analysis (MCDA) techniques offers an effective tool for collaborative spatial decision making. This study examines the relationship between group recommendations on alternatives and the information search behaviour of decision makers in a web-based GIS-MCDA for parking site selection in Tehran, Iran. Decision participants used a GIS-MCDA framework without and with access to group decisions/recommendations. Findings show that access to the group decision leads to a decrease in the proportion of information searched, the average time spent acquiring each piece of information, and the variability of information search per attribute in a parking site selection context.
Article
Full-text available
Adaptive actors must be able to use probabilities as decision weights. In a computerized multi-attribute task, the authors examined the decisions of children (5–6 years, n = 44; 9–10 y., n = 39) and adults (21–22 y., n = 31) in an environment that fosters the application of a weighted-additive strategy that uses probabilities as weights (WADD: choose option with highest sum of probability-value products). Applying a Bayesian outcome-based strategy classification procedure from adult research, we identified the utilization of WADD and several other strategies (lexicographic, equal weight, naïve Bayes, guessing, and saturated model) on the individual level. As expected based on theory, the prevalence of WADD-users in adults was high. In contrast, no preschoolers could be classified as users of probability-sensitive strategies. Nearly one-third of third-graders used probability-sensitive strategies.
Article
Full-text available
A common assumption of many established models for decision making is that information is searched according to some prespecified search rule. While the content of the information influences the termination of search, usually specified as a stopping rule, the direction of search is viewed as being independent of the valence of the retrieved information. We propose an extension to the parallel constraint satisfaction network model (iCodes: integrated coherence-based decision and search), which assumes—in contrast to prespecified search rules—that the valence of available information influences search of concealed information. Specifically, the model predicts an attraction search effect in that information search is directed toward the more attractive alternative given the available information. In 3 studies with participants choosing between two options based on partially revealed probabilistic information, the attraction search effect was consistently observed for environments with varying costs for information search although the magnitude of the effect decreased with decreasing monetary search costs. We also find the effect in reanalyses of 5 published studies. With iCodes, we propose a fully specified formal model and discuss implications for theory development within competing modeling frameworks.
Article
Proper methods for model comparisons require including reasonable models and all data. In an article in this issue, we highlight that excluding some models was common in parts of the literature supposedly supporting fast-and-frugal heuristics and show in comprehensive simulations and exemplified reanalyses of studies by Rieskamp that this can—under specific conditions—lead to wrong conclusions. In a comment in this issue, Rieskamp provides no conclusive arguments against these basic principles of proper methodology. Hence, the principles to include (i) all reasonable competing models and (ii) all data still stand and have to remain the standards for evaluating past and future empirical work. Rieskamp addresses the first of the two issues in a Bayesian analysis. He, however, violates the second principle by omitting data to be able to uphold his previous conclusion, which invalidates his results. We explain in more detail why exclusion of data leads to biased estimates in this context and provide a refined Bayesian analysis—of course based on all data—that underlines our previously raised concerns about the criticized studies. Instead of arguing against the two basic principles of model comparison, we recommend authors of affected studies to conduct proper reanalyses including all models and data.
Article
Full-text available
The boundedly rational “Take-The-Best” heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodical problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
Article
Full-text available
Research on people's confidence in their general knowledge has to date produced two fairly stable effects, many inconsistent results, and no comprehensive theory. We propose such a comprehensive framework, the theory of probabilistic mental models (PMM theory). The theory (a) explains both the overconfidence effect (mean confidence is higher than percentage of answers correct) and the hard–easy effect (overconfidence increases with item difficulty) reported in the literature and (b) predicts conditions under which both effects appear, disappear, or invert. In addition, (c) it predicts a new phenomenon, the confidence–frequency effect, a systematic difference between a judgment of confidence in a single event (i.e., that any given answer is correct) and a judgment of the frequency of correct answers in the long run. Two experiments are reported that support PMM theory by confirming these predictions, and several apparent anomalies reported in the literature are explained and integrated into the present framework.
Chapter
Full-text available
This chapter provides a general background to the studies of decision making addressed by the other contributors to the book. These contributions draw on a number of approaches, differentiated in terms of the aspects of the decision-making process that are the focus of attention and the theoretical and methodological approaches taken when conceptualizing and investigating the process. For instance, some contributions have investigated judgment, others choice, each adopting an approach that focuses primarily on the outcome of these activities or the nature of the cognitive processes that underlie them. In addition to identifying different approaches to the study of decision making, the potential of each approach for developing our understanding of the effects of time constraints on judgment and decision making is elaborated. This provides a broad coverage of issues, putting into perspective the different studies of judgment and decision making included in this volume.
Article
Full-text available
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following Herbert Simon's notion of satisficing, this chapter proposes a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast-and-frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, a competition was held between the satisficing "take-the-best" algorithm and various "rational" inference procedures (e.g., multiple regression). The take-the-best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
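A toy re-enactment of such a simulation competition (a synthetic environment with assumed cue validities, not the original simulation) comparing take-the-best with a compensatory unit-weight tally:

    import random

    random.seed(1)
    V = [0.9, 0.8, 0.7, 0.6]  # assumed cue validities, best cue first

    # Synthetic objects: binary cue profiles plus a noisy criterion value.
    objects = []
    for _ in range(30):
        cues = tuple(random.randint(0, 1) for _ in range(4))
        criterion = sum(v * c for v, c in zip(V, cues)) + random.gauss(0, 0.2)
        objects.append((cues, criterion))

    def ttb(a, b):
        # One-reason decision making: the first discriminating cue decides.
        for ca, cb in zip(a, b):
            if ca != cb:
                return 0 if ca > cb else 1
        return random.randint(0, 1)

    def tally(a, b):
        # Compensatory unit-weight rule: count positive cues on each side.
        return 0 if sum(a) > sum(b) else 1 if sum(b) > sum(a) else random.randint(0, 1)

    for rule in (ttb, tally):
        hits = total = 0
        for i, (a, ca) in enumerate(objects):
            for b, cb in objects[i + 1:]:
                total += 1
                hits += (ca > cb) == (rule(a, b) == 0)
        print(rule.__name__, round(hits / total, 2))  # proportion correct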
Article
Full-text available
The role of effort and accuracy in the adaptive use of decision processes is examined. A computer simulation using the concept of elementary information processes identified heuristic choice strategies that approximate the accuracy of normative procedures while saving substantial effort. However, no single heuristic did well across all task and context conditions. Of particular interest was the finding that under time constraints, several heuristics were more accurate than a truncated normative procedure. Using a process-tracing technique that monitors information acquisition behaviors, two experiments tested how closely the efficient processing patterns for a given decision problem identified by the simulation correspond to the actual processing behavior exhibited by subjects. People appear highly adaptive in responding to changes in the structure of the available alternatives and to the presence of time pressure. In general, actual behavior corresponded to the general patterns of efficient processing identified by the simulation. Finally, learning of effort and accuracy trade-offs are discussed.
Article
Full-text available
In multiattribute decision problems, the subject has to evaluate a number of alternatives with given values on a number of attributes, in order to arrive at some conclusion about the attractiveness or utility of these alternatives. The information processing procedure leading to a conclusion is called a decision strategy, and one of the main research topics in multiattribute decision research has been the extent to which these strategies follow compensatory principles. Judges are said to follow compensatory strategies when low values on some attributes are compensated for by high values on other attributes. In process tracing studies using the information board technique, descriptions of decision strategies are usually based on three indices of the information search process: variability of search, search pattern (Payne, 1976), and depth of search. Variability of search, defined as the standard deviation of the proportion of information searched per alternative, is considered to give an indication of the degree of compensation of a decision strategy, compensation being smaller as variability increases. In this article, we propose an alternative way for establishing the degree of compensation of decision strategies in information board studies. We argue that the degree of compensation depends on both variability of search and depth of search (the proportion of information searched), and that a valid compensation index has to be a multiplicative function of these two indices.
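A minimal sketch of these two indices and their product (the article argues for a multiplicative combination; its exact functional form may differ from the simple product used here, and the board data are invented):

    import statistics

    # Information board: rows = alternatives, columns = attributes;
    # 1 = cell opened during search, 0 = cell never opened (toy data).
    board = [
        [1, 1, 1, 1],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
    ]

    per_alt = [sum(row) / len(row) for row in board]  # proportion searched per alternative
    variability = statistics.pstdev(per_alt)          # variability of search
    depth = sum(per_alt) / len(per_alt)               # depth of search
    print(variability, depth, variability * depth)    # product as a rough index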
Article
Full-text available
This paper introduces a representation system for the description of decision alternatives and decision rules. Examples of these decision rules, classified according to their metric requirements (i.e., metric level, commensurability across dimensions, and lexicographic ordering) in the system, are given. A brief introduction to process tracing techniques is followed by a review of results reported in process tracing studies of decision making. The review shows that most decision problems are solved without a complete search of information, which shows that many of the algebraic models of decision making are inadequate. When the number of aspects of a decision situation is constant, an increase in the number of alternatives (and a corresponding decrease in the number of dimensions) leads to a greater number of investigated aspects. Verbal protocols showed that decision rules belonging to the different groups were used by decision makers when making a choice. It was concluded that process tracing data can be fruitfully used in studies of decision making but that such data do not release the researcher from the burden of constructing theories or models in relation to which the data must then be analyzed.
Article
Full-text available
Following Brunswik (1952), social judgement theorists have long relied on regression models to describe both an individual's judgements and the environment about which such judgements are made. However, social judgement theory is not synonymous with these compensatory, static, structural models. We compared the characterisations of physicians' judgements using a regression model with that of a non-compensatory process model (called fast and frugal). We found that both models fit the data equally well. Both models suggest that physicians use few cues, that they disagree among themselves, and that their stated cue use is discrepant with the models' stated cue use. However, the fast and frugal model is easier to convey to physicians and is also more psychologically plausible. Implications for how social judgement theorists conceptualise the process of vicarious functioning are discussed.
Article
Full-text available
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures (e.g., multiple regression). The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
Chapter
Full-text available
Analyzes heuristics that draw inferences from information beyond mere recognition. The authors address how people make inferences, predictions, and decisions from a bundle of imperfect cues and signals. Three heuristics studied in the chapter are the Minimalist, Take The Last, and Take The Best. Findings indicate that fast and frugal heuristics that embody simple psychological mechanisms can yield inferences about a real-world environment that are at least as accurate as standard linear statistical strategies embodying classical properties of rational judgment.
Article
Full-text available
A review of the literature indicates that linear models are frequently used in situations in which decisions are made on the basis of multiple codable inputs. These models are sometimes used (a) normatively to aid the decision maker, (b) as a contrast with the decision maker in the clinical vs statistical controversy, (c) to represent the decision maker "paramorphically" and (d) to "bootstrap" the decision maker by replacing him with his representation. Examination of the contexts in which linear models have been successfully employed indicates that the contexts have the following structural characteristics in common: each input variable has a conditionally monotone relationship with the output; there is error of measurement; and deviations from optimal weighting do not make much practical difference. These characteristics ensure the success of linear models, which are so appropriate in such contexts that random linear models (i.e., models whose weights are randomly chosen except for sign) may perform quite well. 4 examples involving the prediction of such codable output variables as GPA and psychiatric diagnosis are analyzed in detail. In all 4 examples, random linear models yield predictions that are superior to those of human judges.
Chapter
Full-text available
Examined whether people use simple heuristics and discussed 2 possible approaches to identifying these strategies: process analysis and outcome analysis. The authors describe various decision strategies and outline an experiment in which participants had to choose among alternatives under low and high time pressure. The process- and outcome-oriented approaches are described and the experiment is used as an illustration for the methodological problems in identifying strategies. The authors review some studies that have investigated conditions that should have an impact on decision strategies, including time pressure. Evidence suggests that heuristics with a cue-wise information search can describe individuals' decision strategies for choice tasks. Results indicate that people indeed use smart and simple decision strategies.
Article
Full-text available
Reviews the literature showing the effects of task and context variables on decision behavior and evaluates alternative theories for handling task and context effects. These frameworks include (a) cost/benefit principles, (b) perceptual processes, and (c) adaptive production systems. Both the cost/benefit and perceptual frameworks are shown to have strong empirical support but unresolved conceptual problems. The production system framework has less direct support but has the desirable property of containing elements of both of the other frameworks. Research is discussed in terms of variables encountered by the decision maker: task complexity, response mode, information display, agenda effects, similarity of alternatives, and the quality of the option set.
Article
Full-text available
Bayesian statistics, a currently controversial viewpoint concerning statistical inference, is based on a definition of probability as a particular measure of the opinions of ideally consistent people. Statistical inference is modification of these opinions in the light of evidence, and Bayes' theorem specifies how such modifications should be made. The tools of Bayesian statistics include the theory of specific distributions and the principle of stable estimation, which specifies when actual prior opinions may be satisfactorily approximated by a uniform distribution. A common feature of many classical significance tests is that a sharp null hypothesis is compared with a diffuse alternative hypothesis. Often evidence which, for a Bayesian statistician, strikingly supports the null hypothesis leads to rejection of that hypothesis by standard classical procedures. The likelihood principle emphasized in Bayesian statistics implies, among other things, that the rules governing when data collection stops are irrelevant to data interpretation. It is entirely appropriate to collect data until a point has been proven or disproven, or until the data collector runs out of time, money, or patience.
Article
Full-text available
The process of vicarious functioning, by which equivalent judgments can result from different patterns of cues, is central to any theory of judgment. It is argued that both linear regression and process-tracing models capture the various aspects of vicarious functioning: the former by dealing with the ambiguities that the organism faces with regard to the substitutions and trade-offs between cues in a redundant environment, and the latter by dealing with cue search and attention. Furthermore, although the surface structures and levels of detail of the 2 models are different, it is shown that process-tracing protocols can be generated via a general additive rule. Therefore, both types of models can be capturing the same underlying process, although at different levels of generality. Two experiments in which both models are built and tested on the same data are presented. In Exp I, experienced MMPI users made diagnostic judgments of the degree of adjustment/maladjustment from MMPI profiles; in Exp II, one subject evaluated the nutritional quality of breakfast cereals. Results are discussed with respect to (a) links between judgment, choice, and task structure; (b) rule generality and awareness; and (c) advantages of a multimethod approach.
Article
Full-text available
Reveals that, under specified experimental conditions, consistent and predictable intransitivities can be demonstrated. The conditions under which intransitivities occur and their relationships to the structure of the alternatives and to processing strategies are investigated within the framework of a general theory of choice. Implications for the study of preference and the psychology of choice are discussed.
Article
Full-text available
This article presents a detailed discussion and application of a methodology, called multinomial modeling, that can be used to measure and study cognitive processes. Multinomial modeling is a statistically based technique that involves estimating hypothetical parameters that represent the probabilities of unobservable cognitive events. Models in this class provide a statistical methodology that is compatible with computational theories of cognition. Multinomial models are relatively uncomplicated, do not require advanced mathematical techniques, and have certain advantages over other, more traditional methods for studying cognitive processes. The statistical methodology behind multinomial modeling is briefly discussed, including procedures for data collection, model development, parameter estimation, and hypothesis testing. Three substantive examples of multinomial modeling are presented. Each example, taken from a different area within the field of human memory, involves the development of a multinomial model and its application to a specific experiment. It is shown how multinomial models facilitate the interpretation of the experiments. The conclusion discusses the general advantages of multinomial models and their potential application as research tools for the study of cognitive processes.
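As a concrete taste of the approach (a standard textbook multinomial model, the one-high-threshold recognition model, not an example drawn from the article itself):

    # P(hit) = r + (1 - r) * g and P(false alarm) = g, where r is the detection
    # probability and g the probability of guessing "old"; counts are hypothetical.
    hits, misses = 75, 25     # responses to 100 old items
    fas, crs = 30, 70         # responses to 100 new items

    g = fas / (fas + crs)                        # ML estimate of guessing
    r = (hits / (hits + misses) - g) / (1 - g)   # ML estimate of detection
    print(round(r, 3), round(g, 3))              # -> 0.643 0.3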
Article
Full-text available
Shows that the assumption of simple scalability in most probabilistic analyses of choice is inadequate on both theoretical and experimental grounds. A more general theory of choice based on a covert elimination process is developed in which each alternative is viewed as a set of aspects. At each stage in the process, an aspect is selected (with probability proportional to its weight), and all the alternatives that do not include the selected aspect are eliminated. The process continues until all alternatives but 1 are eliminated. It is shown that this model (a) is expressible purely in terms of the choice alternatives without any reference to specific aspects, (b) can be tested using observable choice probabilities, and (c) generalizes the choice models of R. D. Luce and of F. Restle. Empirical support from a study of psychophysical and preferential judgments is presented. Strategic implications and the logic of elimination by aspects are discussed.
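The covert elimination process lends itself to a short simulation (aspects, weights, and alternatives below are invented for illustration):

    import random

    alternatives = {"A": {"cheap", "fast"}, "B": {"cheap", "quiet"}, "C": {"fast"}}
    weights = {"cheap": 3.0, "fast": 2.0, "quiet": 1.0}

    def eba(alts, weights):
        # Elimination by aspects: pick an aspect with probability proportional
        # to its weight, drop alternatives lacking it, repeat until one is left.
        alts = {k: set(v) for k, v in alts.items()}
        while len(alts) > 1:
            live = {a for s in alts.values() for a in s}
            # Aspects shared by every remaining alternative cannot discriminate.
            useful = [a for a in live if any(a not in s for s in alts.values())]
            if not useful:
                return random.choice(list(alts))
            pick = random.choices(useful, weights=[weights[a] for a in useful])[0]
            alts = {k: s for k, s in alts.items() if pick in s}
        return next(iter(alts))

    print(eba(alternatives, weights))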
Article
Egon Brunswik is one of the most brilliant, creative and least understood and appreciated psychologists/philosophers of the 20th century. This book presents a collection of Brunswik’s most important papers together with interpretive comments by prominent scholars who explain the intent and development of his thought. This collection and the accompanying diverse examples of the application of his ideas will encourage a deeper understanding of Brunswik in the 21st century than was the case in the 20th century. The 21st century already shows signs of acceptance of Brunswikian thought with the appearance of psychologists with a different focus; emulation of physical science is of less importance, and positive contributions toward understanding behavior outside the laboratory without abandoning rigor are claiming more notice. As a result, Brunswik’s theoretical and methodological views are already gaining the attention denied them in the 20th century. The plan of this book is to provide, for the first time, in one place the articles that show the origins of his thought, with all their imaginative and creative spirit, as well as thoughtful, scholarly interpretations of the development, meaning and application of his ideas to modern psychology. Thus, his views will become more understandable and more widely disseminated, as well as advanced through the fresh meaning given to them by the psychologists of the 21st century.
Book
The Adaptive Decision Maker argues that people use a variety of strategies to make judgments and choices. The authors introduce a model that shows how decision makers balance effort and accuracy considerations and predicts which strategy a person will use in a given situation. A series of experiments testing the model are presented, and the authors analyse how the model can lead to improved decisions and opportunities for further research.
Article
A model hypothesizes why decision makers choose different decision strategies in dealing with different decision problems. Given the motivation to choose the strategy which requires the least investment for a satisfactory solution, the choice of strategy should depend upon the type of problem, the surrounding environment, and the personal characteristics of the decision maker.
Article
Quantitative theories with free parameters often gain credence when they closely fit data. This is a mistake. A good fit reveals nothing about the flexibility of the theory (how much it cannot fit), the variability of the data (how firmly the data rule out what the theory cannot fit), or the likelihood of other outcomes (perhaps the theory could have fit any plausible result), and a reader needs all 3 pieces of information to decide how much the fit should increase belief in the theory. The use of good fits as evidence is not supported by philosophers of science nor by the history of psychology; there seem to be no examples of a theory supported mainly by good fits that has led to demonstrable progress. A better way to test a theory with free parameters is to determine how the theory constrains possible outcomes (i.e., what it predicts), assess how firmly actual outcomes agree with those constraints, and determine if plausible alternative outcomes would have been inconsistent with the theory, allowing for the variability of the data.
Chapter
This chapter highlights judgment analysis (JA). It describes the steps necessary to apply JA and provides guidelines for making the numerous decisions required for a successful application. JA, also known as "policy capturing," is a research method that has been used in hundreds of studies of judgment and decision making, including studies of multiple cue probability learning, interpersonal learning, conflict, and expert judgment. It has been found useful as an aid to judgment, particularly in cases involving conflict about technical and political issues. The goal of JA is to describe, quantitatively, the relations between someone's judgment and the information, or "cues," used to make that judgment. JA has also often been used to study judgment on unfamiliar tasks. For judgment analysis, as for all formal analytic methods, proper formulation of the problem is critical to the success of the analysis. Designing the judgment task includes (1) defining the judgment of interest, (2) identifying the cues, (3) describing the context for judgment, (4) defining the distributions of the cue values, and (5) defining relations among the cues.
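Because JA describes the judge's policy quantitatively, the standard implementation is a regression of judgments on cue values. The sketch below recovers relative cue weights from invented data; the cue structure, sample size, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical judgment task: 50 profiles described by three cues.
cues = rng.normal(size=(50, 3))
true_policy = np.array([0.6, 0.3, 0.1])  # the judge's (unknown) cue weights
judgments = cues @ true_policy + rng.normal(scale=0.1, size=50)

# Policy capturing: regress the judgments on the cue values.
X = np.column_stack([np.ones(len(cues)), cues])  # add an intercept column
beta, *_ = np.linalg.lstsq(X, judgments, rcond=None)
print("recovered cue weights:", np.round(beta[1:], 2))
```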
Article
For decades, the dominant paradigm for studying decision making, the expected utility framework, has been burdened by an increasing number of empirical findings that question its validity as a model of human cognition and behavior. However, as Kuhn (1962) argued in his seminal discussion of paradigm shifts, an old paradigm cannot be abandoned until a new paradigm emerges to replace it. In this article, we argue that the recent shift in researcher attention toward basic cognitive processes that give rise to decision phenomena constitutes the beginning of that replacement paradigm. Models grounded in basic perceptual, attentional, memory, and aggregation processes have begun to proliferate. The development of this new approach closely aligns with Kuhn's notion of paradigm shift, suggesting that this is a particularly generative and revolutionary time to be studying decision science.
Article
The cognitive “revolution” in psychology introduced a new concept of explanation and somewhat novel methods of gathering and interpreting evidence. These innovations assume that it is essential to explain complex phenomena at several levels, symbolic as well as physiological; complementary, not competitive. As with the other sciences, such complementarity makes possible a comprehensive and unified experimental psychology. Contemporary cognitive psychology also introduced complementarity of another kind, drawing upon, and drawing together, both the behaviorist and the Gestalt traditions.
Article
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection, and accounting for model uncertainty is presented. Implementing this is straightforward through the use of the simple and accurate BIC approximation, and it can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P-values and standard model selection procedures based on them. It also allows easy comparison of nonnested models, and permits the quantification of the evidence for a null hypothesis of interest, such as a convergence theory or a hypothesis about societal norms.
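For linear regression, the BIC approximation can be computed directly from least-squares output. The following sketch compares a smaller and a larger regression model on invented data, using the common Gaussian-likelihood form BIC = n ln(RSS/n) + k ln(n); the data and variable names are assumptions.

```python
import numpy as np

def bic_linear(y, X):
    """BIC for a Gaussian linear model fit by least squares:
    n * ln(RSS / n) + k * ln(n), with k fitted coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.5 * x1 + rng.normal(size=n)  # x2 is actually irrelevant

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])
delta = bic_linear(y, X_big) - bic_linear(y, X_small)
# A positive difference favors the smaller model; its magnitude is roughly
# twice the log Bayes factor between the two models.
print(f"BIC(big) - BIC(small) = {delta:.2f}")
```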
Article
In this article we describe research methods that are used for the study of individual multiattribute evaluation processes. First we explain that a multiattribute evaluation problem involves the evaluation of a set of alternatives, described by their values on a number of attributes. We discuss a number of evaluation strategies that may be applied to arrive at a conclusion about the attractiveness or suitability of the alternatives, and then introduce two main research paradigms in this area, structural modelling and process tracing. We argue that the techniques developed within these paradigms all have their advantages and disadvantages, and conclude that the most promising technique to detect the true nature of the evaluation strategy used by a judge seems to be the analysis of verbal protocols. At the same time we think it is wise not to rely on just one technique, but to use a multimethod approach to the study of multiattribute evaluation processes whenever that is possible.
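To fix ideas, here is a minimal sketch of two such evaluation strategies applied to a small alternatives-by-attributes matrix; the values, weights, and variable names are invented for illustration.

```python
import numpy as np

# Invented alternatives-by-attributes matrix (rows are alternatives).
values = np.array([[7.0, 4.0, 9.0],
                   [6.0, 8.0, 5.0],
                   [9.0, 3.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])  # invented attribute weights

wadd = values @ weights    # weighted additive: compensatory
eqw = values.mean(axis=1)  # equal weight rule: ignores differential weights

print("WADD ranking:", np.argsort(-wadd))  # best alternative first
print("EQW ranking :", np.argsort(-eqw))   # the two strategies can disagree
```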
Article
This Thurstone (1927a) article developed a representational measurement model of comparative judgment; estimated discrimination probabilities yield scale values that imply values of other probabilities not yet observed, if the model provides a true representation. In practice, the accuracy of such inferences is captured by "goodness-of-fit" statistics. The specific representational measurement model developed can yield magnitude measurement on psychological dimensions for which no corresponding physical dimensions exist (e.g., favorability of "attitude toward"). This revolutionary article led to the development of many other representational measurement models. As opposed to psychophysics, however, the introduction of "true measurement" in social, attitudinal, and personality psychology did not yield the rapid progress Thurstone envisioned, and currently this specific model is seldom used in these areas.
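As an illustration of how estimated discrimination probabilities yield scale values, the sketch below applies the classic Case V solution (equal discriminal dispersions, an assumption not stated in the abstract) to an invented pairwise choice-probability matrix: z-transformed probabilities estimate scale differences, and row means give the scale values.

```python
import numpy as np
from scipy.stats import norm

# Invented pairwise choice proportions: P[i, j] is the proportion of
# comparisons in which stimulus i was judged greater than stimulus j.
P = np.array([[0.50, 0.73, 0.85],
              [0.27, 0.50, 0.69],
              [0.15, 0.31, 0.50]])

# Case V: Phi^{-1}(P[i, j]) estimates s_i - s_j, so the row means of the
# z-matrix estimate the scale values (centered at zero).
Z = norm.ppf(P)
scale_values = Z.mean(axis=1)
print("estimated scale values:", np.round(scale_values, 2))
```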
Article
Proper linear models are those in which predictor variables are given weights such that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in P. Meehl's (1954) book on clinical vs statistical prediction and research stimulated in part by that book indicate that when a numerical criterion variable (e.g., graduate GPA) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method. The present article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The application of unit weights to decide what bullet the Denver Police Department should use is described; some technical, psychological, and ethical resistances to using linear models in making social decisions are considered; and arguments that could weaken these resistances are presented.
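A small simulation in the spirit of this result, with all details invented: "proper" regression weights estimated on a modest calibration sample are cross-validated against fixed unit weights; with limited data, the unit-weight composite typically predicts about as well.

```python
import numpy as np

rng = np.random.default_rng(11)
true_w = np.array([0.5, 0.35, 0.25])  # invented population weights

def sample(n):
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + rng.normal(size=n)

proper_r, unit_r = [], []
for _ in range(500):
    Xtr, ytr = sample(30)    # small calibration sample
    Xte, yte = sample(1000)  # fresh validation sample
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)  # "proper" weights
    proper_r.append(np.corrcoef(Xte @ beta, yte)[0, 1])
    unit_r.append(np.corrcoef(Xte.sum(axis=1), yte)[0, 1])  # unit weights

print(f"proper weights: mean cross-validated r = {np.mean(proper_r):.3f}")
print(f"unit weights  : mean cross-validated r = {np.mean(unit_r):.3f}")
```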
Article
Provides an overview of the history of heuristics and visions of rationality and discusses a new revolution under the premise that much of human reasoning and decision making can be modeled by fast and frugal heuristics that make inferences with limited time and knowledge. The authors discuss a research program that is designed to elucidate 3 distinct but interconnected aspects of rationality: bounded rationality, ecological rationality, and social rationality.
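Take The Best, the paradigm case of such a heuristic and the strategy assessed in the present article, can be sketched for the two-alternative case as follows; the cue values, validities, and function name are illustrative, and the recognition step of the full heuristic is omitted.

```python
def take_the_best(cues_a, cues_b, validities):
    """Take The Best for a pair of options: search cues in order of
    validity and decide by the first cue that discriminates.

    cues_a, cues_b: cue values per option (1, 0, or None for unknown)
    validities: cue validities, used only to order the search
    """
    order = sorted(range(len(validities)),
                   key=lambda i: validities[i], reverse=True)
    for i in order:
        a, b = cues_a[i], cues_b[i]
        if a is None or b is None or a == b:
            continue  # this cue does not discriminate; try the next one
        return "A" if a > b else "B"  # first discriminating cue decides
    return "guess"  # no cue discriminates

# Invented example: the second cue is the first to discriminate.
print(take_the_best([1, 0, 1], [1, 1, 0], [0.9, 0.8, 0.7]))  # -> B
```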