Article

Procedural debiasing

Authors:
Lola L. Lopes

Abstract

This paper presents two experiments that illustrate how Bayesian inferences can be debiased by analyzing and correcting the cognitive procedures that lead to the biases. In the first experiment, a training procedure is used that corrects a common error in the adjustment process used by subjects when integrating evidence. In the second experiment, a focusing technique is used to improve the relative weighting of samples in the overall judgment. These results are discussed in terms of a model of the judgment process comprising four basic stages: (a) initial scanning of stimulus information; (b) selection of items for processing in order of importance; (c) extraction of scale values on the judgment dimension; and (d) adjustment of a composite value that summarizes already-processed components.
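To make the contrast at the heart of the paper concrete, here is a minimal Python sketch contrasting normative Bayesian updating with an averaging-style adjustment process. The two-hypothesis task, the likelihood ratios, and the 0.5 adjustment weight are illustrative assumptions, not stimuli or parameters from the paper.

```python
# Minimal sketch (illustrative numbers): Bayesian updating vs. an
# averaging-style adjustment of the kind the paper aims to correct.
# Task: two hypotheses H1, H2; each sample carries a known likelihood ratio.

def bayes_posterior(prior, likelihood_ratios):
    """Sequential Bayesian updating on the odds scale."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:   # lr = P(sample | H1) / P(sample | H2)
        odds *= lr
    return odds / (1 + odds)

def averaging_adjustment(prior, sample_values, weight=0.5):
    """Adjustment rule that averages the running judgment with each
    sample's scale value -- qualitatively an averaging process."""
    judgment = prior
    for v in sample_values:        # v = judged support for H1, in [0, 1]
        judgment = (1 - weight) * judgment + weight * v
    return judgment

# Two moderately diagnostic samples favouring H1 (assumed values).
print(bayes_posterior(0.5, [2.0, 2.0]))        # 0.8   -> evidence compounds
print(averaging_adjustment(0.5, [0.67, 0.67])) # ~0.63 -> conservative
```

The averaging rule can never push the judgment past the scale value of the most recent samples, which is one way of seeing why it produces the conservatism and direction errors discussed below.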


... Anderson, 1981; Shanteau, 1970) and procedural (e.g. Hogarth & Einhorn, 1992; Lopes, 1987) models of belief revision and information integration. In contrast with the influence of background beliefs, which can be argued to be implicitly Bayesian, the recency effect suggests that participants integrated that information in a manner that is at odds with the prescriptions of decision theory. ...
... First, they replicate the results of Experiment 1 where participants were found to be sensitive to the rarity of the evidential features when interpreting the information they received (as in Experiment 1 the size of the difference due to rarity is not as large as might have been expected from the results of the pre-test). Although the non-significance of the difference due to rarity in the confidence ratings of participants given an initial piece of information concerning the non-favoured category in this experiment is somewhat surprising, it may be due to uncertainty concerning which hypothesis the evidence supports (see Hogarth & Einhorn, 1992;Lopes, 1987). Participants told that 25% of the non-favoured category possessed a rare feature, or that 60% of the non-favoured category possessed a common feature, may have been uncertain as to which hypothesis the information supported. ...
Article
Full-text available
In this paper we argue that it is often adaptive to use one's background beliefs when interpreting information that, from a normative point of view, is incomplete. In both of the experiments reported here participants were presented with an item possessing two features and were asked to judge, in the light of some evidence concerning the features, to which of two categories it was more likely that the item belonged. It was found that when participants received evidence relevant to just one of these hypothesised categories (i.e. evidence that did not form a Bayesian likelihood ratio) they used their background beliefs to interpret this information. In Experiment 2, on the other hand, participants behaved in a broadly Bayesian manner when the evidence they received constituted a completed likelihood ratio. We discuss the circumstances under which participants, when making their judgements, consider the alternative hypothesis. We conclude with a discussion of the implications of our results for an understanding of hypothesis testing, belief revision, and categorisation.
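The normative point about incomplete evidence can be shown with a worked sketch: a feature probability under one category is uninterpretable until the corresponding probability under the alternative is supplied. All numbers below are hypothetical.

```python
# Hypothetical sketch: evidence about only one category does not form
# a Bayesian likelihood ratio. P(feature | A) alone is not diagnostic
# until P(feature | B) is supplied.

def posterior_A(prior_A, p_feature_given_A, p_feature_given_B):
    num = p_feature_given_A * prior_A
    return num / (num + p_feature_given_B * (1 - prior_A))

# "60% of category A members have the feature" supports A strongly,
# not at all, or even negatively, depending on the unstated alternative:
for p_B in (0.1, 0.6, 0.9):
    print(p_B, round(posterior_A(0.5, 0.6, p_B), 2))
# 0.1 -> 0.86, 0.6 -> 0.5, 0.9 -> 0.4
```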
... In practice this meant that the expert's opinion which should have supported the prosecution's case was interpreted as supporting the defense case by a clear majority of participants in the low strength verbal conditions. Although not previously unknown [10,36-38], and to some extent context dependent [34], this inversion of the valence of the opinions of forensic scientists is somewhat concerning. Specifically, these weak evidence effects are of concern not only because they inaccurately reflect the valence of the expert's opinion, but also because of the stated belief that verbal expressions of evidence should be used by forensic science experts because they provide the most appropriate basis for communication [6]. ...
... Similarly, the averaging [37] and expectancy violation [39] accounts predict weak evidence effects as a result of mistakenly combining (averaging) our prior beliefs, or through a violation of our initial (high) expectations of evidence strength with the addition of (low strength) evidence, causing a final belief that is lower than would be predicted from a normative belief-updating perspective. Yet as with the neglect account above, the different evidentiary presentation styles are not expected to systematically vary with regard to participants' prior beliefs or their expectations regarding the evidence strength; therefore, it is not possible to derive predictions regarding the presence or absence of weak evidence effects in the current study from these theories. ...
Article
Likelihood ratios are increasingly being adopted to convey expert evaluative opinions to courts. In the absence of appropriate databases, many of these likelihood ratios will include verbal rather than numerical estimates of the support offered by the analysis. However, evidence suggests that verbal formulations of uncertainty are a less effective form of communication than equivalent numerical formulations. Moreover, when evidence strength is low, a misinterpretation of the valence of the evidence - a "weak evidence effect" - has been found. We report the results of an experiment involving N=404 (student and online) participants who read a brief summary of a burglary trial containing expert testimony. The expert evidence was varied across conditions in terms of evidence strength (low or high) and presentation method (numerical, verbal, table or visual scale). Results suggest that of these presentation methods, numerical expressions produce belief change and implicit likelihood ratios that were most commensurate with those intended by the expert and most resistant to the weak evidence effect. These findings raise questions about the extent to which low strength verbal evaluative opinions can be effectively communicated to decision makers at trial.
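For reference, the normative benchmark assumed in this line of work is the odds form of Bayes' theorem: posterior odds = prior odds × likelihood ratio. A minimal sketch with invented values:

```python
# Sketch of the normative benchmark (invented values): posterior odds
# equal prior odds multiplied by the expert's likelihood ratio.

def update_odds(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.2                               # belief in guilt before the expert
print(round(update_odds(prior, 10), 2))   # high-strength evidence, LR = 10 -> 0.71
print(round(update_odds(prior, 1.5), 2))  # low-strength evidence, LR = 1.5 -> 0.27
# Both LRs exceed 1, so belief should rise in both cases; the "weak
# evidence effect" is the empirical finding that it sometimes falls instead.
```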
... Because in real life probabilities are rather noisy and hence rarely known exactly, this property is crucial. Thus, this model not only incorporates the well-established assumption that controlled human judgment entails weighting and adding processes (Anderson, 1981, 1996; Hogarth & Einhorn, 1992; Juslin et al., 2008; Lopes, 1985, 1987; Roussel, Fayol, & Barrouillet, 2002; Shanteau, 1970, 1972, 1975) but also leads to good choices. Even though this model overestimates probabilities, it succeeds in rank ordering probabilities accurately in noisy environments. ...
... Besides the fact that the configural weighted average model can explain a range of phenomena in the conjunction effect literature, the hypothesis that people combine probabilities by a configural weighted average is interesting for two reasons. First, it is consistent with a tradition of research indicating that people's controlled judgment is guided by weighting and adding processes (Anderson, 1981, 1996; Hogarth & Einhorn, 1992; Juslin et al., 2008; Lopes, 1985, 1987; Roussel et al., 2002; Shanteau, 1970, 1972, 1975), which can cause effects such as the dilution effect (e.g., Jenny et al., 2013) and base-rate neglect (Juslin et al., 2011). Second, when conjunctive probabilities are based on experienced noisy samples, a weighting and adding process can limit the effect of noise and lead to good judgments and decision returns. ...
Article
Full-text available
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2 experiments, participants had to repeatedly choose between pairs of 2 conjunctive events (represented as 2 gambles). To estimate the probability that both events occur, they had access to a small sample of information. The 1st experiment consisted of a balanced set of gambles, whereas in the 2nd experiment, the gambles were constructed such that the models maximally differed in their predictions. A hierarchical Bayesian approach used for estimating the models' parameters and for testing the models against each other showed that the majority of participants were best described by the configural weighted average model. This model performed best in predicting people's choices, and it assumes that constituent probabilities are ranked by importance, weighted accordingly, and added up. The cognitive modeling approach provides an understanding of the cognitive processes underlying people's conjunctive probability judgments.
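A minimal sketch of the winning model's core idea, with illustrative (not fitted) weights: rank the constituent probabilities, weight the more important (lower) one more heavily, and add.

```python
# Minimal sketch of a configural weighted average for a conjunction
# p(A and B): weight the lower constituent more heavily and add.
# The 0.6/0.4 weights are illustrative, not fitted values.

def configural_weighted_average(p_a, p_b, w_low=0.6):
    low, high = min(p_a, p_b), max(p_a, p_b)
    return w_low * low + (1 - w_low) * high

p_a, p_b = 0.8, 0.3
print(configural_weighted_average(p_a, p_b))  # 0.5
print(p_a * p_b)                              # 0.24 (normative product)
# The averaging estimate exceeds the product, reproducing the typical
# overestimation of conjunctive probabilities.
```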
... Procedural debiasing. The term procedural debiasing was first introduced by Lopes (1987), whose work aimed to modify the cognitive procedures within the judge (or individual decision-maker) to debias judgements. Here, however, procedural debiasing pertains to strategies that improve the nature of tasks in order to fit human cognition for optimal judgement or decision outcomes. ...
Article
Full-text available
Objective: To review and synthesise research on technological debiasing strategies across domains, present a novel distributed cognition-based classification system, and discuss theoretical implications for the field. Background: Distributed cognition theory is valuable for understanding and mitigating cognitive biases in high-stakes settings where sensemaking and problem-solving are contingent upon information representations and flows in the decision environment. Shifting the focus of debiasing from individuals to systems, technological debiasing strategies involve designing system components to minimise the negative impacts of cognitive bias on performance. To integrate these strategies into real-world practices effectively, it is imperative to clarify the current state of evidence and types of strategies utilised. Methods: We conducted systematic searches across six databases. Following screening and data charting, identified strategies were classified into (i) group composition and structure, (ii) information design and (iii) procedural debiasing, based on distributed cognition principles, and cognitive biases were classified into eight categories. Results: Eighty articles met the inclusion criteria, addressing 100 debiasing investigations and 91 cognitive biases. A majority (80%) of the identified debiasing strategies were reportedly effective, whereas fourteen were ineffective and six were partially effective. Information design strategies were studied most, followed by procedural debiasing, and group structure and composition. Gaps and directions for future work are discussed. Conclusion: Through the lens of distributed cognition theory, technological debiasing represents a reconceptualisation of cognitive bias mitigation, showing promise for real-world application. Application: The study results and debiasing classification presented can inform the design of high-stakes work systems to support cognition and minimise judgement errors.
... While weak evidence effects in particular (Fernbach, Darlow, & Sloman, 2011) and more general directional effects (Lopes, 1987) have been observed fairly regularly in other contexts, these effects have been given relatively little attention in the context of statistical statements from forensic scientists. Smith et al. (1996) observed that a small number of participants reduced their guilt assessments in light of incriminating statistical statements and Thompson and Newman (2015) noted that a small number of participants (7.6% in the log scale condition and 6.4% in the odds conditions) treated incriminating evidence as exculpatory. ...
Preprint
Ultimately, the question of whether lay decision-makers appropriately comprehend statistical statements from forensic scientists is an important one, but its answer remains elusive. Irrespective of legal interest in empirically established understanding, cognitive and forensic scientists interested in just trial outcomes can and should continue collaborating to optimize the communication of uncertainty in the high-stakes forensic decision-making environment. The evidence gathered is valuable for fueling discussion and debate surrounding justice reforms and is necessary for improved communication between practitioners and decision makers. Knowledge means little if it can’t be shared (Howes, 2015).
... The sequential representation leads quite naturally to a representation of the same kind, but more systematic and more abstract: an algorithmic representation. Lola Lopes (1987) proposed this type of representation in a study on estimating the probability that an assembly line was damaged, based on the discovery of defective parts in two separate inspections. Having an algorithm that simulates people's judgment process seemed a good starting point for making them aware of their errors and helping them improve the way they judge. ...
... It is a simple question, but the simplicity belies numerous complexities in our understanding of how mere presentation biases, such as the order in which arguments are presented in a persuasive appeal, can have a substantial influence on persuasive efficacy (Lana, 1961; Miller & Campbell, 1959; Schultz, 1963). Though this work traditionally used sequential messages and two-sided appeals, these same complexities are found when argument order varies within the same appeal (i.e., one-sided messages), as some research on one-sided messages finds that starting strong is most persuasive (Fernbach et al., 2011; Igou & Bless, 2003; Unnava et al., 1994) and other research finds that ending strong is most persuasive (Krosnick et al., 1990; Lopes, 1985, 1987). ...
Article
Full-text available
Should persuasion start strong or end strong? Though persuasion researchers have long known that the order in which the same arguments are presented can influence the efficacy of an appeal, much less is known about the factors that determine optimal argument order. In this paper, we propose that consumers hold expectations regarding the order in which arguments are most effectively presented—expectations grounded in lay beliefs regarding message recipients’ capacity to attend to the persuasive appeal. However, we predict that messages that violate these expectations invoke greater processing and thus generate greater persuasion in the form of more favorable intentions toward the target product. We present three experiments in support of these hypotheses and thereby demonstrate the importance of consumers’ expectations about the structure of one-sided advertisements in determining the efficacy of different argument orders.
... Consider the weak evidence effect (Fernbach et al., 2011;Lopes, 1987;McKenzie et al., 2002) or boomerang effect (Petty, 2018), a striking case of non-monotonic belief updating where weak evidence in favor of a particular conclusion may backfire and actually reduce an individual's belief in that conclusion. For example, suppose a juror is determining the guilt of a defendant in court. ...
Article
Full-text available
Language is not only used for neutral information; we often seek to persuade by arguing in favor of a particular view. Persuasion raises a number of challenges for classical accounts of belief updating, as information cannot be taken at face value. How should listeners account for a speaker’s “hidden agenda” when incorporating new information? Here, we extend recent probabilistic models of recursive social reasoning to allow for persuasive goals and show that our model provides a pragmatic account for why weakly favorable arguments may backfire, a phenomenon known as the weak evidence effect. Critically, this model predicts a systematic relationship between belief updates and expectations about the information source: weak evidence should only backfire when speakers are expected to act under persuasive goals and prefer the strongest evidence. We introduce a simple experimental paradigm called the Stick Contest to measure the extent to which the weak evidence effect depends on speaker expectations, and show that a pragmatic listener model accounts for the empirical data better than alternative models. Our findings suggest further avenues for rational models of social reasoning to illuminate classical decision-making phenomena.
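The pragmatic logic can be illustrated with a stripped-down sketch. This is not the authors' Stick Contest model, and all probabilities below are invented: the point is only that if a persuasive speaker is expected to present the strongest evidence available, the strength of the presented evidence becomes diagnostic in itself.

```python
# Stripped-down illustration (not the authors' model): if a persuasive
# speaker always presents the strongest argument available, then the
# strength presented is itself diagnostic, and weak evidence backfires.

# Assumed probability that the *best available* argument has a given
# strength, under each hypothesis:
p_best_given_H = {"weak": 0.1, "medium": 0.3, "strong": 0.6}
p_best_given_notH = {"weak": 0.5, "medium": 0.3, "strong": 0.2}

prior = 0.5

def posterior(strength):
    num = p_best_given_H[strength] * prior
    den = num + p_best_given_notH[strength] * (1 - prior)
    return num / den

for s in ("weak", "medium", "strong"):
    print(s, round(posterior(s), 2))
# weak -> 0.17: a weakly favourable argument *lowers* belief below the
# 0.5 prior, because it signals that nothing stronger was available.
```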
... Our model, however, also departs from the classic range principle by incorporating further assumptions about biases in perceiving the categories of gains and losses that originate from principles of perception that are well-established in psychophysics. Namely, we assume that (1) attention is allocated differentially to the sub-ranges of gains and losses by different decision makers (Epley & Gilovich, 2006; Lopes, 1987b); (2) a reference or anchoring bias of the uncertain gamble biases the decision maker in choosing between sure and uncertain gambles (Krueger, 1984). We also assume that the sensitivity in perceiving gains and losses is systematically affected by: (3) the overall magnitude of the offers, which scales the decision value over and above perceived range (Khaw et al., 2020); and (4) noise that biases the perceived size of gain/loss categories. ...
... Although the self-reflexivity may be shown to generate a paradox in certain unusual circumstances, since there is no danger of succumbing to it in the vast majority of real research contexts, it can be safely ignored. After all, there are well-known techniques for debiasing (Lopes, 1987) and there are experimental manipulations designed explicitly to remove certain cognitive biases and prevent them from influencing our judgments and reasoning processes (Larrick, 2004). Thus, practicing cognitive scientists have ways of ensuring that no paradox ensues when they are performing research on cognitive bias. ...
Article
Full-text available
Cognitive scientists claim to have discovered a large number of cognitive biases, which have a tendency to mislead reasoners. Might cognitive scientists themselves be subject to the very biases they purport to discover? And how should this alter the way they evaluate their research as evidence for the existence of these biases? In this paper, we posit a new paradox (the ‘Self-Reflexive Bias Paradox’), which bears a striking resemblance to some classical logical paradoxes. Suppose that research R appears to be good evidence for the existence of bias B, but if B exists, then R would have been subject to B. Thus, it seems sensible for the researcher to reject R as good evidence for the existence of B. However, rejecting R for this reason admits the existence of B. We examine four putative cognitive biases and criticisms of them, each of which seem to be subject to self-reflexivity. In two cases, we argue, paradox is avoidable. In the remaining two, we cannot find a way to avoid the paradox, which poses a practical obstacle to scientific inquiry and results in an intriguing theoretical quandary.
... Alternative approaches to belief updating are weighting-and-adding theories that have repeatedly been demonstrated to describe people's controlled judgments well (Anderson, 1981, 1996; Juslin et al., 2008; Lopes, 1985, 1987; Roussel et al., 2002; Shanteau, 1970, 1972, 1975), explain order effects (Hogarth & Einhorn, 1992), and, recently, describe conjunctive probability judgments (Jenny et al., 2014; Nilsson et al., 2009, 2013). Even the integration of sensory input from different modalities is assumed to happen through a weighting-and-adding process (Ernst & Bülthoff, 2004). ...
Article
Full-text available
People often take nondiagnostic information into account when revising their beliefs. A decrease in a probability judgment due to nondiagnostic information represents the well-established "dilution effect" observed in many domains. Surprisingly, the opposite of the dilution effect, called the "confirmation effect", has also been observed frequently. The present work provides a unified cognitive model that allows both effects to be explained simultaneously. The suggested similarity-updating model incorporates two psychological components: first, a similarity-based judgment inspired by categorization research, and second, a weighting-and-adding process with an adjustment following a similarity-based confirmation mechanism. Four experimental studies demonstrate the model's predictive accuracy for probability judgments and belief revision. The participants received a sample of information from one of two options and had to judge from which option the information came. The similarity-updating model predicts that the probability judgment is a function of the similarity of the sample to the options. When one is presented with a new sample, the previous probability judgment is updated with a second probability judgment by taking a weighted average of the two and adjusting the result according to a similarity-based confirmation. The model describes people's probability judgments well and outcompetes a Bayesian cognitive model and an alternative probability-theory-plus-noise model. The similarity-updating model accounts for several qualitative findings, namely, dilution effects, confirmation effects, order effects, and the finding that probability judgments are invariant to sample size. In sum, the similarity-updating model provides a plausible account of human probability judgment and belief revision.
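A hedged sketch of the kind of weighting-and-adding update described, with invented parameter values (the actual model's similarity computation is not reproduced here):

```python
# Hedged sketch of a weighting-and-adding update: average the previous
# judgment with the judgment implied by the new sample, then nudge the
# result when the new sample confirms the running belief.
# The weight and boost values are invented for illustration.

def update(prev, new, w=0.5, confirm_boost=0.05):
    avg = (1 - w) * prev + w * new
    # Similarity-based confirmation: if the new sample points the same
    # way as the current belief, adjust further in that direction.
    if (new - 0.5) * (prev - 0.5) > 0:
        avg += confirm_boost if prev > 0.5 else -confirm_boost
    return min(max(avg, 0.0), 1.0)

belief = 0.7
print(update(belief, 0.5))  # nondiagnostic sample: 0.6 -> dilution
print(update(belief, 0.8))  # confirming sample:    0.8 -> confirmation
```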
... By contrast, an "averaging" model makes the unequivocal prediction that adding relatively weak publications will always hurt a candidate because they will lower the average publication strength. Considerable evidence indicates that people often average when updating their opinions, regardless of whether the new information is quantitative or qualitative (Anderson, 1981; Lopes, 1985, 1987; Meyvis & Janiszewski, 2002; Nisbett, Zukier, & Lemley, 1981; Shanteau, 1970, 1972, 1975). For example, people have been found to average the information contained in a product's distinct attributes (Troutman & Shanteau, 1976) or in an individual's personality traits (Anderson & Alexander, 1971). ...
Article
Full-text available
Using psychology professors as participants, the present study investigates how publications in low-impact psychology journals affect evaluations of a hypothetical tenure-track psychology job applicant. Are "weak" publications treated as evidence for or against a candidate's ability? Two experiments revealed that an applicant was rated as stronger when several weak publications were added to several strong ones and was rated as weaker when the weak publications were removed. A third experiment showed that the additional weak publications were not merely viewed as a signal of additional strong publications in the future; instead, the weak publications themselves appear to be valued. In a fourth and final experiment, we found that adding a greater number of weak publications also strengthened the applicant, but not more so than adding just a few. The study further suggests that the weak publications may signal ability, as applicants with added weak publications were rated as both more hardworking and more likely to generate innovative research ideas. Advice for tenure-track psychology applicants: Do not hesitate to publish in even the weakest journals, as long as it does not keep you from publishing in strong journals. Implications of the market rewarding publications in low-impact journals are discussed.
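The competing predictions of adding and averaging are easy to verify with invented publication-strength scores:

```python
# Arithmetic sketch of the competing predictions (invented scores):
# an "adding" evaluator sums publication strengths; an "averaging"
# evaluator is hurt by any publication below the current mean.

strong = [9, 8, 8]   # strong publications, judged strength
weak = [3, 3]        # weak publications

adding_without, adding_with = sum(strong), sum(strong) + sum(weak)
avg_without = sum(strong) / len(strong)
avg_with = (sum(strong) + sum(weak)) / (len(strong) + len(weak))

print(adding_without, adding_with)                # 25 -> 31: weak pubs help
print(round(avg_without, 2), round(avg_with, 2))  # 8.33 -> 6.2: they hurt
```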
... However, we are not necessarily assuming that individuals are Bayesian problem-solvers. Nevertheless, some evidence has shown that individuals use intuitive averaging strategies (Hogarth & Einhorn, 1992; Lopes, 1987) that are associated with Bayesian responses (McKenzie, 1994). Future research may address whether this is the case with regard to our intuitive-rational respondents. ...
Article
Full-text available
Research has established that human thinking is often biased by intuitive judgement. The base-rate neglect effect provides such an example, so named because people often ground their decisions in stereotypical individuating information, neglecting base-rates. Here, we test the hypothesis that reasoners acknowledge information provided by base-rates and may use individuating information in support of a "rational" decision process. Results from four experiments show that "base-rate neglecting" occurs when participants acknowledge sample distributions; that participants who prefer individuating over base-rate information perceive base-rates as less diagnostic and are more confident in their individuating-based responses; and that posterior probabilities (assigned after all relevant information is considered) predict more individuating-based responses for individuating-preference participants (suggesting a rational process). However, the data also show a deeper form of base-rate neglect: even when some participants report preferring base-rate information, define individuating information as non-diagnostic, and their posterior probabilities suggest otherwise, they still provide individuating-based responses.
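For readers unfamiliar with the effect, the classic taxi-cab arithmetic (standard textbook numbers, not stimuli from these experiments) shows why ignoring the base rate is costly:

```python
# Worked example (classic taxi-cab numbers): individuating information
# ("the witness says the cab was blue") must be combined with the base
# rate of blue cabs, not used in place of it.

base_rate_blue = 0.15    # 15% of cabs are blue
witness_accuracy = 0.80  # witness identifies colours correctly 80% of the time

p_says_blue = (witness_accuracy * base_rate_blue
               + (1 - witness_accuracy) * (1 - base_rate_blue))
p_blue_given_says_blue = witness_accuracy * base_rate_blue / p_says_blue
print(round(p_blue_given_says_blue, 2))  # 0.41, well below the 0.80 that
# a base-rate-neglecting reasoner would report.
```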
... In summary, a process plan describes the cognitive/affective processes that are active during the solution of a particular judgement problem. Most, but not all, of the process stages have been described in different studies over the years, for example, by Lopes (1987). In the following, we will present the different strategies and describe how they can form a generic model for studies of quantitative judgement processes. ...
Article
Full-text available
This contribution presents a review and a theoretical process framework for human intuitive numerical judgments based on numerical information: the NJP model. The model is descriptive and includes one or several of the following stages, each consisting of information processing and solution strategies: (1) problem readings, (2) recognitions, (3) associations, (4) similarity assessments, (5) problem interpretations, (6) computations, (7) marker nominations, (8) start value selections and (9) adjustments. Three main types of strategies are used separately, in sequence or simultaneously with others in and across stages: (i) associative strategies, e.g., an answer is retrieved immediately; (ii) computational strategies, where different algorithms are applied to the information; and (iii) analogue strategies, using visual analogue representations, e.g., anchoring and adjustment. The paper concludes that a generic model of intuitive judgments will inspire further studies of the psychological processes activated when a judge makes an intuitive numerical judgment.
... Similarly, Lopes (1985) suggested that non-Bayesian behaviour might be less likely to occur in situations where stimuli were more clearly 'marked' in support of or against a given hypothesis. Lopes (1987) succeeded in improving the match between participants' responses and normative predictions in a belief revision experiment by instructing them to separate their judgments into two steps. First, participants labelled a piece of evidence as either favouring or countering a hypothesis, and then they made an estimate of how much it favoured one hypothesis over the other. ...
Article
Full-text available
Comparing the responses of participants in reasoning experiments to the normative standard of Bayes' Theorem has been a popular empirical approach for almost half a century. One longstanding finding is that people's belief revision is conservative with respect to the normative prescriptions of Bayes' Theorem; that is, beliefs are revised less than they should be. In this paper, we consider a novel explanation of conservatism, namely that participants do not perceive information provided to them in experiments as coming from a fully reliable source. From the Bayesian perspective, less reliable evidence should lead to more conservative belief revision. Thus, there may be less of a discrepancy between normative predictions and behavioural data than previously assumed.
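One way to formalize the authors' argument, with illustrative numbers: treat the report as genuine with some probability r and as uninformative noise otherwise, and the normatively correct revision shrinks toward the prior.

```python
# Sketch of the reliability argument (illustrative numbers): a partially
# reliable report mixes the stated evidence with uninformative noise,
# so the normatively correct revision is smaller -- i.e., "conservative".

def posterior(prior, p_e_h1, p_e_h2, reliability, p_noise=0.5):
    # With probability r the report reflects the evidence; with
    # probability 1 - r it is noise, equally likely under either hypothesis.
    like1 = reliability * p_e_h1 + (1 - reliability) * p_noise
    like2 = reliability * p_e_h2 + (1 - reliability) * p_noise
    odds = (prior / (1 - prior)) * (like1 / like2)
    return odds / (1 + odds)

print(round(posterior(0.5, 0.8, 0.2, 1.0), 2))  # fully reliable: 0.80
print(round(posterior(0.5, 0.8, 0.2, 0.5), 2))  # half reliable:  0.65
```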
... Lopes (1985) tentatively suggested that averaging might be less likely to occur in situations where stimuli were more clearly "marked" in support of or against a given hypothesis. Subsequently, Lopes (1987) succeeded in reducing participants' use of an averaging rule by instructing them to separate their judgements of belief updating into two steps, where the first required labelling a piece of evidence as either favouring or countering the hypothesis. We believe that our participants did not show the use of sub-optimal averaging strategies because the domain used is familiar to them and hence the evidence is subjectively well "marked" as to the hypothesis it supports. ...
Article
Full-text available
“Damned by faint praise” is the phenomenon whereby weak positive information leads to a negative change in belief. This seemingly conflicts with normative Bayesian predictions, which prescribe that positive information should only exert a positive change in belief. We argue that the negative belief change is due to an inference from critical missing evidence; that is, an implicit argument from ignorance. Such an inference is readily incorporated within a version of Bayes’ theorem incorporating the concept of epistemic closure. This reformalisation provides a general theoretical framework for the phenomenon that clearly outlines those conditions under which it should be observed, and its conceptual relationship with other argumentation phenomena.
... A wealth of evidence suggests that humans are inclined to rely on linear additive combination when making controlled judgments that are constrained by capacity-limited and sequential consideration of cues (Anderson, 1981, 1996; Hogarth & Einhorn, 1992; Juslin, Karlsson, & Olsson, 2008; Lopes, 1985, 1987; Roussel, Fayol, & Barrouillet, 2002; Shanteau, 1970, 1972, 1975). Data on multiple-cue judgment thus typically suggest that judgment is a linear additive combination of the cues (Brehmer, 1994; Cooksey, 1996; Hammond, 1996; Hammond & Stewart, 2001; Juslin, Olsson, & Olsson, 2003; Karelaia & Hogarth, 2008). ...
Article
While a wealth of evidence suggests that humans tend to rely on additive cue combination to make controlled judgments, many of the normative rules for probability combination require multiplicative combination. In this article, the authors combine the experimental paradigms on probability reasoning and multiple-cue judgment to allow a comparison between formally identical tasks that involve probability vs. other task contents. The purpose was to investigate whether people have cognitive algorithms specifically for the combination of probability, affording multiplicative combination in the context of probability. Three experiments suggest that, although people show some signs of a qualitative understanding of the combination rules that are specific to probability, in all but the simplest cases they lack the cognitive algorithms needed for multiplication, and instead use a variety of additive heuristics to approximate the normative combination. Although these heuristics are surprisingly accurate, normative combination is not consistently achieved until the problems are framed in an additive way.
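The contrast being tested can be made explicit with invented probabilities: the normative rule for a conjunction is multiplicative, while additive heuristics combine the components linearly. The two additive forms below are simple stand-ins, not the specific heuristics fitted in the article.

```python
# Sketch of the contrast (invented probabilities): the normative rule
# for P(A and B) is multiplicative; additive heuristics are linear.

p_a, p_b = 0.7, 0.4

normative = p_a * p_b                      # multiplicative: 0.28
linear_stand_in = p_a + p_b - 1.0          # one simple additive form: 0.1
weighted_average = 0.5 * p_a + 0.5 * p_b   # another additive form: 0.55

print(round(normative, 2), round(linear_stand_in, 2), round(weighted_average, 2))
# Additive forms can track the rank order of conjunctions reasonably
# well while missing the multiplicative structure.
```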
... Each fund has a continuous distribution of outcomes, and each can be combined with other funds to create more portfolios than could be enumerated in a lifetime. As Lopes (1987) puts it, simple prospects "occur most frequently in the context of formal gambling and psychology experiments." Measuring risk in the domain of multiple-alternative, multiple-outcome prospects seems warranted. ...
Article
SESSION OVERVIEW This symposium presents four methodological advances for illuminating the psychological processes underlying consumer decision making. The methodologies address two main problems. First, process data are often collected via static self-reports that distort what researchers wish to unveil, as is sometimes the case in mediation studies. Second, processes are often assumed instead of inferred from process data, which is often the case in behavioral decision theory. The converging message of the papers in this symposium is that while reality is more complex than standard measures admit, the appropriate methodologies can both capture and clarify psychological processes. Each paper presents feasible and accessible tools that promise to provide significant leaps over conventional methods used in the consumer behavior literature. The first two papers highlight the dynamic nature of decision making. Willemsen and Johnson designed the MouselabWEB methodology and illustrate how the rich information coming out of such studies may help to set apart theories about well-known phenomena (in this case context effects). Ramanathan illustrates the wealth of information that the measurement of moment-to-moment affective changes, as measured with a joystick, may reveal about processes underlying well-known effects. Dewitte identifies criteria that a moderation-by-process design should meet before a moderation interaction can be interpreted as evidence for the hypothesized underlying process. Goldstein introduces an interactive, graphical tool for risk preference assessment that, compared to earlier techniques, allows one to obtain more useful process information in less time. The symposium may be of interest to experimental researchers who, because of the phenomena they tackle, struggle to find accurate process measures. Willemsen and Johnson's contribution may help decision researchers to illustrate the process underlying an emerging decision. This insight may help to put conflicting decision theories to the test. Ramanathan's contribution may help affect, goal, and social interaction researchers to illuminate the dynamics underlying affect, motivation, and decisions in a social context. Dewitte's contribution helps researchers to clearly specify and identify the process without measuring it. Goldstein's contribution will help risk researchers to model more complex risk decisions. The symposium may also inspire consumer researchers to apply the proposed methods to new domains in our field. Eric Johnson will lead the discussion. Several methodological contributions to our field (e.g. Johnson 2001, JCR; Lohse and Johnson 1996, OBHDP) attest to his seminal role in the methodological advance of our field. He will critically weigh the contributors' suggestions against his experience as an experimentalist interested in processes. Individual presentations will take 14 minutes, which leaves 19 minutes for discussion.
... See, for example, Fischhoff 1982a and Lopes 1987. A further method for reducing order effects through psychological correction is to have the decision maker account for his or her final judgment. ...
Chapter
Full-text available
Research in cognitive and social psychology shows that cognitive illusions, in the form of 'heuristics' and 'biases', in many cases exert an important influence on our decision-making processes. This applies not only to everyday decisions in the private sphere, but also to judgment and decision making in the professional sphere, such as judicial decisions. In this contribution we attempt to identify which cognitive illusions are, or could be, of particular importance in the assessment of evidence in civil cases, how Dutch civil procedural law deals with them, insofar as it does at all, and how this might be done differently or better. We thus sketch the legal domain under discussion, the assessment of evidence, then outline the psychological insights in that area, and finally (cautiously) link the two: where do they clash, and how might that friction be resolved? In doing so, we pay particular attention to the 'confirmation bias', the 'anchoring effect' and the 'primacy effect' and related phenomena, such as the 'recency effect' and other 'order effects'. We also consider whether possible tensions between the psychological and the legal insights might be resolved through psychological correction ('debiasing') or through changes on the legal side. What is the influence of cognitive biases on judicial decisionmaking, in particular decisions regarding the assessment of the evidence in civil procedures?
... This provides judges with an overview and thereby enables them to focus on the major issues without overlooking possibly relevant aspects (Beach, 1990; Evans, 1989; Fischhoff, 1982; Pitz, 1983; Hammond, Anderson, Sutherland & Marvin, 1984; Keren, 1992; Slovic, Fischhoff & Lichtenstein, 1981; Turban, 1993; Westenberg & Koele, 1993). The second technique is instilling a more critical attitude in decision makers, both towards their own decision processes and towards their favourite decision options, by asking them to pay attention to decisive reasons (in our system: contra-indications) why their decisions might be wrong (Arkes, 1981, 1991; Green, 1990; Keren, 1990; Lopes, 1987; Silverman, 1992; Williams, 1992). Both techniques work counter to the psychotherapists' framing and conservative strategies. ...
Article
In this paper we describe SelectCare, a computer system to support psychotherapists in their decision making for the treatment of depressed patients. Treatment decision tasks are complex and ill-structured: it is not clear what the relevant bits of information are and how these should be integrated into a correct decision, or even what a correct decision is. Still, treatment decisions are important and merit thorough consideration. We therefore set ourselves the goal of equipping psychotherapists with a system that helps them improve the completeness and overview of their considerations for treatment decisions.
... Most detrimental to the accuracy of final estimates, however, is the tendency for people to choose anchors because they are handy rather than because they are relevant (Bazerman, 1990). Anchoring is a robust phenomenon that has been observed in many domains and tasks, including assessing probabilities (Edwards, Lindman, & Phillips, 1965; Lopes, 1985, 1987; Peterson & DuCharme, 1967; Wright & Anderson, 1989), making predictions based on historical data (Sniezek, 1988), making utility assessments (Johnson & Schkade, 1988; Shanteau & Phelps, 1979), exercising clinical judgment (Friedlander & Stockman, 1983; Zuckerman, Koestner, Colella, & Alton, 1984), inferring causal attributions (Quattrone, 1982), estimating confidence ranges (Block & Harper, 1991), making accounting-related judgments (Butler, 1986), goal setting (Mano, 1990), making motivation-related judgments (Cervone & Peake, 1986; Switzer & Sniezek, 1991), belief updating and change (Einhorn & Hogarth, 1985; Hogarth & Einhorn, 1989), evaluating product bundles (Yadav, 1994), and determining listing prices for houses (Northcraft & Neale, 1987). ...
Article
Selection interviews are decision-making tools used in organizations to make hiring and promotion decisions. Individuals who conduct such interviews, however, are susceptible to deviations from rationality that may bias interview ratings. This study examined the effect of the anchoring-and-adjustment heuristic on the ratings given to a job candidate by interviewers (n = 190) using 3 different types of interview techniques: the conventional structured interview, the patterned behavior description interview, and the situational interview. The ratings of interviewers who were given a high anchor were significantly higher than the ratings of interviewers who were given a low anchor across all three interview techniques. The effect of the anchoring manipulation, however, was significantly less when the situational interview was used.
Chapter
This chapter examines the psychological studies of biases and de-biasing measures in human decision-making with special reference to adjudicative factfinding. Research shows that factfinders are prone to cognitive biases (such as anchoring, framing, base-rate neglect, and confirmation bias) as well as social biases. Driven by this research, multiple studies have examined the extent to which those biases can be mitigated by de-biasing measures like “consider the opposite” and “give reasons.” After a brief overview of the research, the author points to the problematic evidential basis and identifies future research needs, and concludes that empirical research on de-biasing measures has so far delivered less than one would hope for.
Chapter
Intelligence analysis is a complex process that not only requires substantial training and deep expertise, but is heavily impacted by human cognitive factors. Studies have shown that even experienced, highly-trained personnel sometimes commit serious errors in judgment as a result of heuristic thinking and the impact of judgment bias in matters of national security can be catastrophic. Developing effective debiasing techniques requires addressing a number of daunting challenges. While intuitively appealing, the ability to construct suitable methods to test behaviour under actual work conditions is limited and the generalisability of findings from laboratory settings to work settings is a serious concern. To date, researchers have performed only limited investigations of a small number of debiasing techniques in the workplace. There is still a strong need for experimentally validated debiasing techniques that can be incorporated into analytic tradecraft so that foreseeable thinking errors can be avoided. Drawing from the useful features of prior studies, a reference framework has been developed for the experimental evaluation of bias mitigations applied to problems of an intelligence nature.
Chapter
Any appraiser is subject to many biasing influences which compromise the accuracy of the appraisal. One of the most prominent biases is the anchoring heuristic: appraisers involuntarily anchor on reference points such as their previous valuation, the value opinion of the seller, or the last transaction price. While many studies have proven the importance of the anchoring effect, very few studies have suggested practical means to counter it. In this chapter we demonstrate that the effect can be reduced with a tool that helps the valuer make better decisions. In our experiment, participants appraised an office building with the help of purpose-built valuation software. The software came in three versions with different debiasing features in order to test their influence on the appraised values. It turned out that the participants who used the decision-support version of the software produced significantly less dispersed market values than the others.
Article
Studies of actual judicial decisions and recent experimental work simulating legal decisionmaking reveal a strong relationship between ideology and judicial decisions. There is also preliminary evidence linking ideology and constitutional interpretation preferences. This chapter proposes that legal decisions' policy implications generate an automatic, affective response that biases subsequent information processing. The biased processing can involve: positive-testing or searching mostly for information supporting initial beliefs; counter-arguing or more critically scrutinizing information inconsistent with goals; overweighting information consistent with goals and discounting inconsistent information; and biases in storing and retrieving information. This motivated reasoning is more likely to influence decisions when the legal evidence is more ambiguous. As ideology operates through non-conscious cognitive processes, judges cannot identify ideology's impact, making debiasing difficult.
Article
The subjective value given to time, also known as the psychological interest rate, or the subjective price of time, is a core concept of microeconomic choice. Individual decisions based on a single, constant subjective interest rate correspond to an exponential discounting function. However, many empirical and behavioural studies support the idea of a non-flat term structure of subjective interest rates with a decreasing slope. Using an empirical test, this paper aims to identify from individual behaviour whether agents perceive their psychological value of time as decreasing or not. A sample of 243 individuals was questioned with regard to their time preference attitudes. We show that subjective interest rates follow a negatively sloped term structure. It can be parameterized using two variables, one specifying the instantaneous time preference, the other characterizing the slope of the term structure. A trade-off law called the "balancing pressure law" is identified between these two parameters. We show that the term structure of psychological rates depends strongly on gender, but appears not to be linked with life expectancy. In that sense, individual subjective time preference is not exposed to a tempus fugit effect. We also examine the cross-relation between risk aversion and time preference. On theoretical grounds, they stand as two different and independent dimensions of choice. Empirically, however, both the time preference attitude and the slope seem directly influenced by the risk attitude.
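The distinction between a flat and a decreasing term structure can be made concrete with standard discounting formulas; the rate parameters below are illustrative, not estimates from this study.

```python
# Sketch (illustrative parameters): a flat term structure corresponds to
# exponential discounting, a decreasing one to hyperbolic-style discounting.

import math

def exponential_discount(t, r=0.10):
    return math.exp(-r * t)

def hyperbolic_discount(t, k=0.10):
    return 1.0 / (1.0 + k * t)

# Implied per-period subjective rate over horizon t: -ln(D(t)) / t
for t in (1, 5, 20):
    exp_rate = -math.log(exponential_discount(t)) / t
    hyp_rate = -math.log(hyperbolic_discount(t)) / t
    print(t, round(exp_rate, 3), round(hyp_rate, 3))
# Exponential: 0.10 at every horizon (flat term structure).
# Hyperbolic: 0.095, 0.081, 0.055 -> a negatively sloped term structure.
```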
Article
Two empirically well-supported research findings in the judgment literature are (1) that human judgments often appear to follow an averaging rule, and (2) that judgments in Bayesian inference tasks are usually conservative relative to optimal judgments. This paper argues that both averaging and conservatism in the Bayesian task occur because subjects produce their judgments by using an adjustment strategy that is qualitatively equivalent to averaging. Two experiments are presented that show qualitative errors in the direction of revisions in the Bayesian task that are well accounted for by the simple adjustment strategy. Also noted is the tendency for subjects in one experiment to evaluate sample evidence according to representativeness rather than according to relative likelihood. The final discussion describes task variables that predispose subjects toward averaging processes.
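The qualitative direction error is easy to reproduce with invented values: evidence that favours a hypothesis should move a Bayesian belief upward, yet an averaging adjustment moves the judgment downward whenever the sample's scale value lies below the current composite.

```python
# Sketch of the qualitative direction error (invented values).

current_belief = 0.80
sample_scale_value = 0.70  # the sample favours H, but only moderately
lr = 7 / 3                 # an assumed likelihood ratio > 1 for that sample

averaged = 0.5 * current_belief + 0.5 * sample_scale_value
odds = current_belief / (1 - current_belief) * lr
bayesian = odds / (1 + odds)

print(round(averaged, 2))  # 0.75: revised downward, in the wrong direction
print(round(bayesian, 2))  # 0.90: revised upward, as the evidence warrants
```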
Article
The effects of the format by which information is presented on the cognitive processes of belief updating were investigated in the present research. Because of the differences in the affordance of verbal vs. numerical information, it is predicted that the belief updating processes involved in processing verbal and numerical information would be different. Specifically, the additive rule is used to combine information using verbal formats, while the averaging rule is used to combine information using numerical formats. Two experiments were conducted to test these hypotheses. Experiment 1 tested the belief updating process in the positive direction, and Experiment 2 tested the process in the negative direction. Two independent variables were manipulated: information presentation format (verbal vs numerical) and presentation order (strong-weak vs weak-strong). The participants were asked to adjust their purchase likelihood of a consumer product based on the sequential presentations of two experts' opinions. These two opinions varied in their formats (verbal vs. numerical) and strengths (strong vs weak). The two opinions were presented in either the strong-weak order or the weak-strong order. Participants were instructed to first anchor their purchase likelihood at 50%, and then adjust the purchase likelihood, first based on the first expert's opinion, and second based on both experts' opinions. In both experiments the hypotheses that participants employed an additive rule to integrate verbal information and an averaging rule to integrate numerical information were supported.
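A worked sketch of the two combination rules, with invented opinion strengths and a simple stand-in for the averaging computation:

```python
# Worked sketch (invented numbers): starting from a 50% anchor, combine
# a strong (+30) and a weak (+10) positive opinion under each rule.

anchor = 50.0
strong, weak = 30.0, 10.0

additive = anchor + strong + weak  # verbal format: 90
averaging = (anchor + (anchor + strong) + (anchor + weak)) / 3
# numerical format: average of 50, 80 and 60 -> ~63.3

print(additive, round(averaging, 1))
# Under adding, every positive opinion raises the judgment and order
# cannot matter; under averaging, a weak opinion can pull it back down.
```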
Article
Most of people's apparent strategies for covariation assessment and Bayesian inference can lead to errors. However, it is unclear how often and to what degree the strategies are inaccurate in natural contexts. Through Monte Carlo simulation, the respective normative and intuitive strategies for the two tasks were compared over many different situations. The results indicate that (a) under some general conditions, all the intuitive strategies perform much better than chance and many perform surprisingly well, and (b) some simple environmental variables have large effects on most of the intuitive strategies' accuracy, not just in terms of the number of errors, but also in terms of the kinds of errors (e.g., incorrectly accepting versus incorrectly rejecting a hypothesis). Furthermore, common to many of the intuitive strategies is a disregard for the strength of the alternative hypothesis. Thus, a key to better performance in both tasks lies in considering alternative hypotheses, although this does not necessarily imply using a normative strategy (i.e., calculating the φ coefficient or using Bayes' theorem). Some intuitive strategies take into account the alternative hypothesis and are accurate across environments. Because they are presumably simpler than normative strategies and are already part of people's repertoire, using these intuitive strategies may be the most efficient means of ensuring highly accurate judgment in these tasks.
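A minimal Monte Carlo sketch in the spirit of the comparison described; the strategy and the random environments are simplified stand-ins, not the article's actual set.

```python
# Minimal Monte Carlo sketch: score a normative Bayesian against an
# intuitive strategy that ignores the alternative hypothesis, over
# randomly generated environments.

import random

random.seed(1)
trials, bayes_correct, intuitive_correct = 10000, 0, 0

for _ in range(trials):
    # Random environment: P(H), P(datum | H), and P(datum | not-H).
    prior = random.random()
    p_e_h, p_e_not_h = random.random(), random.random()
    h_true = random.random() < prior
    e = random.random() < (p_e_h if h_true else p_e_not_h)

    # Normative posterior given the observed datum.
    like_h = p_e_h if e else 1 - p_e_h
    like_n = p_e_not_h if e else 1 - p_e_not_h
    post = like_h * prior / (like_h * prior + like_n * (1 - prior))

    # Intuitive strategy: accept H when the datum is likely under H,
    # disregarding the alternative hypothesis entirely.
    intuitive_says_h = like_h > 0.5

    bayes_correct += (post > 0.5) == h_true
    intuitive_correct += intuitive_says_h == h_true

print(bayes_correct / trials, intuitive_correct / trials)
# The intuitive strategy scores above chance but trails the Bayesian,
# especially when priors and alternative likelihoods are extreme.
```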
Article
Exonerations famously reveal that eyewitness identifications, confessions, and other “direct” evidence can be false, though police and jurors greatly value them. Exonerations also reveal that “circumstantial” non-matches between culprit and defendant can be telling evidence of innocence (e.g., an aspect of an eyewitness’s description of the perpetrator that does not match the suspect she identifies in a lineup, or a loose button found at the crime scene that does not match the suspect’s clothes). Although non-matching clues often are easily explained away, making them seem uninteresting, they frequently turn out to match the real culprit when exonerations reveal that the wrong person was convicted. This Article uses “non-exclusionary non-matches” and what would seem to be their polar opposite, inculpatory DNA, to show that: (1) all evidence of identity derives its power from the aggregation of individually uninteresting matches or non-matches, but (2) our minds and criminal procedures conspire to hide this fact when they contemplate “direct” and some “circumstantial” evidence (e.g., fingerprints), making those forms of evidence seem stronger than they are, while, conversely, (3) our minds and procedures magnify the circumstantial character of non-exclusionary non-matches, making them seem weaker than they are. We propose ways to use circumstantial matches and non-matches more effectively to avoid miscarriages of justice.
Article
This dissertation examines the relevance to individual decision making of social influences such as accountability, i.e. the decision maker's expectation that she may have to defend her decisions. These social influences are contrasted with market influences, and the relevance of the two is compared. Chapter 1 gives a detailed treatment of social influences and provides a general overview of the dissertation.
Article
According to the causal powers theory, all causal relations are understood in terms of the causal power of one thing producing an effect by acting on the liability of another thing. Powers can vary in strength, and their operation also depends on the presence of preventers. When an effect occurs, there is a need to account for the occurrence by assigning to its possible causes sufficient strength to produce it. Contingency information is used to estimate the strengths of powers and preventers and the extent to which they account for occurrences and nonoccurrences of the outcome. People make causal judgements from contingency information by processes of inference that interpret evidence in terms of this fundamental understanding. From this account it is possible to derive a computational model based on a common set of principles that involve estimating strengths, using these estimates to interpret ambiguous information, and integrating the resultant evidence in a weighted averaging model. It is shown that the model predicts cue interaction effects in human causal judgement, including forward and backward blocking, second and third order backward blocking, forward and backward conditioned inhibition, recovery from overshadowing, superlearning, and backward superlearning.
Article
This chapter discusses how the cognitive approach to judgment and decision making behavior is gradually developing and expanding, both in scope and in number of adherents. It presents a collection of experimental methods and results from one laboratory studying several different judgment tasks to illustrate some major trends that are central in the cognitive approach: evidence representation in the form of a model or explanation of the decision situation, enhanced memory for decision-relevant information after a decision has been made, and adaptive flexibility in individual decision strategies in response to variations in task conditions. One obstacle to the development of cognitive theories of judgment and decision making behavior is that there are considerable differences among the theories that are called cognitive. This situation is probably good for the eventual development of cognitive theories, but it is troublesome for individual researchers, especially those who are currently attempting to spread the faith to new task domains. In the field of judgment and decision making, it is essential to think about the relationships between alternate theoretical developments and formulations based on traditional expected utility theory. The expectation is that the crucible of empirical evaluation will force all of the approaches to converge on a common theoretical framework. The chapter describes how cognitive precepts form the core of the next generation of theories of judgment and decision making.
Article
Koehler's work will assist the effort to understand legal fact finding. It leaves two questions somewhat open: (i) the extent to which empirical research can measure correctness of fact-finding, a function that involves the resolution of normative questions and (ii) the standards judges should use in the absence of the research he advocates.
Article
The underutilization of base rates is a consistent finding. The strong claim that base rates are ignored has been rejected and this needs no further emphasis. Following the path of “normal science,” research examines the conditions predicting changes in the degree of underutilization. A scientific revolution that might dethrone the heuristics and biases paradigm is not in sight.
Article
Full-text available
A recent study showed physicians' reasoning about a realistic case to be ignorant of base rate. It also showed physicians interpreting information pertinent to base rate differently, depending on whether it was presented early or late in the case. Although these adult reasoners might do better if given hints through talk of relative frequencies, this would not prove that they had no problem of base rate neglect.
Article
Base rates have no necessary relation to judgments that are not themselves probabilities. There is no logical imperative, for instance, that behavioral base rates must affect causal attributions or that base rate information should affect judgments of legal liability. Decision theorists should be cautious in arguing that base rates place normative constraints on judgments of anything other than posterior probabilities.
Article
This commentary discusses three points: (1) the implications of the fact that it is rational to ignore base rates if probabilities are estimated by frequencies from samples without missing data (natural sampling); (2) second-order probability distributions are a plausible way to model imprecise probabilities; and (3) Bayesian networks represent a normative reference for multi-cue models of probabilistic inference.
Article
(1) The miscitations of seminal experiments in the base rate literature add to the existing database of systematic miscitations of well-known psychological experiments. These miscitations may be caused by a process of reconstructive remembering. (2) Representative design should be the methodological core of Koehler's call for ecologically valid research. This approach can benefit both basic and applied research.
Article
Koehler is right that base rate information is used, to various degrees, both in laboratory tasks and in everyday life. However, it is not time to turn our backs on laboratory tasks and focus solely on ecologically valid decision making. Tightly controlled experimental data are still needed to understand how base rate information is used, and how this varies among groups.
Article
Koehler is right to argue for more nuanced interpretation of base rate anomalies. These anomalies are best understood in relation to a broader class of cognitive anomalies, which are important for theory and practice. Recognizing a need for more nuanced analysis should not be taken as a license for treating the effects as “explained away.”
Article
Base rates are vital in predicting violent criminal recidivism. However, both lay people given simulated prediction tasks and professionals making real-life predictions appear insensitive to variations in the base rate of violent recidivism. Although there are techniques to help decision makers attend to base rates, increased decision accuracy is better sought in improved actuarial models than in improved clinicians.
Article
This commentary presents a self-assessment inventory that will allow readers to determine their own attitude toward the base rate fallacy and its literature. The inventory is scientifically valid but not Medicare/Medicaid reimbursable.
Article
Full-text available
This article examines how others’ opinions can influence a consumer's evaluation of a product. This influence is said to be informational when the consumer accepts it as evidence of the product's true nature. An anchoring and adjustment process is proposed to explain how information from others is combined with direct experience when consumers form a global evaluation of a product. Two experiments are conducted to test this explanation. Findings from the two experiments suggest that when others offer their opinions about the quality of a product, the opinions have the most potential to influence a consumer who has tried the product when the opinions are considered before the consumer considers the evaluative implications of his or her own product experience. Findings from a third experiment suggest that others’ opinions about product quality have limited potential to influence a consumer who has had an unambiguous experience with the product, even when conditions are most favorable for an influence to occur. The 3 experiments suggest that informational social influence obeys information processing principles associated with other kinds of private judgments.
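As a rough illustration of the proposed anchoring-and-adjustment account, the sketch below shows how the order in which an opinion and a product experience are considered changes the final evaluation; the scale values and the adjustment weight are hypothetical, not taken from these experiments. Whichever input is considered first serves as the anchor and retains the greater weight.

```python
def integrate(anchor, later_input, w=0.4):
    """Anchoring and adjustment: start at the value considered first and
    move fraction w of the way toward the value considered second.
    The weight w is an assumed, illustrative parameter."""
    return anchor + w * (later_input - anchor)

others_opinion = 8.0   # favorable opinions of others (1-9 scale, hypothetical)
own_experience = 4.0   # middling direct experience (hypothetical)

# Opinions considered before the experience anchor the judgment:
print(integrate(others_opinion, own_experience))   # 6.4
# Experience considered first leaves the opinions less room to matter:
print(integrate(own_experience, others_opinion))   # 5.6
```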
Article
Two exploratory studies assessed subjects' risk preferences in a series of dynamic, competitive games with real payoffs. The objective was to determine whether people adapt their risk preferences to (1) short-run, or tactical, task demands (i.e., whether one is currently winning or losing), and (2) long-run, or strategic, demands imposed by the structure of the task (i.e., whether one is playing offense or defense). We also wanted to learn whether (3) subjects who adapt their preferences perform better than subjects who do not adapt. Most subjects in both experiments were tactically responsive, and winners were somewhat more tactically responsive than losers. Evidence for strategic responsiveness was much weaker. Although the instructions and the payoff scheme both suggested the need for greater risk taking on offense than on defense, many Experiment 1 subjects took equal or greater risk on defense than on offense. In Experiment 2, monetary penalties for lost games were eliminated and more strategic responsiveness occurred. Possible determinants of whether tactical and/or strategic responsiveness occurs in a task are outlined.
Article
Consistent with Koehler's position, we propose a generalization that subsumes both the base rate fallacy and the earlier conservatism literatures. In studies using both traditional tasks and new tasks based on ecologically valid base rates, our subjects typically underweight individuating information at least as much as they underweight base rates. The implications of cue consistency for averaging heuristics are discussed.
Article
The base rate fallacy is directly dependent on a particular judgment paradigm in which information may be unambiguously designated as either “base rate” or “individuating,” and in which subjects make two-stage sequential judgments. The paradigm may be a poor match for real world settings, and the fallacy may thus be undefined for natural ecologies of judgment.
Article
There is a familiar risk of antinomy if, from “x is E” and p(x is H | x is E) = r, it is permissible to infer p(x is H) = r, and what Carnap (1950) called “the requirement of total evidence” will not prevent such antinomies satisfactorily. What is needed instead is a properly developed theory of evidential weight.
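The antinomy can be stated in two lines; the numerical values below are illustrative only.

```latex
% Illustration of the antinomy (hypothetical values): the same object x
% satisfies two evidence statements that license incompatible detachments.
\[
x \text{ is } E_1, \quad p(x \text{ is } H \mid x \text{ is } E_1) = 0.9
  \;\Rightarrow\; p(x \text{ is } H) = 0.9,
\]
\[
x \text{ is } E_2, \quad p(x \text{ is } H \mid x \text{ is } E_2) = 0.2
  \;\Rightarrow\; p(x \text{ is } H) = 0.2.
\]
% Both conclusions cannot stand. Requiring total evidence blocks one
% detachment by fiat but assigns no role to the weight of the evidence
% behind each statement.
```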
Article
Full-text available
54 undergraduates were shown sequences of red and white lights, where each light represented a sample with replacement from a population. After each successive light, Ss either estimated the proportion of white lights in the population (estimation) or judged the probability that the population contained more white than red (inference). Stimulus sequences were constructed from factorial designs. This permitted simple tests of an additive model of sequential decision making derived from a theory of information integration. The additive model worked fairly well in 2 experiments and was able to handle both general recency effects and effects due to sequence length. These effects, along with a failure to find a difference between the estimation and inference conditions, raise some questions about previous Bayesian treatments of sequential decision making. Supplementary data with a static decision-making task were nonadditive, in disagreement with the information-integration approach. (23 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Sequences of pink and white beads were drawn with replacement from 1 of 2 boxes. After each bead was displayed, each of 48 undergraduates made inference judgments by estimating the probability that the beads were drawn from the box with more white beads. Stimulus sequences were constructed from factorial designs which permitted simple tests of additivity as well as evaluation of serial-position and diagnosticity effects. The judgments were additive in probability form but less additive in Bayesian log-odds form and contained both general recency effects and small diagnosticity effects. A normative Bayesian model did not do as well with these results as a descriptive model from information-integration theory. Both models were able to handle diagnosticity effects, but only the descriptive model could handle additivity and serial-position effects. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
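The contrast between the two model classes in these bead experiments can be sketched as follows; the likelihood ratio, the averaging weight, and the bead sequences are illustrative assumptions, not the papers' parameters. A Bayesian updater is order-invariant, while a serial-averaging updater of the information-integration type reproduces the recency effects reported here.

```python
import math

# Illustrative setup: two boxes, 60% vs 40% white beads, so each bead
# carries a likelihood ratio of 0.6/0.4 for the mostly-white box.
LOG_LR = math.log(0.6 / 0.4)

def bayes_prob(beads):
    """Normative posterior P(mostly-white box); order-invariant."""
    log_odds = sum(LOG_LR if b == "w" else -LOG_LR for b in beads)
    return 1 / (1 + math.exp(-log_odds))

def serial_average(beads, w=0.3, start=0.5):
    """Sketch of an information-integration model: each bead's scale
    value (1 for white, 0 for pink) is averaged into the running
    judgment, so later beads count more -- a recency effect."""
    judgment = start
    for b in beads:
        judgment = (1 - w) * judgment + w * (1.0 if b == "w" else 0.0)
    return judgment

order_a, order_b = "wwppw", "pwwpw"   # same beads, different orders
print(bayes_prob(order_a), bayes_prob(order_b))          # equal (0.6, 0.6)
print(serial_average(order_a), serial_average(order_b))  # unequal
```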
Article
Full-text available
Various theories of probabilistic inference and stimulus classification predict that for stimuli with separable dimensions (Ds) (a) Ds are utilized sequentially from most to least salient; (b) equating for likelihood ratio, the more salient a D is, the greater its effect on opinion; (c) the number of Ds processed varies systematically with costs, payoffs, and available time; and (d) interdimensional additivity is increasingly violated as dimensional salience decreases. The predictions were tested in 2 probabilistic inference experiments. Exp I (13 college students) utilized stimuli with 1 or 3 binary Ds, and Exp II (36 Ss) utilized stimuli with 5 binary Ds. Ss in Exp II were either under time pressure or not and were paid according to either an extreme or a moderate payoff rule. The predictions were generally sustained, but there were specific violations involving sequential effects and systematic patterns of D dependence, such that restructuring of the basic theory is necessary. It is suggested that processing occurs in 2 stages, one leading to a tentative binary decision and the other to a degree of confidence in the choice. In the 2nd stage, Ds are processed sequentially and configurally with a bias toward the choice already made. (38 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
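A schematic rendering of predictions (a)-(c), with entirely assumed saliences, likelihood ratios, and cost structure, might look like this:

```python
def form_opinion(dimensions, time_budget):
    """Sketch of predictions (a)-(c): process binary dimensions from
    most to least salient, weight each by its salience, and stop when
    the time budget is exhausted. All parameters are assumed.
    `dimensions` is a list of (salience, log_likelihood_ratio) pairs."""
    opinion = 0.0  # running log-odds favoring H1 over H2
    for salience, llr in sorted(dimensions, key=lambda d: d[0], reverse=True):
        if time_budget <= 0:
            break  # costs/time pressure truncate processing, per (c)
        opinion += salience * llr  # salience amplifies impact, per (b)
        time_budget -= 1
    return opinion

dims = [(0.9, 1.2), (0.6, -0.8), (0.3, 1.2)]  # illustrative values
print(form_opinion(dims, time_budget=3))  # all dimensions processed
print(form_opinion(dims, time_budget=1))  # under pressure: most salient only
```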
Article
Full-text available
3 experiments investigated the effects on posterior probability estimates of: (1) prior probabilities, amount of data, and diagnostic impact of the data; (2) payoffs; and (3) response modes. Ss usually behaved conservatively, i.e., the difference between their prior and posterior probability estimates was less than that prescribed by Bayes' theorem. Conservatism was unaffected by prior probabilities, remained constant as the amount of data increased, and decreased as the diagnostic value of each datum decreased. More learning occurred under payoff than under nonpayoff conditions and between-S variance was less under payoff conditions. Estimates were most nearly Bayesian under the (formally inappropriate) linear payoff, but considerable overestimation resulted; the log payoff condition yielded less conservatism than the quadratic payoff. Estimates were most nearly Bayesian when Ss estimated odds on a logarithmic scale.
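One common way to summarize conservatism of this kind (a descriptive sketch, not the model fitted in these experiments) is to let the likelihood ratio enter the odds update raised to a power c below one:

```latex
% A common descriptive summary of conservatism (sketch only): the
% likelihood ratio enters the odds update raised to a power c < 1.
\[
\frac{p(H_1 \mid D)}{p(H_2 \mid D)}
  = \left[ \frac{p(D \mid H_1)}{p(D \mid H_2)} \right]^{c}
    \frac{p(H_1)}{p(H_2)}, \qquad 0 < c \le 1 .
\]
% c = 1 recovers Bayes' theorem; the finding that conservatism decreased
% as each datum's diagnostic value decreased corresponds to c moving
% toward 1 for weaker data.
```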
Article
Full-text available
In two experiments, subjects rated the likability of persons described by sets of 1, 2, 3, 4, or 6 adjectives of equal value, presented simultaneously. A strong set-size effect was obtained: larger sets yielded more extreme responses. The hypothesis that the response is a weighted average of the adjective values and a neutral impression was tested. In Experiment I, this hypothesis was supported except for the sets of 6 adjectives. This discrepancy was eliminated by the introduction of anchor sets of 9 adjectives in Experiment II. The results supported the hypothesis that the discrepancy at the largest set size results from an extraneous end-effect response tendency. It is concluded that the averaging model accounted for the behavior at a quantitative level.
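In the standard notation for such averaging models (assumed here for illustration rather than quoted from the paper), the tested hypothesis and the set-size effect can be written compactly:

```latex
% Averaging model with a neutral initial impression (standard notation,
% assumed for illustration): n adjectives of common scale value s and
% weight w are averaged with an initial impression s_0 of weight w_0.
\[
R_n = \frac{w_0 s_0 + n\,w\,s}{w_0 + n\,w}
\]
% As n grows, the neutral s_0 is progressively diluted and R_n moves
% from s_0 toward s, so larger sets yield more extreme responses even
% though every adjective has the same value.
```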
Article
An experiment is described in which subjective probability revisions were obtained in a standard probability estimation task, the ‘bookbag-and-pokerchips’ situation. Three aspects of probability revision were examined: conservatism, sequential effects, and coherence. Under two experimental conditions, the conservatism effect obtained was closely related to subjects' use of a simple strategy. A recency effect was also obtained. Coherence of the probability estimates was excellent. Conditions under which the observed strategy leads to conservatism are explored, and previously published results are reconsidered in the light of this strategy. Conservatism in the bookbag-and-pokerchips situation is explained as an artefact of subjects' strategies.
Article
We have suggested (Marks and Clarkson, 1972) that conservatism in the bookbag-and-pokerchips situation results from subjects' use of a simple non-Bayesian rule. Our explanation of conservatism can account for many results in the literature, including those of De Swart (1972a, b). De Swart's experiments have confounded the variables (r−b), (r+b), and r/(r+b), and the supposed Bayesian performance of his subjects is shown to be an artefact of De Swart's data transformation procedure. De Swart's (1972c) theory of conservatism is implausible, unparsimonious, descriptive rather than explanatory, and is not supported by experimental results on subjects' strategies (Beach et al., 1970; Marks and Clarkson, 1972; Marks, 1973). Probability revision in the bookbag-and-pokerchips situation is not that of a “misperceiving” or “misaggregating” Bayesian; it is governed by a simple non-Bayesian strategy.
Article
The hypothesis was that the proportions of poker chips in the displayed samples influence the subjective probability revisions obtained in “bookbag-and-pokerchips” experiments. Subjects made revisions for simultaneous and sequential samples from two 80%–20% symmetrical binomial populations and two 70%–30% symmetrical binomial populations. Sample proportions account in large part for the revision responses for both kinds of populations for simultaneous samples. For sequential samples, however, proportions appeared to have less influence on revision responses, even though 62% of the subjects claimed to use them. The implications are discussed.
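The following sketch (illustrative numbers; the helper names are hypothetical) shows why a sample-proportion strategy of the kind at issue in these papers looks conservative relative to Bayes' theorem for symmetric binomial populations:

```python
from math import comb

def bayes_posterior(r, n, p=0.7):
    """P(majority-red population | r red in n draws) for two symmetric
    binomial populations (p vs 1 - p red)."""
    like_red = comb(n, r) * p**r * (1 - p) ** (n - r)
    like_white = comb(n, r) * (1 - p) ** r * p ** (n - r)
    return like_red / (like_red + like_white)

def proportion_response(r, n):
    """The non-Bayesian strategy at issue: report the sample
    proportion itself as the revised probability."""
    return r / n

# Seven red chips in ten draws from 70%-30% populations: the Bayesian
# answer is far more extreme than the sample proportion, so a
# proportion-matching subject appears conservative.
print(bayes_posterior(7, 10))       # ~0.967
print(proportion_response(7, 10))   # 0.7
```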
Article
A theory of how opinions are formed and revised on the basis of probabilistic evidence is presented. The decision maker is viewed as a limited information processor who attends to the dimensions of a sample of equivocal information in a sequential fashion, beginning with the most salient dimension and continuing in decreasing order of salience. The contribution of each dimension level to the final opinion depends on the strength of association between that level and each of the two hypotheses under consideration. This theory is represented formally as an additive-difference model, various special cases of which correspond to other algebraic models in the literature. Two of the special cases were empirically investigated under varying levels of experience and payoffs, using the techniques of conjoint-measurement theory and ordinal data from individual subjects. The data provided reasonably good support for the models under all conditions investigated and, in addition, showed interesting effects of the independent variables on salience and on processing.
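Schematically, and with notation assumed for illustration rather than quoted from the paper, the additive-difference representation can be written as:

```latex
% Additive-difference representation, in schematic notation assumed for
% illustration: dimension levels x_1, ..., x_k are attended in
% decreasing order of salience, with weights w_i declining accordingly.
\[
C(H_1, H_2) = \sum_{i=1}^{k} w_i \left[ s(x_i, H_1) - s(x_i, H_2) \right]
\]
% s(x_i, H) is the strength of association between level x_i and
% hypothesis H; fixing particular constraints on the w_i and s recovers
% other algebraic models in the literature as special cases.
```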
Article
The conventional multidimensional distance model of similarity judgment was compared with a new model in which component differences are weighted and then averaged. To evaluate the models, qualitative and quantitative predictions were derived from Romney and D'Andrade's (1964) componential analysis of American kinship terms, and these predictions were tested by having subjects rate the similarity (in Experiment 1) and the difference (in Experiment 2) between all possible pairs of 12 kinship terms. In both experiments, violations of qualitative predictions for both a simple distance model and a simple averaging model revealed that the componential analysis was not sufficient to account for the data. However, the averaging model was able to account for the data when the dichotomous dimension of lineality used by Romney and D'Andrade was replaced by a continuous dimension of immediacy or closeness of kin. In contrast, no comparable elaboration under the distance model was successful. These results were discussed in terms of the likely psychological processes underlying similarity judgment.
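The two candidate models can be contrasted schematically (notation assumed for illustration):

```latex
% Schematic contrast of the two models (notation assumed): let
% d_i = |a_i - b_i| be the difference between terms a and b on
% component i, with weights w_i.
\[
\text{distance:}\quad D(a,b) = \Bigl( \sum_i d_i^{\,m} \Bigr)^{1/m},
\qquad
\text{weighted average:}\quad D(a,b) = \frac{\sum_i w_i d_i}{\sum_i w_i}.
\]
% Adding a component on which two terms match (d_i = 0) leaves a
% distance unchanged but lowers the weighted average, so only the
% averaging model predicts that matching components can raise rated
% similarity.
```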
Article
A general theory of probabilistic information processing, together with its formal representation as an additive-difference model, is specialized to account for behavior when information is observed sequentially. Both the basic assumptions and the derived additive model are testable separately with ordinal data from individual subjects. Furthermore, scale values for the sequential information can be derived with the ordinal data and the model. Eight subjects were run extensively in probabilistic information processing tasks, in each problem sequentially receiving two samples of information in order to decide which of two hypotheses was more likely correct. The additive model was supported by the data of all but one subject; for those subjects described by this model, the prior assumptions received moderate to strong support. The pattern of results was employed to discuss additive versus averaging processes, sequential effects, and the general processing theory.
Article
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Anderson, N.H., 1981. Foundations of information integration theory. New York: Academic Press.
Beach, L.R., J.A. Wise and S. Barclay, 1970. Sample proportions and subjective probability revisions. Organizational Behavior and Human Performance 5, 183–190.
Eils, L.C., D.A. Seaver and W. Edwards, 1977. Developing the technology of probabilistic inference: aggregating by averaging reduces conservatism. Research Report 77-3, Social Science Research Institute, University of Southern California.
Einhorn, H.J. and R.M. Hogarth, 1982. A theory of diagnostic inference: I. Imagination and the psychophysics of evidence. Technical Report, University of Chicago Graduate School of Business.
Lopes, L.L., 1982. Toward a procedural theory of judgment. Technical Report, Wisconsin Human Information Processing Program (WHIPP 17), Madison, WI.
Lopes, L.L., 1985. Averaging rules and adjustment processes in Bayesian inference. Bulletin of the Psychonomic Society 23, 509–512.
Tversky, A. and D. Kahneman, 1974. Judgment under uncertainty: heuristics and biases. Science 185, 1124–1131.