Book

Judgment under uncertainty: Heuristics and biases

Abstract

Amos Tversky and Daniel Kahneman's 1974 paper 'Judgment under Uncertainty: Heuristics and Biases' is a landmark in the history of psychology. Though a mere seven pages long, it has helped reshape the study of human rationality and has had a particular impact on economics, where Tversky and Kahneman's work helped shape the entirely new subdiscipline of 'behavioral economics.' The paper investigates human decision-making, specifically what human brains tend to do when forced to deal with uncertainty or complexity. Based on experiments carried out with volunteers, Tversky and Kahneman discovered that humans make predictable errors of judgment when forced to deal with ambiguous evidence or make challenging decisions. These errors stem from 'heuristics' and 'biases' - mental shortcuts and assumptions that allow us to make swift, automatic decisions, often usefully and correctly, but occasionally to our detriment. The paper's huge influence is due in no small part to its masterful use of high-level interpretative and analytical skills, expressed in Tversky and Kahneman's concise and clear definitions of the basic heuristics and biases they discovered. Still providing the foundations of new work in the field 40 years later, the two psychologists' definitions are a model of how good interpretation underpins incisive critical thinking.
... However, there was no evidence provided to the directors to support such an optimistic assessment, nor any indication of the risks involved if that assessment was inaccurate; c) There was also a tendency to assume that management was sufficiently skilled that they would be able to handle any downturn and would, "as in the 1990s", be able to capitalize on "substantial opportunities". This is an example of the "illusion of control" bias identified by, among others, Langer (1975); d) There is no evidence of any follow-up 'Actions' requested by the Board. In the midst of the rapidly evolving subprime crisis, it is difficult to believe that, at the very least, a prudent Board would not have asked for regular briefings on the impact of the crisis, especially verifying and monitoring claims by management that "corrective measures have been put in place". ...
... The objective of the proposal here is to improve Risk Governance, especially for Systemically Important Banks, by developing measures that aim to improve both Board decision-making and regulatory oversight. The aim is NOT to second-guess, influence or change decisions made by a bank Board but to ensure that the decision-making process surrounding decisions, especially strategic decisions, is comprehensive and as free as possible from cognitive biases, such as Groupthink (Janis 1971; McConnell 2013) and "Illusions of Control" (Langer 1975; Durand 2003; Kahneman 2011). The approach proposed here is both 'carrot' and 'stick'. The 'carrot' is that Boards will retain their fiduciary role of setting/approving strategy and overseeing its execution by management, but their decisions will be scrutinized by independent experts as to the completeness and effectiveness of the decision processes, with feedback aimed at improving overall Board effectiveness. ...
Article
Full-text available
This paper presents the result of a discourse analysis using a novel approach inspired by the work of Michel Foucault. In doing so it explores a number of influences that have contributed to the emergence of an emphasis on risk in contemporary Western societies in relation to the mentally ill. This is undertaken firstly through an analysis of the effects of modernism and late modernism and the commoditisation of risk. This, it is argued, is achieved via a focus on the ‘event’ and its relations to responsibility, fault and blame. Locating these within the body politic, we note a breakdown of trust formed through cultural mosaics and notions of crime. Second, we review the rise of risk in relation to forms of professional, political and public insurance. Through the mobilisation of interest, professional technology is scrutinised and risk is equated with danger. Finally, postmodern risk is associated with communal risk through an interplay of the discourses of dangerous events and the pseudo-scientific techno-language of psychiatric risk assessment and management strategies. This leads to the observed rise of unsustainable expectations.
... The result is 0.078. We know from several studies that physicians, college students (Eddy, 1982) and staff at Harvard Medical School (Casscells, Schoenberger, & Grayboys, 1978) all have equally great difficulties with this and similar medical disease problems. For instance, Eddy (1982) reported that 95 out of 100 physicians estimated the posterior probability P(cancer|positive) to be between 70% and 80%, rather than 7.8%. ...
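The 7.8% figure quoted above follows directly from Bayes' theorem. As an illustrative sketch, using the parameter values standard in the literature on this mammography problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate; these specific numbers are an assumption, as the excerpt does not restate them):

```python
# Worked Bayes computation behind the "result is 0.078" quoted above.
# Assumed inputs (standard in the mammography-problem literature):
# prevalence 1%, sensitivity 80%, false-positive rate 9.6%.

def posterior(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(0.01, 0.80, 0.096)
print(round(p, 3))  # 0.078 -- not the 70-80% most physicians estimated
```

The intuition is that with a 1% prior, the 9.6% of healthy patients who test positive vastly outnumber the true positives, dragging the posterior far below the test's sensitivity.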
... Traditional theoretical approaches to studying disaster risk communication exist in disciplinary approaches in two broad ways, viz. psychometrics through perception-based studies [33,43] and the application of heuristics [40,44,45]. Contemporary scholarly research around risk communication and perception has explored the integration of technological advancements such as Artificial Intelligence into Disaster Risk Communication [46,47]; disaster politics and community engagement [15,32,48]; socio-psychological perspectives in post-disaster communication [15]; public health integration into disaster risk reduction and resilience approaches [49]; the role of media [11,50-52]; integration with climate change [38,53,54], climate change communication [3,4,7,55,56]; individual as well as community development [55,57] and livelihood-environmental sustainability linkages [58]. ...
Article
The role of media framing is drawing significant scholarly attention among disaster and climate scholars in recent times, in terms of its short and long-term impacts on risk preparedness and climate change adaptation. In this paper, we explore the connections between media framing of disasters, and risk communication and preparedness. Through the case of media coverage of eleven newspapers (international and national publications) around the event of Cyclone Amphan in South Asia, we portray a strong media framing around the event. Our findings are three-fold. Firstly, the response system in India and Bangladesh could not follow pre-determined disaster action plans and protocols for a coordinated response, due to the risks and restrictions associated with the Covid-19 pandemic. Secondly, the journalistic response to cyclone Amphan framed the disaster event as ‘natural’, thus reinforcing the reliance on a short-term Response & Recovery centric approach (evacuation, rescue, and relief), over long-term approaches such as disaster preparedness and prevention (adaptation, mitigation, and resilience). Finally, we find that media framing focused on personal stories of individuals helps advance the needs of vulnerable groups; yet at the same time concretizes a relief-centric approach that ignores questions around disaster infrastructure, resilience, and climate change adaptation. We contend that an integrated risk communication approach that is adaptive, takes into account multiple risks and complexities while allowing coordinated efforts between actors and institutions is necessary to develop an effective response policy for disasters and climate-induced extreme events in the future.
... On the contrary, they can represent a simplification of certain problems as well as a shortcut to reasoning. However, they can often lead to cognitive errors (misconceptions) of which the subject has no awareness (Kahneman & Tversky, 1972; Kang & Park, 2019; Morvan & Jenkins, 2017). In terms of knowledge improvement, many studies (Batanero & Borovcnik, 2016; Khazanov & Prado, 2010) testify that standard courses in stochastics do not have a great influence on probability misconceptions. ...
Article
Full-text available
Background In the rapidly changing industrial environment and job market, the engineering profession requires a vast body of skills, one of them being decision making under uncertainty. Knowing that misunderstanding of probability concepts can lead to wrong decisions, the main objective of this study is to investigate the presence of probability misconceptions among undergraduate students of electrical engineering. Five misconceptions were investigated: insensitivity to sample size, base-rate neglect, misconception of chance, illusory correlation, and biases in the evaluation of conjunctive and disjunctive events. The study was conducted with 587 students who attended bachelor schools of electrical engineering at two universities in Serbia. The presence of misconceptions was tested using multiple-choice tasks. This study also introduces a novel perspective, reflected in the examination of the correlation between students' explanations of given answers and their test scores. Results The results of this study show that electrical engineering students are susceptible to misconceptions in probability reasoning. Although future engineers from the sample population were most successful in avoiding misconceptions of chance, only 35% of examinees were able to provide a meaningful explanation. Analysis of students' explanations revealed that in many cases the majority of students were prone to common misconceptions. Among the sample population, a significant percentage of students were unable to justify their own answers even when they selected the correct option. The results also indicate that formal education in probability and statistics did not significantly influence the test score. Conclusions The results of the present study indicate a need for further development of students' deep understanding of probability concepts, as well as the need for the development of competencies that enable students to validate their answers.
The study emphasizes the importance of answer explanations, since they allow us to discover whether students who mark the correct answer have some misconceptions or may be prone to some other kind of error. We found that the examinees who failed to explain their choices had much lower test scores than those who provided some explanation.
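One of the misconceptions the study names, biased evaluation of conjunctive and disjunctive events, has a simple arithmetic core. As a sketch with illustrative toy numbers (a 7-stage process with each stage succeeding at probability 0.9; the specific numbers are an assumption, not taken from the study):

```python
# Numeric illustration of the conjunctive/disjunctive bias named above.
# Assumed toy numbers: a 7-stage process, each stage succeeding with p = 0.9.

p_stage, n_stages = 0.9, 7

# Conjunctive event: every stage must succeed. People anchor on the high
# per-stage probability (0.9) and overestimate the overall chance.
p_all = p_stage ** n_stages
print(round(p_all, 3))  # 0.478

# Disjunctive event: at least one stage fails. People anchor on the small
# per-stage failure probability (0.1) and underestimate the overall chance.
p_any_failure = 1 - p_all
print(round(p_any_failure, 3))  # 0.522
```

Anchoring on the per-stage probability explains both directions of the bias: conjunctive chances look better than they are, disjunctive risks look smaller than they are.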
... Other effects found in the framework of behavioral economics can affect decision-making, such as the anchoring effect. It is considered a cognitive bias that describes the general human tendency of decision makers to depend excessively on the first piece of information provided (Morvan & Jenkins, 2017). In the process of decision-making, anchoring takes place when individuals use initial information to make successive judgments. ...
Article
Anchoring bias has been extensively studied in behavioral economics, but the influence of individual differences on this bias has rarely received attention. However, research suggests that personality traits may affect susceptibility to bias. The present study addresses how dark triad traits may affect decisions within the framework of behavioral game theory, how decisions are justified, whether anchors are generated, and how personality traits affect anchors. A total of 379 participants played the prisoner's dilemma and traveler's dilemma games. In both games, three groups were configured, two with anchors and one control group without anchors. The games were conducted through an online questionnaire in which participants also completed the short dark triad test. The Machiavellianism and psychopathy traits yielded relationships with the decisions made in the games but did not affect the anchors. In the traveler's dilemma there was clear anchoring, and in the prisoner's dilemma there was no anchoring. In addition, players rationalized the decisions they made in the face of anchoring.
... It was first described by Ellen J. Langer in 1982. It consists in attributing to oneself a causal role in an event whose outcome depends entirely on external factors, and especially on chance [Langer 1982]. In relation to gambling, it is expressed in the search for playing 'strategies' believed to increase the chance of winning, such as: particular ways of hitting the buttons of a slot machine, observing the machines and keeping statistics of their payouts, writing out the combinations of numbers drawn in lotteries and marking numbers with reference to previous draws, and so on. ...
... However, in primary care settings (which may have a low disease prevalence) some doctors grossly overestimate the disease probability from a screening test when the patient has a positive result. 3 They also seem to confuse the sensitivity of the test with its positive predictive value; that is, they assume that if the test is very sensitive, a positive result means the presence of the disease. 4 The correct definitions of sensitivity and predictive values are known to most doctors, but only a few know how to apply them correctly to their patients. ...
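The sensitivity/PPV confusion described above can be made concrete with a short calculation. As a sketch using illustrative numbers (a hypothetical test with 95% sensitivity and 95% specificity; these parameters are assumptions, not from the excerpt):

```python
# Sketch of why sensitivity != positive predictive value (PPV).
# Assumed illustrative parameters: 95% sensitivity, 95% specificity.

def ppv(prevalence, sensitivity, specificity):
    """P(disease | positive test) for given prevalence and test accuracy."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# In a high-prevalence setting (30%) the test looks excellent...
print(round(ppv(0.30, 0.95, 0.95), 2))  # 0.89
# ...but in low-prevalence primary care screening (1%), most positives are false.
print(round(ppv(0.01, 0.95, 0.95), 2))  # 0.16
```

The sensitivity (0.95) never changes between the two settings; only the prevalence does, which is exactly the distinction the excerpt says few doctors apply in practice.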
Article
A 51-year-old male presented for treatment with a high-grade fever which had persisted for 7 days and was associated with abdominal discomfort. He self-medicated with paracetamol, which temporarily relieved the fever, but it recurred a few hours later. He also noted vague abdominal pain associated with a soft bowel movement. There was no cough or any sign of respiratory infection. On physical examination the patient had normal vital signs with a temperature of 39 °C. Typhoid fever was considered as a diagnosis because there had been reports of a recent outbreak. In order to make a correct diagnosis and give appropriate treatment the physician must choose between a Widal test or a dot-blot enzyme-linked immunosorbent assay (ELISA) test. Clinical dilemma Although the Widal test was introduced over 100 years ago it continues to be plagued with controversies involving the quality of the antigens used and the interpretation of the result, particularly in endemic areas. 1 A recently developed monoclonal antibody test, the dot-blot ELISA, was compared with the Widal test and was found to be accurate using blood culture as the reference standard. 2 Between these two available tests, which should a family physician request? The answer to this question depends on several factors: accuracy, availability, difficulty in performance, and cost of the test. Another important consideration in making a diagnostic decision is to weigh up how much additional information the test will add to what is already known.
... Self-regulation theory is closely related to the feeling of control, that people strive to have even if they only build an " illusion of control " (Langer, 1982), suggesting that activities of denial and ex-post rationalization gain in importance when feeling of control decreases. (Fenton-O'Creevy et al., 2003). ...
... There can also be a diffusion of responsibility in groups leading to reduced participation, sometimes referred to as social loafing: the probability of such behaviour increases with group size and the degree of physical and temporal dispersion of group members (Chidambaram & Tung, 2005). Further, a number of well-documented cognitive biases reduce the quality of individual judgment and reasoning (e.g., Arnott, 2006; Kahneman, Slovic & Tversky, 1982; Morvan & Jenkins, 2017) and these might be exacerbated by group processes. For example, individuals have often been found to be overconfident (e.g., Johnson & Fowler, 2011; Lichtenstein, Fischhoff & Phillips, 1982), and groups demonstrate shifts towards even greater risk taking and confidence in judgment than their component members (Dodoiu, Leenders & van Dijk, 2016; Stoner, 1968). ...
Preprint
Full-text available
Bayes Nets (BNs) are extremely useful for causal and probabilistic modelling in many real-world applications, often built with information elicited from groups of domain experts. But their potential for reasoning and decision support has been limited by two major factors: the need for significant normative knowledge, and the lack of any validated methods or software supporting collaboration. Consequently, we have developed a web-based structured technique – Bayesian Argumentation via Delphi (BARD) – to enable groups of domain experts to receive minimal normative training and then collaborate effectively to produce high-quality BNs. BARD harnesses multiple perspectives on a problem, while minimising biases manifest in freely interacting groups, via a Delphi process: solutions are first produced individually, then shared, followed by an opportunity for individuals to revise their solutions. To test the hypothesis that BNs improve due to Delphi, we conducted an experiment whereby individuals with a little BN training and practice produced structural models using BARD for two Bayesian reasoning problems. Participants then received 6 other structural models for each problem, rated their quality on a 7-point scale, and revised their own models if they wished. Both top-rated and revised models were on average significantly better quality (scored against a gold-standard) than the initial models, with large and medium effect sizes. We conclude that Delphi – and BARD – improves the quality of BNs produced by groups. Further, although rating cannot create new models, rating seems quicker and easier than revision and yielded significantly better models – so, we suggest efficient BN amalgamation should include both.
... Machine learning and data abstraction techniques are nowadays widely used in the medical domain. 1-3 Support systems currently in use are primarily based on rule-based systems, 4-7 heuristics, 8 decision trees, 9 fuzzy logic, 10-13 artificial neural networks, 14,15 and Bayesian networks. 16,17 This list of modelling techniques applied in Decision Support Systems (DSS) is not exhaustive, and medical diagnosis is a popular problem for researchers working on classification algorithms. ...
Article
Full-text available
BACKGROUND: In dentistry, clinical problems can be resolved using many therapeutic approaches that may result in very different therapies. In order to choose the best option, a good evaluation of a therapy's long-term survival and success rate is mandatory. The routine use of decision support analysis software is nowadays limited due to the lack of software flexibility, especially when a variety of possible therapeutic options are present. The aim of this research was to develop a new algorithm model for Decision Support System software to give diagnosis support in dentistry. METHODS: Beta tests were designed to study the computer software in different clinical situations based on clinical data. The therapeutic options can be conservative/endodontic or extractive/prosthetic therapies. In two of the selected clinical situations it was possible to choose both therapies. RESULTS: In the clinical situations tested, the DSS software correctly identified the several therapeutic options. When multiple treatments were possible, the beta test showed an output mask that correctly presented a range of options with their corresponding survival and success rates. CONCLUSIONS: The software architecture proposed by the authors is technically feasible, can support the clinician's choices based on scientific evidence and up-to-date references, and can help gain informed consent based on data easily understandable for the patient.
Article
This article aims to reconnect project risk management with its roots in psychology and economics and thereby generate a cognitive approach to project risk management. While there has been widespread application of the tools and techniques of project risk management, and good practice has been captured in a large number of different standards and texts, few signs of improvement are apparent in project performance. The article suggests that the inappropriate use of project risk management techniques may be part of the problem rather than part of the solution here, and that we need to rethink project risk management from first principles. Starting from a presumption that project risk management is the essence of project management more generally, the article offers a review of some of the key contributions from psychology and economics that have shaped our thinking before presenting a cognitive model of project risk managing.
Article
Full-text available
Especially over the last fifty years, the field of behavioral finance has gained considerable importance within the finance literature. Because they directly affect financial markets, the factors shaping individual investors' financial decision and behavior mechanisms have become very important to examine. Unlike the traditional financial approach, the field of behavioral finance assumes that investors are irrational. With the introduction of this behavioral finance approach, attempts have been made to understand how and in what ways individuals depart from rationality in their financial investment decisions. Some of investors' irrational behaviors have been framed under the headings of cognitive biases and heuristics. In this study, investor behaviors are examined within a conceptual framework under the headings of cognitive biases and heuristics.
Article
The purpose of the study was to explore the criteria used by mid-level managers for the evaluation of managerial decisions' goodness. A questionnaire comprising 25 items was administered to 145 managers who were asked to assign each item a score indicating its suitability to serve as a criterion for decisions' goodness. A factor analysis performed on these scores yielded eight meaningful factors. The factors were interpreted as representing the following criteria, ordered according to the values of the factor scores: (1) goodness of outcomes, (2) correctness of the decision process, (3) information utilization, (4) realism and resources, (5) ethics, (6) subjective rationality, (7) acceptance, and (8) feelings and social compromise.
Conference Paper
Full-text available
Despite increasingly sophisticated programming languages, software developer training, testing tools, integrated development environments and project management techniques, software project failure, abandonment and overrun rates remain high. One way to address this is to focus on common systematic errors made by software project participants. In many cases, such errors are manifestations of cognitive biases. Consequently this paper proposes a theory of the role of cognitive biases in software development project success. The proposed theory posits that such errors are mutual properties of people and tasks; they may therefore be avoided by modifying the person-task system using specific sociotechnical interventions. The theory is illustrated using the case of planning poker, a task estimation technique designed to overcome anchoring bias.
Article
A method is presented for deciding whether correct predictions about other people are based on simulation or theory use. The differentiating power of this method was assessed with cognitive estimation biases (e.g. estimating the area of Brazil) in two variations. Experiments 1 and 2 operated with the influence of response scales of different length. Experiment 3 used the difference between free estimates that tended to be far off the true value and estimates constrained by an appropriate response scale, where estimates became greatly more realistic. The critical question is how well observer subjects can predict these target biases under two different presentation conditions. Response scale biases (Experiments 1 and 2) were more strongly predicted when observer subjects were presented with the two scales juxtaposed, than when responses for each scale were given independently. This speaks for the use of a theory, since simulation should, if there is any difference at all, be made more difficult by the juxtaposition of conditions. The difference between free and constrained estimations (Experiment 3) was more strongly predicted under independent than under juxtaposed presentation. This speaks for the use of simulation since use of a theory should, if anything, be helped by juxtaposition of problems since it helps highlight the theoretically relevant factor. Results are discussed in view of recent proposals about when simulation is likely to be used, i.e. for belief fixation but not action prediction (Stich and Nichols, 1995b), for content fixation (Heal, 1996a), and for rational effects only (Heal, 1996b).
Article
This paper explores the knowledge-related factors explaining the timing of entry of VC firms into new technological waves. On one hand, we argue that the firm’s prior investment expertise facilitates early entry by enabling the firm to deal better and benefit from the uncertainty surrounding a new technology. On the other hand, the firm’s knowledge concentration and distance as well as the ossification associated with its age impede such entry. We found empirical support for these notions in the history of US venture capital investment activity from 1962 to 2004. The results are consistent across four technology cycles that span the above period – semiconductors, hardware, biotechnology, and internet – suggesting good generalizability of our results. We contribute to the literature of decision making under uncertainty by highlighting the endogenous role that the decision maker plays in this process.
Article
Full-text available
Judgment permeates any forecasting process. It is also subject to systematic study. Considering judgment as an understandable phenomenon allows access to the research literature examining judgments in other contexts and to the research methodologies needed to study the judgments needed for specific forecasting tasks. Such research can clarify how much forecasts are to be trusted and how forecasts might be improved (by evaluating and improving their judgmental component). Indeed, just identifying where judgment enters a forecast can make it more useful. The approach outlined here offers a complement both to seeing ‘judgmental forecasting’ as an irreducible whole and to focusing primarily on a few judgmental subtasks (e.g., assessing confidence intervals). It argues that such focused empirical study can be profitably performed on other subtasks, creating a more comprehensive picture of judgment.
Article
Full-text available
Observing the distribution of the old European Union 15 (EU15) governments ordered by political families since 1978, a sharp Right–Left partisan cycle seems to appear. If we hypothesize that the EU15 is one geo-political unit called Euroland, such an empirical observation is accurate both for the aggregate number of Prime Ministers in office and for the aggregate vote share. In our Euroland, we consider each country of the EU15 as a region where citizens can choose between five political families when voting (the classic Right, the moderate and social-democratic Left, the Left of the Left, the far Right and rightist populists, and the ecologists). Our panel data from these countries includes results from 130 legislative elections, 1978–2008. After building a politico–economic vote function for each political bloc, we estimate a seemingly unrelated regression (SUR) from which we forecast their respective electoral weights in Europe for the years to come. Accordingly, should we have more Keynesian, monetarist or free market oriented policies? Forecasting partisan dynamics should provide some answers.
Article
This article examines prospects for theories and methods of preferences, both in the specific sense of the preferences of the ideal rational agents considered in economics and decision theory and in the broader interplay between reasoning and rationality considered in philosophy, psychology, and artificial intelligence. Modern applications seek to employ preferences as means for specifying, designing, and controlling rational behaviors as well as descriptive means for understanding behaviors. We seek to understand the nature and representation of preferences by examining the roles, origins, meaning, structure, evolution, and application of preferences.
Chapter
The understanding of controlling as the rationality assurance of management, conceptually developed by Weber and Schäffer and widely adopted in corporate practice, defines the task of controlling as recognizing and reducing managers' rationality deficits. Rationality deficits can be traced back to managers' motivation problems (opportunistic behavior) and to their cognitive limitations (Weber/Schäffer 2006, 33 ff.).1 Deriving and implementing countermeasures to reduce managers' rationality deficits presupposes a sound knowledge of managers' typical cognitive limitations, e.g. perceptual distortions or the overestimation of certain matters. For this purpose, it makes sense to draw on the existing findings and proven methods of behavioral research. Numerous behavioral studies show "that in clearly specified situations people do not behave in the way the economic model of behavior would predict, and in this sense act anomalously or paradoxically: they are subject to behavioral anomalies." (Eichenberger 1992, 2; similarly for the accounting context, Sprinkle 2003, 302).
Article
We tested a method for solving Bayesian reasoning problems in terms of spatial relations as opposed to mathematical equations. Participants completed Bayesian problems in which they were given a prior probability and two conditional probabilities and were asked to report the posterior odds. After a pre‐training phase in which participants completed problems with no instruction or external support, participants watched a video describing a visualization technique that used the length of bars to represent the probabilities provided in the problem. Participants then completed more problems with a chance to implement the technique by clicking interactive bars on the computer screen. Performance improved dramatically from the pre‐training phase to the interactive‐bar phase. Participants maintained improved performance in transfer phases in which the interactive bars were removed and they were required to implement the visualization technique with either pencil‐and‐paper or no external medium. Accuracy levels for participants using the visualization technique were very similar to participants trained to solve the Bayes theorem equation. The results showed no evidence of learning across problems in the pre‐training phase or for control participants who did not receive training, so the improved performance of participants using the visualization method could be uniquely attributed to the method itself. A classroom sample demonstrated that these benefits extend to instructional settings. The results show that people can quickly learn to perform Bayesian reasoning without using mathematical equations. We discuss ways that a spatial solution method can enhance classroom instruction on Bayesian inference and help students apply Bayesian reasoning in everyday settings.
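As a hypothetical illustration of the problem type described above (not the authors' materials): given a prior probability and two conditional probabilities, the requested posterior odds follow from Bayes' rule in odds form, which is also what the bar-length visualization encodes geometrically.

```python
# Illustrative sketch of the Bayesian task described in the abstract:
# given a prior P(H) and the conditionals P(E|H) and P(E|not-H),
# report the posterior odds P(H|E) : P(not-H|E).

def posterior_odds(prior, p_e_given_h, p_e_given_not_h):
    """Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form)."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    return prior_odds * likelihood_ratio

# Example values (hypothetical): 20% prior, 80% hit rate, 10% false-alarm rate.
odds = posterior_odds(0.20, 0.80, 0.10)
print(odds)  # 2.0, i.e. a posterior probability of 2/3
```

The visualization technique in the study represents each of these probabilities as the length of a bar, so the same ratio can be read off spatially instead of computed algebraically.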
Chapter
Underlying concepts
Comparison with other evaluation approaches
Key messages
Suggested reading (loosely grouped by the authors' primary discipline)
References and notes
Chapter
Cancer is now the second leading cause of death globally. An efficient classification model is considered essential in modern diagnostic medicine to help experts and physicians make more accurate and earlier predictions and reduce the rate of mortality. Machine learning techniques are broadly utilized for developing intelligent computational systems, exploiting recent advances in digital technologies and the significant storage capabilities of electronic media. Ensemble learning algorithms and semi-supervised algorithms have been developed independently to build efficient and robust classification models from different perspectives: the former attempt to achieve strong generalization by using multiple learners, while the latter do so by exploiting unlabeled data. In this work, we propose an improved semi-supervised self-labeled algorithm for cancer prediction, based on ensemble methodologies. Our preliminary numerical experiments illustrate the efficacy and efficiency of the proposed algorithm, showing that reliable and robust prediction models can be developed by adapting ensemble techniques within the semi-supervised learning framework.
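A minimal sketch of the general idea behind self-labeled semi-supervised learning with an ensemble base learner, as described above. The base model, confidence threshold, and stopping rule are illustrative assumptions, not the authors' algorithm.

```python
# Sketch only: self-training (self-labeling) with an ensemble base learner.
# Confident predictions on unlabeled data are added as pseudo-labels and
# the ensemble is retrained. Threshold and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_label(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=5):
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    for _ in range(max_rounds):
        if len(X_unlab) == 0:
            break
        model.fit(X_lab, y_lab)
        proba = model.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        sure = conf >= threshold              # keep only confident pseudo-labels
        if not sure.any():
            break
        pseudo = model.classes_[proba.argmax(axis=1)[sure]]
        X_lab = np.vstack([X_lab, X_unlab[sure]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~sure]
    model.fit(X_lab, y_lab)
    return model
```

In this pattern the ensemble supplies both predictions and a confidence measure, which is what makes it a natural base learner for self-labeling.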
Article
Bayesian approaches to data analysis are considered within the context of behavior analysis. The paper distinguishes between Bayesian inference, the use of Bayes Factors, and Bayesian data analysis using specialized tools. Given the importance of prior beliefs to these approaches, the review addresses those situations in which priors have a big effect on the outcome (Bayes Factors) versus a smaller effect (parameter estimation). Although there are many advantages to Bayesian data analysis from a philosophical perspective, in many cases a behavior analyst can be reasonably well‐served by the adoption of traditional statistical tools as long as the focus is on parameter estimation and model comparison, not null hypothesis significance testing. A strong case for Bayesian analysis exists under specific conditions: When prior beliefs can help narrow parameter estimates (an especially important issue given the small sample sizes common in behavior analysis) and when an analysis cannot easily be conducted using traditional approaches (e.g., repeated measures censored regression).
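A minimal sketch, not from the paper: conjugate Beta–Binomial updating shows how an informative prior narrows a parameter estimate when the sample is small, the situation the review highlights as favoring Bayesian parameter estimation in behavior analysis.

```python
# Illustrative Beta-Binomial conjugate update for a binomial rate.
# All numbers are hypothetical.

def beta_posterior(successes, failures, prior_a=1.0, prior_b=1.0):
    """Return posterior Beta(a, b) parameters given binomial data."""
    return prior_a + successes, prior_b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Small sample: 7 successes in 10 trials.
flat = beta_mean(*beta_posterior(7, 3, 1, 1))        # flat prior -> ~0.667
informed = beta_mean(*beta_posterior(7, 3, 20, 20))  # prior centered at 0.5 -> 0.54
```

With only ten observations, the informative prior pulls the estimate toward 0.5 and shrinks its uncertainty, illustrating why priors matter most exactly when samples are small.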
Preprint
Popper on justification of science
Article
Risk matrices are a common way to communicate the likelihood and potential impacts of a variety of risks. Until now, there has been little empirical work on their effectiveness in supporting understanding and decision making, and on how different design choices affect these. In this pair of online experiments (total n = 2699), we show that risk matrices are not always superior to text for the presentation of risk information, and that a nonlinear/geometric labeling scheme helps matrix comprehension (when the likelihood/impact scales are nonlinear). To a lesser degree, results suggested that changing the shape of the matrix so that cells increase in size nonlinearly facilitates comprehension as compared to text alone, and that comprehension might be enhanced by integrating further details about the likelihood and impact onto the axes of the matrix rather than putting them in a separate key. These changes did not affect participants’ preference for reducing impact over reducing likelihood when making decisions about risk mitigation. We recommend that designers of risk matrices consider these changes to facilitate better understanding of relationships among risks.
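As a hypothetical sketch of the nonlinear (geometric) labeling scheme discussed above: when each band of the likelihood and impact scales covers a fixed multiple of the previous one, a risk can be placed on the matrix by banding on a log scale. Scale bounds and matrix size below are illustrative assumptions.

```python
# Illustrative only: mapping a risk onto a 5x5 matrix whose likelihood and
# impact scales are geometric (each band spans a fixed multiple of the last).
import math

def band(value, lo, hi, n_bands):
    """Map value in [lo, hi] to band 0..n_bands-1 on a log (geometric) scale."""
    frac = (math.log10(value) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))
    return min(n_bands - 1, max(0, int(frac * n_bands)))

# Hypothetical risk: likelihood 0.02/year on a 0.001..1 scale,
# impact $250k on a $1k..$10M scale.
cell = (band(0.02, 0.001, 1.0, 5), band(250_000, 1_000, 10_000_000, 5))
print(cell)  # (2, 2)
```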
Article
This paper examines whether investors exhibited evidence of the availability heuristic in their investment decisions when significant price changes occurred in the British stock market during the 2010–2018 period. We hypothesize that if a significant stock price move takes place on a day when the stock market index also undergoes a significant change (either positive or negative), then the magnitude of that shock may be amplified by the availability of positive or negative investment outcomes. We applied three different proxies for large stock price changes, which yielded a robust sample of events for this study. We found no significant evidence of the availability heuristic, and no significant evidence of price overreaction for either price decreases or increases. Instead, we found robust results suggesting randomness in the behavior of stock prices in this period, supporting the efficiency of financial markets and contrasting with the results of similar studies carried out in the United States.
Article
Gender differences in peer review and the associated impact on innovation financing are well documented but less well understood. We study peer review in the National Aeronautics and Space Administration Small Business Innovation Research program, a public initiative seeking to increase women's access to innovation funds. We theorize that reviewers use status characteristics inappropriately as heuristics and create gender bias. Econometric analysis shows evidence of direct bias against female applicants, an effect linked to challenges for newcomers in demonstrating individual legitimacy rather than concerns of the organizational legitimacy of the associated firm. We also demonstrate a corrective redistribution to reverse this bias and create equity in the funding outcome. As these results negatively impact diversity in innovation, we propose policy recommendations to overcome this bias. Peer review is an important mechanism to rank and select technical proposals for funding. We examine the role of gender in a government program conducting this process. Controlling for the proposal quality and other factors, we show that the gender of the proposer is linked to lower scores. This effect is associated with proposals from females who are new to the program, suggesting their challenges in demonstrating credibility as leaders of these projects, and exacerbated by the fact that women represent a disproportionately high share of newcomers. Subsequently, the program reverses this bias such that the funding outcomes do not show the same inequities. This has important implications for policies supporting gender diversity in innovation.