Article

Belief in the law of small numbers

Authors:
Amos Tversky, Daniel Kahneman

Abstract

People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics.

... Yet many scientists fall short in their understanding of statistical concepts. One cognitive bias demonstrated by Tversky & Kahneman [1] is the 'belief in the law of small numbers'. This refers to the tendency to overestimate the stability of estimates that come from small samples, which, following Yoon et al. [2], we shall term 'sample size neglect'. ...
... We suspect the explanation may go beyond lack of training and reflect the influence of sample size neglect, which leads us to have intuitions about sample size that are at odds with reality. Consider this example [1]: ...
... Items 1A and 1B were somewhat analogous to the 'proportion of male births' item from the original study by Tversky & Kahneman [1], although we focused on means rather than proportions, to make the question more relevant to the training. The most common response (both before and after training) was to select a foil that stated that sample size did not matter, even though in training, the participants had been able to see that the mean bars for small samples were far more variable than those of large samples. ...
Article
Full-text available
‘Sample size neglect’ is a tendency to underestimate how the variability of mean estimates changes with sample size. We studied 100 participants, from science or social science backgrounds, to test whether a training task showing different-sized samples of data points (the ‘beeswarm’ task) can help overcome this bias. Ability to judge if two samples came from the same population improved with training, and 38% of participants reported that they had learned to wait for larger samples before making a response. Before and after training, participants completed a 12-item estimation quiz, including items testing sample size neglect (S-items). Bonus payments were given for correct responses. The quiz confirmed sample size neglect: 20% of participants scored zero on S-items, and only two participants achieved more than 4/6 items correct. Performance on the quiz did not improve after training, regardless of how much learning had occurred on the beeswarm task. Error patterns on the quiz were generally consistent with expectation, though there were some intriguing exceptions that could not readily be explained by sample size neglect. We suggest that training with simulated data might need to be accompanied by explicit instruction to be effective in counteracting sample size neglect more generally.
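The pattern the participants could observe in the beeswarm task (means from small samples scatter far more than means from large samples) can be reproduced with a short simulation. The sketch below is purely illustrative and is not the training task from the study; the population parameters and sample sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary population for illustration: normal with mean 100, SD 15.
POP_MEAN, POP_SD = 100, 15

def sd_of_sample_means(n, n_draws=10_000):
    """Draw n_draws samples of size n and return the spread (SD) of their means."""
    means = rng.normal(POP_MEAN, POP_SD, size=(n_draws, n)).mean(axis=1)
    return means.std()

for n in (5, 20, 80, 320):
    # Compare the empirical spread with the theoretical standard error sigma / sqrt(n).
    print(f"n={n:3d}  SD of sample means ~ {sd_of_sample_means(n):5.2f}  "
          f"(theory: {POP_SD / n ** 0.5:5.2f})")
```

The spread of the means shrinks roughly with the square root of the sample size, which is precisely the dependence that sample size neglect leads people to underestimate.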
... But since most of the evidence we will draw on comes from or pertains to the cognitive, behavioral, and social sciences, they will be our main focus. Exceptions include, for instance, Mahoney (1976) or Tversky and Kahneman (1971); we will return to their work below. Moreover, drawing unreflective "default inferences" from particular samples to broader populations of individuals is commonly thought to be at odds with how scientists generalize (Claveau & Girard, 2019, p. 855). ...
... There is more direct empirical evidence of generalization bias in scientific induction. In a seminal study, Tversky and Kahneman (1971) found that many psychologists viewed a sample randomly drawn from a population as highly representative, i.e., generalizable, even when this was not warranted. Most of the surveyed psychologists underestimated the systematic increase in uncertainty for smaller samples such that they placed about the same confidence in a mean derived from a small sample as in a mean derived from a larger, more representative sample. ...
... Inversely, estimates from smaller samples tend to be less reliable. Tversky and Kahneman (1971) thus dubbed the systematic overconfidence in estimates obtained from small samples the "belief in the law of small numbers". With respect to this "belief" among psychologists, it is fair to assume that the psychologists that Tversky and Kahneman surveyed did not deliberately overgeneralize. ...
Article
Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. Here we challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized conclusions. We support this account of scientific induction by integrating a range of disparate findings from across the cognitive sciences that have until now not been connected to research on the nature of scientific induction. The view that scientific induction involves by default a generalization bias calls for a revision of our current thinking about scientific induction and highlights an overlooked cause of the replication crisis in the sciences. Commonly proposed interventions to tackle scientific overgeneralizations that may feed into this crisis need to be supplemented with cognitive debiasing strategies to most effectively improve science.
... Furthermore, the inherent complexity of maritime systems, with numerous interconnected actors and a large amount of pertinent information, is impossible for humans to synthesise fully. Significant work in the social sciences has demonstrated how experts are subject to biases and heuristics which impact the accuracy of their judgements (Tversky and Kahneman, 1971; Slovic et al., 1979; Kahneman et al., 1982; Tetlock, 2005; Rae and Alexander, 2017). Without an evidence-based and systematic method for maritime risk assessment, any conclusions reached might be flawed, or open to challenge by stakeholders with differing viewpoints and experiences. ...
... Experts are an expensive input for developing the models and assigning probabilities. Furthermore, many studies have noted that their inputs are subject to biases and heuristics that impact the accuracy of their judgements (Tversky and Kahneman, 1971; Slovic et al., 1979; Kahneman et al., 1982; Tetlock, 2005; Rae and Alexander, 2017). Each of these limitations presents a key challenge to the effective implementation of maritime risk assessment. ...
... Thirdly, their high cost and questionable predictive capability lead both to a reluctance to use them and to distrust amongst stakeholders in their results. As a result, many decisions are made using qualitative expert judgement (Munim et al. 2020), which is subject to biases and heuristics (Tversky and Kahneman, 1971; Tetlock, 2005). ...
Thesis
Shipping is an essential component of the global economy, but every year accidents result in significant loss of life and environmental pollution. Navigating vessels might collide with one another, run aground or capsize amongst a multitude of challenges to operating at sea. As the number and size of vessels increase, novel or autonomous technologies are adopted and new environments such as the Arctic are exploited, these risks are likely to grow. Coastal states, ports and developers have a responsibility to assess these risks and, where the risk is intolerably high, implement mitigation measures to reduce them. To support this, significant research has developed a field of maritime risk analysis, attempting to employ rigorous scientific study to quantify the risk of maritime accidents. Such methods are diverse, yet have received criticism for their lack of methodological rigour, narrow scope and one-dimensional rather than spatio-temporal approach to risk. More broadly, there is a recognition that by combining different datasets, novel techniques might lead to more robust and practicable risk analysis tools. This thesis contributes to this purpose. It argues that by integrating massive and heterogeneous datasets related to vessel navigation, machine learning algorithms can be used to predict the relative likelihood of accident occurrence. Whilst such an approach has been adopted in other disciplines, it remains relatively unexplored in maritime risk assessment. To achieve this, four aspects are investigated. Firstly, to enable fast and efficient integration of different spatial datasets, the Discrete Global Grid System has been trialled as the underlying spatial data structure, in combination with the development of a scalable maritime data processing pipeline. Such an approach is shown to have numerous advantageous qualities, particularly relevant to large-scale spatial analysis, and addresses some of the limitations of the Modifiable Areal Unit Problem. Secondly, a national-scale risk model was constructed for the United States using machine learning methods, providing high-resolution and reliable risk assessment. This supports both strategic planning of waterways and real-time monitoring of vessel transits. Thirdly, to overcome the infrequency of accidents, near-miss modelling was undertaken; however, the results were shown to have only partial utility. Finally, a comparison is made of various conventional and machine learning methodologies, identifying that whilst the latter are often more complex, they address some failings of conventional methods. The results demonstrate the potential of these methods as a novel form of maritime risk analysis, supporting decision makers and contributing to improving the safety of vessels and the protection of the marine environment.
... When people encounter new information, they may struggle to properly incorporate past knowledge and existing beliefs to make judgments with regard to the new information, leading to biases in data interpretation [13]. For example, failed belief updating can drive people to be overconfident in their judgment [18,74], as can confirmation bias [35,74]. People who are subject to confirmation bias often attend and search only for information that supports prior beliefs and do not consider alternative explanations [46,61,81]. ...
Preprint
When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. Therefore, one bedrock principle of science is to minimize that bias by testing the predictions of one's belief against objective data. But interpreting visualized data is a complex perceptual and cognitive process. Through two crowdsourced experiments, we demonstrate that supposedly objective assessments of the strength of a correlational relationship can be influenced by how strongly a viewer believes in the existence of that relationship. Participants viewed scatterplots depicting a relationship between meaningful variable pairs (e.g., number of environmental regulations and air quality) and estimated their correlations. They also estimated the correlation of the same scatterplots labeled instead with generic 'X' and 'Y' axes. In a separate section, they also reported how strongly they believed there to be a correlation between the meaningful variable pairs. Participants estimated correlations more accurately when they viewed scatterplots labeled with generic axes compared to scatterplots labeled with meaningful variable pairs. Furthermore, when viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1. While data visualizations are typically thought to present objective truths to the viewer, these results suggest that existing personal beliefs can bias even objective statistical values people extract from data.
... The gambler's fallacy describes the irrational belief that a sequence of random events must correspond with one's perception of what constitutes randomness, which leads to believing that certain outcomes are more or less likely to happen than their base probability, based on what has happened so far (Goodie et al., 2019; Tversky & Kahneman, 1971). For example, this could refer to an individual who attempts to predict the outcome of a coin flip based on the previous results of that coin, despite every flip being independent of the others. ...
Article
Full-text available
Sports betting is an activity that has seen tremendous growth over the past decade. The integrative nature of sports betting in marketing mediums and the advent of modern technology make it a particularly dangerous form of gambling. This study aimed to compare the cognitions of sports bettors and non-sports gamblers. A total of 713 participants were recruited, of which 80 were sports bettors, 270 were non-sports gamblers, and 363 were non-gamblers. Cognitive distortions were measured using the Gamblers' Beliefs Questionnaire, which comprises two factors: Luck/Perseverance and Illusion of Control. The results of a between-groups MANOVA showed that sports bettors recorded higher scores for Luck/Perseverance (M = 35.27, SD = 13.63) than non-gamblers (M = 17.60, SD = 8.20, p < .001) and non-sports gamblers (M = 27.19, SD = 11.81, p < .001). Sports bettors also recorded higher Illusion of Control scores (M = 25.48, SD = 8.81) than both non-gamblers (M = 13.46, SD = 6.50, p < .001) and non-sports gamblers (M = 19.76, SD = 7.91, p < .001). Problem gambling was measured using the South Oaks Gambling Screen. A one-way analysis of variance between the three groups showed that sports bettors' scores (M = 3.45, SD = 3.29) were higher than those of non-sports gamblers (M = 1.62, SD = 2.30) and non-gamblers (M = 0.29, SD = 0.96, p < .001). These findings suggest that gamblers should not be treated as a homogenous group, and that greater attention should be placed on sports bettors in prevention and treatment efforts.
... On the one hand, a positive correlation may arise due to affective priming, such as in emotion recognition [43]. On the other hand, a negative correlation may arise due to the gambler's fallacy (or the so-called "law of small numbers" [45]). That is, people tend to overestimate how representative small samples are of the population's characteristics, believing that "early draws of one signal increase the odds of next drawing other signals" [38]. ...
Preprint
We consider the problem of sequential evaluation, in which an evaluator observes candidates in a sequence and assigns scores to these candidates in an online, irrevocable fashion. Motivated by the psychology literature that has studied sequential bias in such settings -- namely, dependencies between the evaluation outcome and the order in which the candidates appear -- we propose a natural model for the evaluator's rating process that captures the lack of calibration inherent to such a task. We conduct crowdsourcing experiments to demonstrate various facets of our model. We then proceed to study how to correct sequential bias under our model by posing this as a statistical inference problem. We propose a near-linear time, online algorithm for this task and prove guarantees in terms of two canonical ranking metrics, matched with lower bounds demonstrating optimality in a certain sense. Our algorithm outperforms the de facto method of using the rankings induced by the reported scores.
... The use of these systems can lead to biases in reasoning and potential incorrect assessments. A simple example is the Gambler's Fallacy, which demonstrates how humans tend to believe that a sequence of flips from a fair coin should be self-correcting [48,128]. In other words, when the coin is flipped multiple times, a sequence of identical outcomes is considered less and less likely as the length of the sequence grows. ...
Preprint
As we discussed in Part I of this topic, there is a clear desire to model and comprehend human behavior. Given the popular presupposition of human reasoning as the standard for learning and decision-making, there have been significant efforts and a growing trend in research to replicate these innate human abilities in artificial systems. In Part I, we discussed learning methods which generate a model of behavior from exploration of the system and feedback based on the exhibited behavior as well as topics relating to the use of or accounting for beliefs with respect to applicable skills or mental states of others. In this work, we will continue the discussion from the perspective of methods which focus on the assumed cognitive abilities, limitations, and biases demonstrated in human reasoning. We will arrange these topics as follows (i) methods such as cognitive architectures, cognitive heuristics, and related which demonstrate assumptions of limitations on cognitive resources and how that impacts decisions and (ii) methods which generate and utilize representations of bias or uncertainty to model human decision-making or the future outcomes of decisions.
... The discovery of heuristics and cognitive biases is owed to the creative and disruptive research of Daniel Kahneman and Amos Tversky in the 1970s (Tversky & Kahneman, 1971, 1974), although sufficient evidence to question the rational-man paradigm already existed beforehand (Katona, 1951; Simon, 1955, 1957). Their rise is due to the interest in understanding the determinants of human functioning within economic relations, in an attempt to predict various patterns of behaviour. ...
Article
Full-text available
Behavioural economics provides valuable insights into how economic agents function, moving away from the notion of unlimited rationality. It is currently applied in many areas of social life, such as behavioural finance, neuromarketing, public policy, saving, public health, etc. Gambling generates substantial economic results, and the number of people placing bets grows every year. In this context, sports predictions should be analysed using the insights of behavioural economics in order to understand the determinants of people's decisions. This study aims to analyse the presence of cognitive biases that influence the prediction of sporting results. A theoretically oriented, pre-experimental design was used with 66 participants, who had to estimate sporting outcomes in six hypothetical scenarios. It can be concluded that sports predictions operate under the principle of bounded rationality, showing features of intuitive thinking in decisions, as well as the representativeness heuristic and the biases of optimism, over-inference, the hot hand and small numbers. The results of this pre-experiment point to overconfidence in prior knowledge, experience and intuition, undervaluation of statistical information, and the influence of affective components on sports prediction decisions.
... Pupils and students often miss the principle that, for independent events, the probability of one partial event is not affected by the results of previous (or subsequent) partial events. Kahneman's so-called law of small numbers (Tversky and Kahneman, 1971) describes the tendency to expect the probability of a partial event to hold even over small numbers of repetitions, regardless of the low reliability of such a conclusion. This mistake is deeply ingrained in gamblers, and it is often called the "gambler's fallacy (bias)". ...
Conference Paper
Full-text available
When companies decide whether to promote their workers, one of the options available to the latter is training. Based on game theory concepts, this paper derives a threshold that determines the conditions under which a firm will promote an employee; the threshold is identified after the worker sends a signal that he has finished his college studies. Once the threshold is derived, and using reports on training and worker characteristics such as education, the result shows that companies are willing to promote the worker as long as the additional benefits from the promotion are twice as high as the investment made. Although this paper provides a cost-benefit requirement that could be useful for companies to identify whom to promote, its application is limited as it only considers approximate values rather than data from institutions.
... First, participants still projected more positive trends after a positive (vs. negative) price change, even if told that the change was random: People are "fooled by randomness" (Taleb, 2001;Tversky & Kahneman, 1971), even when randomness is noted explicitly. Second, the no-explanation condition was always more extreme than the noise condition, but less extreme than the internal-explanation condition: People consider unexplained price changes to contain some signal but not as much as explained changes. ...
Article
Full-text available
Conviction Narrative Theory (CNT) is a theory of choice under radical uncertainty: situations where outcomes cannot be enumerated and probabilities cannot be assigned. Whereas most theories of choice assume that people rely on (potentially biased) probabilistic judgments, such theories cannot account for adaptive decision-making when probabilities cannot be assigned. CNT proposes that people use narratives (structured representations of causal, temporal, analogical, and valence relationships), rather than probabilities, as the currency of thought that unifies our sense-making and decision-making faculties. According to CNT, narratives arise from the interplay between individual cognition and the social environment, with reasoners adopting a narrative that feels ‘right’ to explain the available data; using that narrative to imagine plausible futures; and affectively evaluating those imagined futures to make a choice. Evidence from many areas of the cognitive, behavioral, and social sciences supports this basic model, including lab experiments, interview studies, and econometric analyses. We propose 12 principles to explain how the mental representations (narratives) interact with four inter-related processes (explanation, simulation, affective evaluation, communication), examining the theoretical and empirical basis for each. We conclude by discussing how CNT can provide a common vocabulary for researchers studying everyday choices across areas of the decision sciences.
... In cases where people know the process generating the data, the law of small numbers produces the gambler's fallacy (Barberis and Thaler, 2003). According to Tversky and Kahneman (1971), the reason for the gambler's fallacy is a misinterpretation of the laws of chance. If a coin toss comes up heads around five times in a row, people think that heads has already appeared many times, so this time tails should come up (Barberis and Thaler, 2003). ...
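The coin example can be checked directly. The snippet below is an illustrative sketch (not drawn from the cited papers): it simulates a long run of fair coin flips and looks at what happens immediately after five heads in a row; the next flip is still heads about half the time.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads, fair coin

# Collect the flip that immediately follows every run of five consecutive heads.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

print(f"runs of five heads found : {len(after_streak)}")
print(f"P(heads on the next flip): {sum(after_streak) / len(after_streak):.3f}")  # ~0.5
```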
Article
Full-text available
According to traditional finance theories, individuals behave rationally and take financial decisions under this rationality. Contrary to traditional finance theories, behavioural finance states that individuals do not always act rationally because they are affected by emotions and feelings. Thus, behavioural biases can be defined as systematic errors that keep individuals away from rationality. These biases might cause unhelpful or even hurtful decisions. Therefore, a high level of behavioural bias might negatively affect the financial well-being of individuals. It is vital to investigate young adults' financial behaviours, as the future of economies is influenced by their decisions. In this research, behavioural biases among young adults in Bristol, UK and Istanbul, Turkey, were examined in order to prevent young adults from making irrational financial decisions by identifying the most common behavioural biases. Thus, economies might become more robust than they are today. According to the results of this research, young adults have different behavioural biases depending on their culture. The most common biases among young adults in Bristol are over-optimism, anchoring, categorisation, conservatism, and the illusion of control, while among young adults in Istanbul they are framing, cognitive dissonance, the illusion of knowledge and cue competition. These common behavioural biases lead young adults in Bristol and Istanbul to many irrational financial decisions. It is not possible to reduce these behavioural biases by direct intervention; for this, individuals need to be educated. Families may educate young adults about behavioural biases, after which the rest of the education about behavioural biases may be given in schools. Lastly, individuals should be informed about their behavioural biases and the possible effects of these biases on their financial well-being.
... In science in general, there is a tendency to fall into the bias called belief in the law of small numbers, which consists of overestimating the representativeness of small samples, overestimating the importance of differences, underestimating the width of confidence intervals, and finding causal explanations for any discrepancy between the results and our expectations (Tversky and Kahneman 1971). Hence, rather than obsessing over the statistical significance of differences (whose value depends critically on sample size), we should concentrate on the size of the observed effect (which is independent of sample size) (Cohen 1994, Amrhein et al. 2019). ...
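The contrast between significance and effect size is easy to demonstrate numerically. The sketch below (an illustration only, not taken from the cited chapter) holds the true standardized effect fixed at d = 0.5 and varies only the sample size: the p-value changes by orders of magnitude while the estimated effect size stays roughly constant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
TRUE_DIFF, SD = 0.5, 1.0  # fixed standardized effect (Cohen's d = 0.5)

for n in (10, 50, 200, 1000):
    a = rng.normal(0.0, SD, n)
    b = rng.normal(TRUE_DIFF, SD, n)
    _, p = stats.ttest_ind(a, b)
    # Cohen's d from the pooled standard deviation of the two groups.
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"n per group = {n:4d}   estimated d = {d:5.2f}   p = {p:.4f}")
```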
Preprint
Full-text available
This chapter reviews some elements of cognitive psychology relevant to learning, offers a critical review of constructivist pedagogy, and identifies practices that have proven effective in improving teaching outcomes. It also discusses different cognitive biases that have profound implications for learning, for the practice of science, and for our view of the world.
... We show that these anomalies are robust, even when faced with a large proportion of rational traders. The gambler's fallacy, first described by Tversky and Kahneman [41], is the erroneous belief that a certain random event is less likely to happen following the occurrence of an event or a series of such events. The opposite cognitive bias to the gambler's fallacy is the so-called "hot-hand fallacy": the fallacious belief that a person who has experienced success in a random event has a greater chance of further success in any additional attempt. ...
Article
Full-text available
A multi-period stock trading model is developed in which there are two types of traders—a “rational” type and a “gambler’s fallacy” type—both observing a public signal about the fundamental value in each period. The rational type holds correct beliefs on the signals, whereas the gambler’s fallacy type mistakenly believes that the sequence of the signals exhibits systematic reversals. We explore the dynamic equilibrium in which the two types trade with each other to speculate future price changes based on their inferences about the fundamental value. It is shown that the presence of the gambler’s fallacy type can generate both short-term momentum and long-term reversal. Furthermore, the pattern is robust even in a market with a large proportion of rational traders as the price is closer to the gambler’s fallacy type’s valuation. We also show that the gambler’s fallacy type becomes more influential in determining the price as the market switches from momentum to reversal. Interestingly, to an outside observer, it would appear as though the rational traders act as if they have “hot-hand” fallacy in prices, and that a trend following strategy is optimal in this model.
... Surprising Results Require Strong Evidence (Lower P-Values). Eliason (2018) shares 16 popular myths that persist despite evidence that they are likely false. In Belief in the Law of Small Numbers (Tversky and Kahneman 1971), the authors take the reader through intuition-busting exercises in statistical power and replication. ...
Conference Paper
Full-text available
A/B tests, or online controlled experiments, are heavily used in industry to evaluate implementations of ideas. While the statistics behind controlled experiments are well documented and some basic pitfalls known, we have observed some seemingly intuitive concepts being touted, including by A/B tool vendors and agencies, which are misleading, often badly so. Our goal is to describe these misunderstandings, the "intuition" behind them, and to explain and bust that intuition with solid statistical reasoning. We provide recommendations that experimentation platform designers can implement to make it harder for experimenters to make these intuitive mistakes.
... In six main experiments using online ratings contexts, we find that people perceive that averages compatible with a possible input reflect less variable underlying distributions than averages that are not compatible with a possible input, even for comparisons in which this is statistically less likely. Our finding that people are frequently inaccurate in their perceptions of dispersion associated with average ratings is consistent with past work showing that humans frequently have erroneous intuitions when interpreting statistics (e.g., Tversky & Kahneman, 1971), despite having substantial experience with them. ...
Article
In this paper, we show how one property of an average affects perceptions of the variance of the distribution that the average is derived from. Specifically, we find that when people view average ratings compatible with a possible input they perceive these ratings to come from less variable distributions—even when this is statistically less likely. Six experiments and four supplemental studies (total N = 16,988) document evidence for this effect: People perceive less dispersion in the distributions of “compatible average ratings” (i.e., averages matching a possible input; e.g., 4; 4.0; 4.00 on a discrete scale from 1 to 5 stars) compared to those of “non-compatible average ratings” (i.e., averages that do not match a possible input; e.g., 4.01 and 4.10). We argue that this error can be explained by a compatibility principle which states that the weighting of an input increases with its degree of compatibility with the output. People rely on the perceived compatibility between an output and input when forming judgments about the frequency of the input, affecting their assessment of the dispersion associated with the average. For instance, people recognize that a 4.0 average matches a 4 and thus perceive this average to be comprised of more 4s and indicative of less dispersion. We close with a discussion of consequences of this perception for choice and search.
... Mauboussin (2013) integrated this research on decision-making errors, heuristics and cognition, and proposed a set of simple suggestions for avoiding common decision-making mistakes, particularly those where intuition may lead to missteps and unanticipated harms. Informed by the heuristics and biases approach (Tversky & Kahneman, 1971), Mauboussin argued that humans very easily fall foul of simplified mental shortcuts (heuristics) that, when they are misaligned with the situation or task, prevent us from dealing effectively with the complexity of real-world decision-making. This well-supported model proposes a dual-processing approach, based on the notion of two decision-making systems: System 1 and System 2. According to Kahneman (2011), System 1 is fast, automatic and effortless, using intuition to make decisions. ...
... For instance, we know that people have a tendency to believe that a head/tail sequence such as HHHHHHH is less likely than a sequence such as HTTHTTT (cf. Tversky & Kahneman, 1971). This could as well be understood from a Gestalt perspective, in that the former has the qualities of a singular, prägnant shape, i.e., is a figure. ... The rate for the critical "figure" item (i.e., of "sül" in Fig. 1a and "19" in Fig. 1b) was around 70%. ...
Preprint
Full-text available
This article is a sequel to “Gestalt Theory: Its Past, its Stranding, and its Future.” The aim of this article is to bring to light the conceptual and empirical contributions of Gestalt theory within the field of memory. It is typically believed that Gestalt theory is a theory about perception only. This, however, is not true. The first part of the article discusses some critical thoughts about memory processes as presented by Kurt Koffka in his Principles of Gestalt Psychology (1936) book. These involve Koffka’s proposal about the involvement and effects of memory processes in the perception of successive Gestalts; a discussion of the similarities and differences between percepts and memory traces; and Koffka’s reference to research suggesting that memory traces are dynamic such that, depending on their Prägnanz, they will or will not change during storage in a way that can even be predicted in some cases. The article then reviews one of the most powerful empirical studies on memory within a Gestalt framework, i.e., Hedwig von Restorff’s 1933 dissertation demonstrating figure-ground dynamics in memory tasks. In the final part of this article, I present the main ideas of an utterly ignored memory researcher, Erich Goldmeier, from his 1982 book The Memory Trace: Its Formation and Its Fate. It is dismaying that these very original and interesting studies went unnoticed by mainstream cognitive psychology.
... Some prominent examples will illustrate how the rationality of behavior has been tested within the research program of Kahneman and Tversky (Kahneman & Tversky, 1972, 1973; Kahneman et al., 1982; Tversky & Kahneman, 1971). For instance, preference reversals have been found in numerous experiments (Slovic, 1995; Tversky, 1969). ...
... These findings raise intriguing questions regarding the nature and extent of the predictability of one's own success and team success in a team game. This is particularly interesting, since these findings not only refute the well-established narratives of the absence of hot hands in team games [18,19,22,23], where performances are usually driven by stochastic events; they also suggest that the hot-hand effect is not just a psychological bias [18,19]. ...
Article
Full-text available
We investigate the predictability and persistence of individual and team performance (the hot-hand effect) by analyzing the complete recorded history of international cricket. We introduce an original temporal representation of performance streaks, which is suitable to be modelled as a self-exciting point process. We confirm the presence of predictability and hot hands in individual performance and their absence in team performance and game outcome. Thus, cricket is a game of skill for individuals and a game of chance for teams. Our study contributes to recent historiographical debates concerning the presence of persistence in individual and collective productivity and success. The introduction of several metrics and methods can be useful to test and exploit the clustering of performance in the study of human behavior and the design of algorithms for predicting success.
... Subjects who have succumbed to the hot hand fallacy (Burns, 2001; Gilovich, Vallone & Tversky, 1985) will tend to choose the portfolio of 2 Z shares. Subjects who believe in the gambler's fallacy (Rogers, 1998; Tversky & Kahneman, 1971) will prefer the 2 Y shares portfolio. Subjects who think they can predict the next random events will not make use of the robo advisor. ...
Article
Full-text available
Within the framework of a laboratory experiment, we examine to what extent algorithm aversion acts as an obstacle in the establishment of robo advisors. The subjects have to complete diversification tasks. They can either do this themselves or they can delegate them to a robo advisor. The robo advisor evaluates all the relevant data and always makes the decision which leads to the highest expected value for the subject's payment. Although the high level of efficiency of the robo advisor is clear to see, the subjects only entrust their decisions to the robo advisor in around 40% of cases. In this way they reduce their success and their payment. Many subjects orientate themselves towards the 1/n-heuristic, which also contributes to their sub-optimal decisions. As long as the subjects have to make decisions for others, they noticeably make a greater effort and are also more successful than when they make decisions for themselves. However, this does not have an effect on their acceptance of robo advisors. Even when they make decisions on behalf of others, the robo advisor is only consulted in around 40% of cases. This tendency towards algorithm aversion among subjects is an obstacle to the broader establishment of robo advisors.
... be highly representative of the population from which they are drawn. Tversky and Kahneman (1971) point out that most people have wrong intuitions about samples and think that all samples are similar to their population. For example, if a researcher found a statistical correlation of r = 0.30 in a sample of N = 40 participants, most researchers think that testing N = 20 participants would similarly result in finding a statistical correlation ...
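Tversky and Kahneman's point can be quantified with a standard power calculation. The sketch below uses the Fisher z approximation and assumes the population correlation really is 0.30; the numbers are illustrative, not taken from the commentary itself.

```python
import math
from scipy.stats import norm

def replication_power(true_r: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a correlation of true_r with n subjects
    (two-sided test, Fisher z approximation)."""
    z_r = math.atanh(true_r)           # Fisher z transform of the true correlation
    se = 1.0 / math.sqrt(n - 3)        # standard error of z for sample size n
    z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
    return (1 - norm.cdf(z_crit - z_r / se)) + norm.cdf(-z_crit - z_r / se)

for n in (20, 40, 80):
    print(f"N = {n:3d}   power to detect r = 0.30 ~ {replication_power(0.30, n):.2f}")
```

Under this assumption the original N = 40 study has slightly less than an even chance of reaching significance, and an N = 20 replication has only roughly a one-in-four chance, which is exactly the intuition that belief in the law of small numbers violates.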
Article
Full-text available
Most of us who do research on language acquisition have had to use statistics to evaluate the results of experiments. Some may use only the statistical procedures they learned in graduate school and may thus miss out on new advances in statistics that might shed light on some problems in a more straightforward way. The three papers that conduct empirical studies that I will discuss today have used statistical procedures that you may not be very familiar with—bootstrapping, Monte Carlo simulations, and Rasch (or item response theory [IRT]) analysis. Their use of these procedures, however, means that they are able to give quite precise and interesting answers to the questions that they have asked. The fourth paper I will discuss is not an empirical study but a review of studies and call for future research going forward.
... The authors argue that heuristics lead to systematic errors and therefore should be avoided in many cases. The experimental research (Tversky and Kahneman, 1971; Kahneman and Tversky, 1972) shows that there is a set of heuristics, including representativeness, availability, and adjustment and anchoring, which are associated with biases in judgments. Accordingly, heuristics came to be criticized as having a nonrational and negative character for human cognition, even if Kahneman and Tversky (1974, p. 1129) recognize that "heuristics are very useful". ...
Article
Purpose This paper aims to provide a wide picture of studies on heuristics for international decision-making with a focus on foreign market entry. It systematically reviews studies published in the international business and international marketing domain to examine heuristically based decisions for foreign market entry. Design/methodology/approach This paper proposes a systematic literature review and an in-depth analysis of 32 papers published between 1997 and 2021 dealing with foreign market entry and the use of heuristics for international decision-making. Findings Although the marketing and management literature is in many ways permeable to the debate around heuristics developed in experimental psychology and cognitive science, international business and international marketing studies on the one hand recognize that international decision-making, especially when dealing with foreign market entry, is strongly characterized by uncertainty; on the other hand, they lack a developed and systematized literature on the topic. This paper shows the key topics and areas fundamental to foreign market entry in which heuristics are applied by decision makers, and their effectiveness. Originality/value A systematic review of the use of heuristics for foreign market entry decision-making can represent a useful step towards a more organic development of knowledge about the more general use of heuristics for international decision-making. Understanding the decision-making process regarding modes of entry into foreign markets is a key topic for international marketing and international business scholars and practitioners.
... Gilles also presents with several distorted cognitions: the gambler's fallacy, overconfidence, and trends in number picking (Tversky & Kahneman, 1971). Gilles is convinced that certain sequences of color in roulette occur and that he can predict the results (i.e. ...
Article
Full-text available
Blaszczynski and Nower conceptualized in 2002 an integrative Pathways Model leading to gambling disorder by postulating three subtypes of individuals with problem gambling characterized by common and specific characteristics (sociodemographic features, comorbidities, psychological factors). Here we propose a clinical illustration that fits each subtype. For each pathway, we (1) describe a corresponding clinical case, (2) propose a symptom-based clinical description, and (3) elaborate a process-based case formulation to explain the development and maintenance of the problematic gambling behavior. We argue that the clinical work with patients benefited from this two-level approach (symptoms vs. psychological processes) combined with a more holistic approach that takes into account intrapersonal (e.g. personality), interpersonal (e.g. family functioning), and environmental variables (e.g. life events). Crucially, our approach not only considers psychopathological dimensions (e.g. symptoms, diagnostic criteria), but it also views as central individual differences (personality traits) and cognitive and affective processes postulated to mediate relationships between biopsychosocial antecedents and psychopathological symptoms. In the current paper, we aim to demonstrate how the Pathways Model can be used as a framework to embrace a holistic perspective that promotes individualized and process-centered psychological interventions for individuals with gambling problems.
... The conditional averages do not necessarily approximate the fundamental relationships which generated the dataset: the ground truth. This can be caused by sparse areas of data, which can be affected by the 'law of small numbers', creating misleading averages (Tversky and Kahneman 1971). Additionally, the interaction of noise and high convexity or concavity in input-output relationships creates a gap between the conditional averages of a dataset and the underlying relationships which generated them, a direct result of Jensen's inequality (Jensen 1906). ...
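Jensen's inequality is the piece of mathematics behind that last observation: for a convex function f, E[f(X)] >= f(E[X]), so noise on the inputs shifts conditional averages away from the underlying relationship. The snippet below is a generic illustration of the gap (it is not the ship-powering model from the thesis); the quadratic relationship and the noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def ground_truth(x):
    """A convex input-output relationship chosen only for illustration."""
    return x ** 2

x_true = 2.0
x_obs = x_true + rng.normal(0.0, 0.5, size=100_000)  # noisy observations of the input

f_of_mean = ground_truth(x_obs.mean())      # f(E[X])  -> about 4.00
mean_of_f = ground_truth(x_obs).mean()      # E[f(X)]  -> about 4.25 (4 + noise variance)

print(f"f(E[X]) = {f_of_mean:.3f}")
print(f"E[f(X)] = {mean_of_f:.3f}   # gap driven by convexity interacting with noise")
```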
Thesis
As machine learning technology improves, it is increasingly relied upon when making significant decisions which require a high level of trust. Accuracy and interpretability are paramount for trust in regression methods, which comprise a large portion of the field. To apply these methods with confidence there needs to be certainty that they have modelled the ground truth of a dataset: the correct input-output relationships. Conventional regression error measures, however, do not ensure that the correct relationships are modelled, as they only require accurate point predictions to assign low error to a method. A case study of power prediction for merchant vessels is used to illustrate the problem, where accurate prediction and correct input-output relationship modelling are required, although there is limited understanding of these input-output relationships. For this problem neural networks can produce predictions with a 2% Mean Absolute Relative Error, which is low enough for use in fuel saving devices on board vessels in operation. The methods developed in this thesis have been deployed on over a dozen merchant vessels operated by Shell Shipping and Maritime, saving over 1/4 million tonnes of CO2 emissions in 2020. However, the predictions are not interpretable, as the input-output relationships modelled are not consistent or correct. A new error measure, the Mean Fit to Median Error, is investigated which ensures networks approximate the conditional averages and is applicable to any dataset. This is verified on 36 artificial datasets, where the ground truth is known, and is shown to correlate with the ground truth on average 60% more strongly than traditional error measures do. The Mean Fit to Median Error is then applied to the ship powering example and shows a shift in the approximated relationships for the same Mean Absolute Relative Error values, demonstrating an improvement in determining the ground truth. Networks reporting low Mean Fit to Median errors model more consistent and correct input-output relationships and are robust to areas of sparse data.
Article
Using upper echelons and the entrepreneurship event model as theoretical umbrellas, we develop a model suggesting that, under munificent but resistant contexts, the interaction between entrepreneurs' vertical and horizontal spirituality and their ego resilience and alertness prompts social innovation (as inclusiveness, frugality, and flexibility) at their firms. Using a sample of 85 Saudi entrepreneurs, we find that the interaction between entrepreneurs' ego resilience and vertical and horizontal spirituality drives innovation inclusiveness. Also, we find that while the interaction between entrepreneurs' alertness and vertical and horizontal spirituality drives innovation frugality, the interaction between entrepreneurs' alertness and horizontal spirituality drives innovation flexibility. Lastly, the data reveal that when entrepreneurs have low levels of vertical spirituality and alertness, their high levels of resilience drive the highest level of innovation frugality. We highlight the importance that entrepreneurs' mindsets, as values and beliefs, personality, and cognitive schema, have in social entrepreneurial activity at their firms.
Article
Full-text available
Purpose The purpose of this paper is to comprehensively review a large and heterogeneous body of academic literature on investors' feedback trading, one of the most popular trading patterns observed historically in financial markets. Specifically, the authors aim to synthesize the diverse theoretical approaches to feedback trading in order to provide a detailed discussion of its various determinants, and to systematically review the empirical literature across various asset classes to gauge whether their feedback trading entails discernible patterns and the determinants that motivate them. Design/methodology/approach Given the high degree of heterogeneity of both theoretical and empirical approaches, the authors adopt a semi-systematic type of approach to review the feedback trading literature, inspired by the RAMESES protocol for meta-narrative reviews. The final sample consists of 243 papers covering diverse asset classes, investor types and geographies. Findings The authors find feedback trading to be very widely observed over time and across markets internationally. Institutional investors engage in feedback trading in a herd-like manner, and most noticeably in small domestic stocks and emerging markets. Regulatory changes and financial crises affect the intensity of their feedback trades. Retail investors are mostly contrarian and underperform their institutional counterparts, while the latter's trades can be often motivated by market sentiment. Originality/value The authors provide a detailed overview of various possible theoretical determinants, both behavioural and non-behavioural, of feedback trading, as well as a comprehensive overview and synthesis of the empirical literature. The authors also propose a series of possible directions for future research.
Article
Full-text available
A democracy is widely accepted to be a system that efficiently manifests the public opinion of the electorate while also maintaining checks and balances on power through free elections. However, India continues to show an increasing incidence of rent-seeking and criminal politics, even while the exercise of democracy remains intact. This paper employs North, Wallis and Weingast's conceptualisation of social organisation as access orders in a society to show that Indian democracy has a system of political representation with an inefficient system of political access. The analysis further contributes to the literature by conceptualising the means of access in societies and argues that India is a society of limited access orders. Using this framework, the paper argues that the limited access in Indian democracy occurs as a result of manipulation of the means of access by a small politico-economic elite, using a system of privileged and personal inter-elite relationships that results in a growing convergence of rent-seeking practices in Indian politics.
Article
The development of offshore wind farms can place increased pressure on conflicting marine users, particularly in already crowded waterways. Risk analyses of potential hazard scenarios are conducted by developers and regulators in the form of Navigation Risk Assessments, which seek to identify, measure and mitigate impacts through data collection, consultation, modelling and risk assessment. These activities have inherent uncertainties and limitations which are rarely discussed and have the potential to undermine the value and credibility of the risk assessment. To evaluate the accuracy of Navigation Risk Assessments, their predictions are compared with the historical incident record of accidents involving wind farms. This review identifies significant methodological limitations and sources of uncertainty endemic to the Navigation Risk Assessment process which result in an over-estimation of risk. These include a lack of inclusion of historical evidence, issues during elicitation of expert judgement and methodological limitations of both the quantitative risk models and the underlying risk assessment. Based on our evaluation, future research directions are highlighted to support decision makers in marine spatial planning by increasing the robustness of Navigation Risk Assessments.
Article
How do people forecast an actor's future rank after observing a rank change, and what are the factors that shape these forecasts? In this research, we shed new light on the attributions that people make when they observe an actor change rank and on how these attributions explain where people expect the actor to rank in the future. Specifically, in Studies 1a and 1b we document an asymmetric extrapolation bias, whereby people extrapolate upward rank trajectories more steeply into the future than downward trajectories, a pattern of results that differs in both magnitude and direction from actual rank-change patterns over time. In Studies 2 and 3 we provide evidence of the different attributions that explain people's asymmetric extrapolation through measurement and manipulation. Finally, in Study 4 we demonstrate a practical downstream consequence of this asymmetric extrapolation bias (i.e., promotion recommendation). Theoretical and practical implications are discussed.
Article
In this commentary on Claire White’s An Introduction to the Cognitive Science of Religion: Connecting Evolution, Brain, Cognition, and Culture (London: Routledge, 2021), I contrast the circuitous way in which I (and probably a number of others) initially came to teach cognitive science of religion (CSR) at the undergraduate university level with the more direct (and knowledgeable) way in which White came to do so. I then briefly discuss her comprehensive and coherent presentation of the CSR, noting, however, several issues with which I have problems (fractionation, an ahistorical “presentist” bias, and whether or not an “agnostic” view of religious teachings should remain the norm in the modern university curricula). Nevertheless, White’s Introduction is a most welcome and long-overdue contribution to the academic study of religion, the 150-year trajectory of which has been characterized by an anti-scientific history.
Article
Full-text available
Polarization and extremism are often viewed as the product of psychological biases or social influences, yet they still occur in the absence of any bias or irrational thinking. We show that individual decision-makers implementing optimal dynamic decision strategies will become polarized, forming extreme views relative to the true information in their environment by virtue of how they sample new information. Extreme evidence enables decision makers to stop considering new information, whereas weak or moderate evidence is unlikely to trigger a decision and is thus under-sampled. We show that this information polarization effect arises empirically across choice domains including politically-charged, affect-rich and affect-poor, and simple perceptual decisions. However, this effect can be disincentivized by asking participants to make a judgment about the difference between two options (estimation) rather than deciding. We experimentally test this intervention by manipulating participants' inference goals (decision vs estimation) in an information sampling task. We show that participants in the estimation condition collect more information, hold less extreme views, and are less polarized than those in the decision condition. Estimation goals therefore offer a theoretically-motivated intervention that could be used to alleviate polarization and extremism in situations where people traditionally intend to decide.
Article
Importance: The rate of postoperative death in children undergoing tonsillectomy is uncertain. Mortality rates are not separately available for children at increased risk of complications, including young children (aged <3 y) and those with sleep-disordered breathing or complex chronic conditions. Objective: To estimate postoperative mortality following tonsillectomy in US children, both overall and in relation to recognized risk factors for complications. Design, setting, and participants: Retrospective cohort study based on longitudinal analysis of linked records in state ambulatory surgery, inpatient, and emergency department discharge data sets distributed by the Healthcare Cost and Utilization Project for 5 states covering 2005 to 2017. Participants included 504 262 persons younger than 21 years for whom discharge records were available to link outpatient or inpatient tonsillectomy with at least 90 days of follow-up. Exposures: Tonsillectomy with or without adenoidectomy. Main outcome and measures: Postoperative death within 30 days or during a surgical stay lasting more than 30 days. Modified Poisson regression with sample weighting was used to estimate postoperative mortality per 100 000 operations, both overall and in relation to age group, sleep-disordered breathing, and complex chronic conditions. Results: The 504 262 children in the cohort underwent a total of 505 182 tonsillectomies (median [IQR] patient age, 7 [4-12] years; 50.6% females), of which 10.1% were performed in young children, 28.9% in those with sleep-disordered breathing, and 2.8% in those with complex chronic conditions. There were 36 linked postoperative deaths, which occurred a median (IQR) of 4.5 (2-20.5) days after surgical admission, and most of which (19/36 [53%]) occurred after surgical discharge. The unadjusted mortality rate was 7.04 (95% CI, 4.97-9.98) deaths per 100 000 operations. In multivariable models, neither age younger than 3 years nor sleep-disordered breathing was significantly associated with mortality, but children with complex chronic conditions had significantly higher mortality (16 deaths/14 299 operations) than children without these conditions (20 deaths/490 883 operations) (117.22 vs 3.87 deaths per 100 000 operations; adjusted rate difference, 113.55 [95% CI, 51.45-175.64] deaths per 100 000 operations; adjusted rate ratio, 29.39 [95% CI, 13.37-64.62]). Children with complex chronic conditions accounted for 2.8% of tonsillectomies but 44% of postoperative deaths. Most deaths associated with complex chronic conditions occurred in children with neurologic/neuromuscular or congenital/genetic disorders. Conclusions and relevance: Among children undergoing tonsillectomy, the rate of postoperative death was 7 per 100 000 operations overall and 117 per 100 000 operations among children with complex chronic conditions. These findings may inform decision-making for pediatric tonsillectomy.
Article
We investigate cross-cultural differences in stock market forecasting and trading tendencies using a survey sample of 339 participants from Switzerland, Ukraine, and China. We find that (1) subjects in all countries exhibit representativeness bias, but in different directions: the Swiss tend to extrapolate recent trends, whereas the Chinese tend to predict reversals, consistent with previous research on cultural lay theories of change; Ukrainian students tend to be optimistic in both up and down markets. (2) Swiss students tend to provide wider confidence intervals in their forecasts than Chinese and Ukrainian students. (3) Subjects from all three countries make similar selling decisions and exhibit the disposition effect, i.e., selling winning stocks while holding losing ones. (4) Concerning buying decisions, Swiss and Chinese students are trend followers, whereas Ukrainian students are more likely to buy in both bear and bull markets, perhaps driven by their optimistic view of the future. We did not find a uniform pattern across countries for gender differences in forecast intervals and decision confidence.
Article
How robust are experimental results to changes in design? And can researchers anticipate which changes matter most? We consider a real-effort task with multiple behavioral treatments and examine the stability along six dimensions: (i) pure replication, (ii) demographics, (iii) geography and culture, (iv) the task, (v) the output measure, and (vi) the presence of a consent form. We find near-perfect replication of the experimental results and full stability of the results across demographics, significantly higher than a group of experts expected. The results differ instead across task and output change, mostly because the task change adds noise to the findings. (JEL C90, D82, D91)
Chapter
This contribution follows on directly from the contribution by Reinhard H. Schmidt and examines recent developments in the field of Finance. Although German business administration research in this area clearly follows US developments, several aspects still reveal cultural particularities of Germany and Europe, for example in the subfield of Experimental Finance. The contribution is rounded off with some reflections on the position of non-US research, which to this day is not always an easy one, in a traditionally US-dominated research field such as Finance.
Article
Full-text available
Decision making with regard to food choice can traditionally be viewed as an economic transaction, whereby consumers choose which foods they would like to purchase and consume within the framework of how much disposable income they have at any particular time. Within this framework, however, research has shown that consumers aim to strike a balance between the hedonic qualities of food and its perceived effects on their health. Consequently, one area of significant importance is how food risks are perceived and how this perception affects the decision-making process. Research has indicated that Irish food consumers use a set of heuristic decision-making tools to assist them in making food choices for themselves and their families. These tools are evoked irrespective of age, gender or social class. This has led to concern (despite numerous health promotion and media campaigns) regarding the national diet, with imperfections in consumption observed in rising obesity, nutritional imbalances and chronic ill health that expose individuals to medical conditions such as cancer and heart disease. The increased risks associated with these conditions are prevalent in Ireland, and on many measures Irish consumers rank poorly relative to other European countries. Although food choices are predicated on these decision-making tools, the diets pursued today by the majority of Irish consumers still reflect earlier historical dietary choices. This, together with the effects of acculturation following recent changes in demographic structure and the growth of global networks for information flow and exchange, has resulted in a dynamic food environment with "nutrition echoes" observed in the choices people make.
Article
Purpose This paper aims to analyze the heuristics and cognitive biases described by behavioral finance in the investment decision-making process of Portugal’s housing market. Design/methodology/approach In a first step, the authors applied an exploratory factor analysis (EFA) to assess the impact of heuristics and cognitive biases on investors’ decision-making. In a second step, the authors estimated a structural equation model (SEM) path diagram to assess whether the sociodemographic characteristics of housing market investors determine the identified heuristics, and whether the heuristics condition the investors’ investment criteria. Findings Herd behavior and the heuristics of representativeness, availability and anchoring influence housing market investors’ behavior in their decision-making process. Investors with above-average income show higher levels of overconfidence. Investors showing higher levels of overconfidence also tend to be more sensitive to the price of the house under analysis for investment. Women tend to show higher levels of the availability and anchoring heuristics. In turn, housing market investors showing higher levels of the availability and anchoring heuristics tend to be more sensitive to the price and location of the house under analysis for investment. Research limitations/implications The explained variance of the EFA is below 50%, and the root mean square error of approximation (RMSEA) of the SEM is above the threshold of 0.05. These indicators are evidence of the models’ fragility. Practical implications Governments and regulators can better prevent real estate bubbles if they monitor the behavioral biases and heuristics of housing investors together with quantitative indicators. Realtors can profit from adapting their marketing strategy and commercial communication to investors from sociodemographic groups more prone to a specific type of heuristic. Originality/value To the best of the authors’ knowledge, this is the first study that combines the contributions of behavioral finance with Portugal’s housing investment market and the first study connecting heuristics to investment criteria.
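For readers unfamiliar with the first analysis step described, the sketch below shows what an exploratory factor analysis of Likert-style survey items can look like in code. The simulated data, the number of factors and the rotation are illustrative assumptions, not the authors' specification.

```python
# Illustrative EFA step (not the authors' model): extract latent "heuristic"
# factors from simulated survey responses and inspect the rotated loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items = 300, 12
latent = rng.normal(size=(n_respondents, 3))            # e.g. anchoring, availability, herding
loadings = rng.normal(scale=0.8, size=(3, n_items))
responses = latent @ loadings + rng.normal(scale=1.0, size=(n_respondents, n_items))

efa = FactorAnalysis(n_components=3, rotation="varimax").fit(responses)
print("Rotated loadings (items x factors):")
print(np.round(efa.components_.T, 2))   # items loading on the same factor form one "heuristic" scale
```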
Chapter
In cognitive psychology, those who behave in a particularly rational manner are considered intelligent. First, we briefly introduce the psychology of judgment and decision-making, which has played a key role in theorizing and empirical investigation of cognitive research for decades. We then give an impression of the pessimistic view of the rationality of human behavior that emerged from the research program of the two influential researchers, Daniel Kahneman and Amos Tversky. Yet, Herbert Simon’s idea of bounded rationality provides an often-cited explanation of the many violations of mathematical and logical rules due to “heuristics and biases.” This chapter highlights an alternative explanation that has traditionally received less attention: “metacognitive myopia” is a weakness in the metacognitive monitoring and control function that regulates our thinking. While numerous cognitive fallacies and misjudgments are recurrent and unavoidable, a comprehensive explanation of irrational behavior must also explain why biases and illusions are not detected and corrected at the metacognitive level, despite feedback and education. The uncritical and often naïve adherence to patently non-valid information is the subject of research on metacognitive myopia.
Conference Paper
The maritime sector is exploring the applicability of alternative powering options and ways to implement new technologies to increase the safety, efficiency, and autonomy of ship power systems. The technological development of power systems, their complexity, and the high costs of their malfunction or downtime have led to the employment of different approaches in safety engineering. In order to reduce hazards and failures in ship operation, shipbuilders use several methods during the design phase to identify, investigate and manage all safety concerns. For this purpose, there is a range of methods, such as Fault Tree Analysis (FTA), Event Tree Analysis (ETA), Failure Mode and Effects Analysis (FMEA), and Failure Mode, Effects and Criticality Analysis (FMECA), which can be used separately or in combination. This paper reviews these methods, with their advantages and limitations, in their application to risk assessment of ship power systems.
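As a concrete illustration of the quantitative side of such methods, a fault tree combines basic-event probabilities through logic gates: for independent events, an AND gate multiplies probabilities, while an OR gate takes the complement of the product of complements. A toy sketch with invented failure probabilities (not taken from the paper):

```python
# Toy fault-tree arithmetic for a hypothetical ship power system (illustration only).
def and_gate(probs):      # all inputs must fail (independence assumed)
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(probs):       # at least one input fails (independence assumed)
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

p_generator, p_backup, p_converter = 1e-3, 5e-3, 2e-4   # invented basic-event probabilities
p_top = or_gate([and_gate([p_generator, p_backup]), p_converter])
print(f"P(top event: loss of propulsion power) = {p_top:.2e}")
```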
Article
Purpose Although some research has been carried out on feedback trading in different asset classes, few empirical investigations consider both major and emerging stock markets (Koutmos, 1997; Antoniou et al., 2005; Kim, 2009) and stock index futures (Salm and Schuppli, 2010). In this study, the author examines positive/negative feedback trading in 51 developed, emerging, frontier and standalone stock markets for 2010–2020 and for sub-periods including the COVID-19 period. Design/methodology/approach The hypothesis that feedback trading behaviour led the price boom/bust in the stock markets during the first quarter of the COVID-19 pandemic is tested by employing the Sentana and Wadhwani (1992) framework and using asymmetric GARCH models (GJR-GARCH, EGARCH) in accordance with the empirical literature. Findings The following conclusions can be drawn from the present study: (1) there is no evidence of a significant distinction between developed, emerging, frontier or standalone markets, or between high/upper-middle and lower-middle income economies, in the case of feedback trading; it is more likely a general phenomenon reflecting the outcomes of general human psychology. (2) In the long term (2010–2020), based on the feedback trading results, Asian stock markets appear to be far from efficient. Research limitations/implications Stock markets were selected based on data availability. Practical implications Several inferences can be drawn from the overall results. First, investors and portfolio managers should be wary in their investment decisions during bearish market conditions, where volatility is on the rise and there is a strong reaction to bad news/negative shocks in the market. Moreover, investing in Asian stock markets may require more attention, since those markets are reputed to be more “idiosyncratic” and less reliant on economic and corporate fundamentals in their pricing. The impact of foreign investors on stock market volatility and returns, as well as weaker implementation of regulations, also affects the efficiency of these markets (Lipinsky and Ong, 2014). Originality/value To the best of the author’s knowledge, most studies in the field of feedback trading in stock markets have focused on a small sample of countries, and the effect of COVID-19 uncertainty on stock markets has not been addressed in the literature with respect to feedback trading. This paper fills these gaps. The study is expected to provide useful insights for understanding instabilities in stock markets, particularly under conditions of high uncertainty, and to fill the gap in the literature by comparing results for a large sample of countries both in the long term and during the pandemic. Highlights This study shows that feedback trading is more prevalent in Asian stock markets in the long run than in Europe, America or the Middle East for the period 2010–2020. Positive feedback traders generally dominated most stock markets during the early period of the COVID-19 pandemic. Another major finding is that the stock markets in Malaysia, Japan, the Philippines, Estonia, Portugal and Ukraine are dominated by negative feedback traders, which may be interpreted as a “disposition effect”, meaning that they sell past winners. In Indonesia, New Zealand, China, Austria, Greece, the UK, Finland, Spain, Iceland, Norway, Switzerland, Poland, Turkey, Chile and Argentina, neither positive nor negative feedback trading exists, even under uncertain conditions.
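A minimal sketch of the asymmetric-GARCH ingredient of this kind of analysis, fitted to simulated returns with the Python arch package. The full Sentana and Wadhwani (1992) specification interacts the lagged return with the conditional variance in the mean equation to detect feedback trading and requires a custom likelihood, so only a GJR-GARCH volatility model with an AR(1) mean is shown here; data and parameters are invented.

```python
# GJR-GARCH(1,1) fit on simulated daily returns (illustration only; not the
# paper's data or its full Sentana-Wadhwani feedback-trading specification).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.standard_t(df=5, size=2500)   # pseudo daily returns, roughly percent scale

# o=1 adds the asymmetry (leverage) term that distinguishes GJR-GARCH from plain GARCH.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())
```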
Article
Full-text available
Detecting and responding to information security threats quickly and effectively is becoming increasingly crucial as modern attackers continue to engineer their attacks to operate covertly to maintain long-term access to victims’ systems after the initial penetration. We conducted an experiment to investigate various aspects of decision makers’ behavior in monitoring for threats in systems that potentially have been compromised by intrusions. In checking for threats, decision makers showed a recency effect: they deviated from optimal monitoring behavior by altering their checking pattern in response to recent random incidents. Decision makers’ monitoring behavior was also adversely affected when there was an increase in security, exhibiting a risk compensating behavior through which heightened security leads to debilitated security behaviors. Although the magnitude of the risk compensating behavior was significant, it was not enough to fully offset the benefits from added security. We discuss implications for theory and practice of information security.
Article
Full-text available
Neuroeconomics is an interdisciplinary field that uses neuroscientific measurement tools to determine the neural basis of economic decisions. In this study, the economic decision-making mechanism is examined from the perspective of neuroscience. Many experiments have been conducted on whether rational decisions are made in the logical part of the brain and on which parts of the brain are activated by decisions made in an environment of risk and uncertainty; the analysis here is conducted within the framework of these experiments. First, the functions of the brain and their effect on the economic decision-making process are discussed. Secondly, the anomalies of expected utility theory, which describes the decision-making process under risk and uncertainty in economic theory, and the results of the experiments that reveal these anomalies are presented. Finally, the decision-making process under risk and uncertainty is examined with empirical examples within the framework of neuroeconomics. The aim of the study is to reveal whether emotions have a strong influence on the decision-making process. Across the experiments reviewed, it was found that humans make systematic mistakes in their decisions and that the evolutionarily older part of the brain is activated under risk and uncertainty. While the results do not explicitly support any specific explanation for uncertainty avoidance, what is clear is that people have an immediate negative emotional response to uncertainty.
Article
All human beings are limited by their knowledge and interpretative abilities, leading them to rely on simplifications to make decision-making more tractable. Kahneman and Tversky’s landmark work recognized that individuals’ choices often systematically deviate from neo-classical expectations of rationality; such deviations are known as behavioral biases. This article aims to examine how behavioral biases relate to each other and impact the investment decisions of individuals. The relationships between behavioral biases may be used to develop profiles of financial behavior, which finance agents can use to provide more customized choices to their clients. The population for this study was individual investors in the Indian financial markets and individuals who may be prospective investors. A research instrument was created to study these associations; a pilot survey identified the most reliable items, which were retained and used for the final round of data collection. The analysis of the collected data revealed that there are relationships among the behavioral biases themselves. Based on these relationships, the biases were categorized and investor decision-making profiles were proposed.
Article
Advice from those who have experience with a decision problem is often believed to be beneficial for decision making. However, if predecessors do not properly update their evaluation of the options available to them based on their experience, they may fail to pass the information obtained from that experience on to their successors. This could lead to a worse outcome than in the absence of advice, since the entire group of decision-makers may herd on an inferior choice due to bad advice. Such bad advice could be driven by predecessors’ inability to update properly, or by a lack of willingness to exert the effort to update. In a laboratory experiment, we study how likely predecessors are to give useful advice and identify possible reasons for giving bad advice. We find that about half of the predecessors did not give useful advice, in the sense that they did not update their beliefs properly. However, individuals making the same decision for themselves multiple times were more likely to update correctly. The difference between advice givers and individual decision-makers suggests that unwillingness to exert effort may be the main driving force behind bad advice. As a result of bad advice, the presence of advice did not improve successors’ decision quality. We also find that taking on the role of advice giver may change how individuals make the decision for themselves, but self-selection of advice givers did not improve their effort level or advice quality. Interestingly, narcissistic personality is negatively related to advice givers’ effort level.
Article
People often extrapolate from data samples, inferring properties of the population such as the rate of some event, class, or group (e.g., the percentage of female scientists, the crime rate, the chances of suffering some illness). In many circumstances, though, the sample observed is non-random, i.e., it is affected by sampling bias. For instance, news media rarely display (intentionally or not) a balanced view of the state of the world, focusing particularly on dramatic and rare events. In this respect, a recent literature in Economics suggests that people often fail to account for sample selection in their inferences. We offer evidence of this phenomenon at the individual level in a tightly controlled lab setting and explore the conditions for its occurrence. We conjecture that people tend to update their beliefs as if no selection issues existed, unless they have extremely strong evidence about the data-generating process and the inference problem is simple enough. In this vein, we find no evidence of selection neglect in an experimental treatment in which subjects must infer the frequency of some event given a non-random sample while knowing the exact selection rule. In two treatments where the selection rule is ambiguous, in contrast, people extrapolate as if sampling were random. Further, they become more and more confident in the accuracy of their guesses as the experiment proceeds, even when the accumulated evidence patently signals a selection issue and hence warrants some caution in the inferences made. This is also true when the instructions give explicit clues about potential sampling issues.
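A small simulation sketch (assumptions mine, not the authors' design) of what selection neglect amounts to: dramatic events are over-reported, a naive observer estimates the event rate from the reported sample as if it were random, and an observer who knows the selection rule can invert it to recover the true rate.

```python
# Selection neglect illustration: events occur with true rate p, but the
# 'dramatic' outcomes (the 1s) are reported far more often than the 0s.
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.05                                # true rate of the dramatic event
report_if_event, report_if_not = 0.9, 0.1    # selection rule (assumed known below)

events = rng.random(100_000) < p_true
reported = np.where(events,
                    rng.random(events.size) < report_if_event,
                    rng.random(events.size) < report_if_not)
sample = events[reported]

naive = sample.mean()                        # treats the reported sample as random
# Invert the selection rule: recover P(event) from P(event | reported).
corrected = (naive / report_if_event) / (naive / report_if_event
                                          + (1 - naive) / report_if_not)

print(f"true rate       {p_true:.3f}")
print(f"naive estimate  {naive:.3f}")        # heavily inflated
print(f"corrected       {corrected:.3f}")    # close to the true rate
```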
Article
A portion of results that are judged significant on the basis of classical statistical tests will be due to chance. The conditional probability of error in the presence of statistical significance depends upon the significance level employed, the power of the test, and the prior probability that a valid null hypothesis was chosen for testing. Bayesian theory provides a logical model for the design of experiments in which classical hypothesis testing is to be used. In this manner, Bayesian objectives can be realized while the safeguards of classical statistical hypothesis testing are retained. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
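The quantity described can be written out directly: if pi is the prior probability that the tested null hypothesis is true, alpha the significance level and 1 - beta the power, then by Bayes' rule P(H0 true | significant) = alpha*pi / (alpha*pi + (1 - beta)*(1 - pi)). A worked example with illustrative numbers (not taken from the article):

```python
# Probability that a 'significant' result reflects a true null hypothesis
# (illustrative numbers, not the article's): Bayes' rule applied to testing.
alpha, power, pi_null = 0.05, 0.50, 0.50     # pi_null = prior P(H0 true)
p_error_given_sig = alpha * pi_null / (alpha * pi_null + power * (1 - pi_null))
print(f"P(null true | significant) = {p_error_given_sig:.2f}")   # 0.09 here
```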
Article
A review of literature giving evidence of response preferences, mainly in human Ss, is undertaken. The preferences are taken from experimental work in early psychophysical data, subjective estimates of "chance" sequences, response mechanisms at the threshold, "gambling" situations, probability learning, and the influence of instructions on performance in probability learning. Explanations of response preferences are reviewed and the relationships between subjective uncertainty and preferences are explored with an attempt to give cohesion to a diverse body of experimental evidence. (109 ref.)
Cohen, J. The statistical power of abnormal-social psychological research. Journal of Abnormal and Social Psychology, 1962, 65, 145-153.
Cohen, J. Statistical power analysis for the behavioral sciences. New York: Academic Press, 1969.
Edwards, W. Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgment. New York: Wiley, 1968.
Estes, W. K. Probability learning. In A. W. Melton (Ed.), Categories of human learning. New York: Academic Press, 1964.
Overall, J. E. Classical statistical hypothesis testing within the context of Bayesian theory. Psychological Bulletin, 1969, 71, 285-292.