Cognitive Biases in Natural Language: Automatically Detecting, Differentiating, and Measuring Bias in Text
Abstract
We examine preliminary results from the first automated system designed to detect the 188 cognitive biases included in the 2016 Cognitive Bias Codex, applied to both human- and AI-generated text and compared against a human performance baseline. The baseline was constructed from the collective intelligence of a small but diverse group of volunteers, each independently submitting the cognitive biases they detected in every sample used in the first phase of the task. In the absence of any established and relevant prior benchmark, this baseline served as an approximation of ground truth. The system's performance exceeded that of the average human but fell below both the top-performing human and the collective, with stronger performance on a subset of 18 of the 24 categories in the codex. The same version of the system was then applied to responses to 150 open-ended questions posed to each of the top 5 performing closed- and open-source Large Language Models at the time of testing. This second phase showed measurably higher rates of cognitive bias detection in roughly half of all categories than were observed in human-generated text. Two observed types of model contamination, in which the models gave canned responses, were also considered. Finally, the levels of cognitive bias detected in each model were compared both with one another and with data from the first phase.
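As a rough illustration of the comparison the abstract describes, per-category detection rates can be tallied for the system and for a pooled human baseline, then compared category by category. This is a minimal sketch, not the system's actual implementation (which the abstract does not detail); the category names, data structures, and sample labels below are all hypothetical:

```python
# Illustrative sketch only: all names and data here are hypothetical stand-ins
# for the codex's 24 categories and the per-sample bias labels.
from collections import Counter

CATEGORIES = ["memory", "social", "probability"]  # stand-ins for the 24 codex categories

def detection_rates(detections, n_samples):
    """Per-category detection rate: detections per analyzed text sample."""
    counts = Counter(detections)  # e.g. ["social", "social", "memory", ...]
    return {c: counts[c] / n_samples for c in CATEGORIES}

def compare(system_rates, baseline_rates):
    """Categories in which the system's detection rate exceeds the baseline's."""
    return [c for c in CATEGORIES if system_rates[c] > baseline_rates[c]]

# Hypothetical bias labels detected by the system and by the pooled volunteers
# across the same three text samples.
system = ["social", "social", "memory"]
humans = ["social", "probability"]
print(compare(detection_rates(system, 3), detection_rates(humans, 3)))
# -> ['memory', 'social']
```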
... This objectivity is critical in making unbiased investment decisions, as LLMs rely on data-driven insights rather than subjective judgments. While some biases inherent in their training data can persist, LLMs significantly reduce the influence of human biases, such as overconfidence or confirmation bias, on investment decisions [17,18]. Furthermore, LLMs can process and analyze vast amounts of financial data, transcending the limitations of individual analysts or teams. ...
This paper introduces MarketSenseAI, an innovative framework leveraging GPT-4’s advanced reasoning for selecting stocks in financial markets. By integrating Chain of Thought and In-Context Learning, MarketSenseAI analyzes diverse data sources, including market trends, news, fundamentals, and macroeconomic factors, to emulate expert investment decision-making. The development, implementation, and validation of the framework are discussed in detail, underscoring its capability to generate actionable and interpretable investment signals. A notable feature of this work is the use of GPT-4 both as a predictive mechanism and as a signal evaluator, revealing the significant impact of the AI-generated explanations on signal accuracy, reliability, and acceptance. Through empirical testing on the competitive S&P 100 stocks over a 15-month period, MarketSenseAI demonstrated exceptional performance, delivering excess alpha of 10–30% and achieving a cumulative return of up to 72% over the period, while maintaining a risk profile comparable to the broader market. Our findings highlight the transformative potential of Large Language Models in financial decision-making, marking a significant leap in integrating generative AI into financial analytics and investment strategies.
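The abstract names two prompting techniques, Chain of Thought and In-Context Learning, without reproducing the framework's prompts. The sketch below is a hypothetical illustration of how the two might be combined over the data sources listed; it is not MarketSenseAI's actual prompt, and every string and function name here is an assumption:

```python
# Hypothetical sketch of combining In-Context Learning (a worked example) with
# a Chain-of-Thought instruction. This is NOT MarketSenseAI's actual prompt.
EXAMPLE = (
    "Ticker: XYZ\nNews: supplier disruption\nFundamentals: falling margins\n"
    "Reasoning: weakening fundamentals plus negative news flow -> downside risk.\n"
    "Signal: SELL\n"
)

def build_prompt(ticker: str, news: str, fundamentals: str, macro: str) -> str:
    """Assemble a prompt that shows a worked example (In-Context Learning)
    and then asks the model to reason step by step (Chain of Thought)."""
    return (
        "You are an expert equity analyst.\n\n"
        f"Example:\n{EXAMPLE}\n"
        f"Ticker: {ticker}\nNews: {news}\nFundamentals: {fundamentals}\n"
        f"Macro: {macro}\n"
        "Reasoning: think step by step, then output Signal: BUY, HOLD, or SELL."
    )

print(build_prompt("ABC", "new product launch", "stable cash flow", "rates easing"))
```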
... Each of these logging steps may be subject to novel forms of scrutiny and used as new forms of feedback for purposes of bias reduction and improving the granularity of alignment. One cognitive bias detection system our team developed earlier in 2023 already outperformed the average human at the task of detecting cognitive biases using text alone [29], offering one example of a system that could be utilized to process logged intermediate data and provide bias-related feedback. ...
While many in the domain of AI claim that their work is "biologically inspired", most strongly avoid the forms of dynamic complexity that are inherent in all of evolutionary history's more capable surviving organisms. This work seeks to illustrate what introducing human-like forms of complexity into software systems looks like, why it is important, and why humans so frequently seek to avoid such complexity. The complex dynamics of these factors are discussed and illustrated in the context of Chaos Theory, the Three-Body Problem, category concepts, the tension between interacting forces and entities, and the cognitive biases influencing how complexity is handled and reduced.
The impact of complexity within government and societal systems is considered relative to the limits of human cognitive bandwidth, and the resulting reliance on cognitive biases and systems of automation once that bandwidth is exceeded. Examples are considered of how humans and societies have attempted to cope with the growing gap between the rate at which system complexity increases and the rate at which human cognitive capacity does. The potential of, and urgent need for, systems capable of handling existing and future complexity by applying greater cognitive bandwidth through scalable AGI are also considered, along with the practical limitations and considerations involved in deploying such systems under real-world conditions. Several paradoxes arising from the influence of prolific Narrow Tool AI systems manipulating large portions of the population are also noted.
Is the neuroanatomy of the language structural connectome modulated by the life-long experience of speaking a specific language? The current study compared the brain white matter connections of the language and speech production network in a large cohort of 94 native speakers of two very different languages: an Indo-European morphosyntactically complex language (German) and a Semitic, root-based language (Arabic). Using high-resolution diffusion-weighted MRI and tractography-based network statistics of the language connectome, we demonstrated that German native speakers exhibited stronger connectivity in an intra-hemispheric frontal to parietal/temporal dorsal language network, known to be associated with complex syntax processing. In comparison, Arabic native speakers showed stronger connectivity in the connections between semantic language regions, including the left temporo-parietal network, and stronger inter-hemispheric connections via the posterior corpus callosum connecting bilateral superior temporal and inferior parietal regions. The current study suggests that the structural language connectome develops and is modulated by environmental factors such as the characteristic processing demands of the native language.
A new form of e-governance is proposed based on systems seen in biological life at all scales. This model of e-governance offers the performance of collective superintelligence, equally high ethical quality, and a substantial reduction in resource requirements for government functions. In addition, the problems seen in modern forms of government such as misrepresentation, corruption, lack of expertise, short-term thinking, political squabbling, and popularity contests may be rendered virtually obsolete by this approach. Lastly, this model of government generates a digital ecosystem of intelligent life which mirrors physical citizens, serving to bridge the emotional divide between physical and digital life, while also producing the first form of government able to keep pace with accelerating technological progress.
Information visualization designers strive to design data displays that allow for efficient exploration, analysis, and communication of patterns in data, leading to informed decisions. Unfortunately, human judgment and decision making are imperfect and often plagued by cognitive biases. There is limited empirical research documenting how these biases affect visual data analysis activities. Existing taxonomies are organized by cognitive theories that are hard to associate with visualization tasks. Based on a survey of the literature, we propose a task-based taxonomy of 154 cognitive biases organized in 7 main categories. We hope the taxonomy will help visualization researchers relate their design to the corresponding possible biases, and lead to new research that detects and addresses biased judgment and decision making in data visualization.
The peak-end rule (Fredrickson & Kahneman, 1993) asserts that, when people retrospectively evaluate an experience (e.g., the previous workday), they rely more heavily on the episode with peak intensity and on the final (end) episode than on other episodes in the experience. We meta-analyzed 174 effect sizes and found strong support for the peak-end rule. The peak-end effect on retrospective summary evaluations was: (1) large (r = 0.581, 95% Confidence Interval = 0.487–0.661), (2) robust across boundary conditions, (3) comparable to the effect of the overall average (mean) score and stronger than the effects of the trend and variability across all episodes in the experience, (4) stronger than the effects of the first (beginning) and lowest intensity (trough) episodes, and (5) stronger than the effect of the duration of the experience (which was essentially nil, thereby supporting the idea of duration neglect; Fredrickson & Kahneman, 1993). We provide a future research agenda and practical implications.
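For readers unfamiliar with the operationalization this abstract relies on, the peak-end prediction of a retrospective evaluation is commonly computed as the average of the most intense episode and the final episode, with duration ignored. A minimal sketch, using made-up episode intensities (the function names and data are illustrative, not from the meta-analysis):

```python
# Minimal illustration of the peak-end operationalization: the retrospective
# evaluation is predicted by the mean of the peak and end episodes, largely
# ignoring duration. Episode intensities here are made up.
def peak_end(episodes):
    """Predicted retrospective evaluation under the peak-end rule."""
    return (max(episodes) + episodes[-1]) / 2

def overall_mean(episodes):
    """Competing predictor: the mean of all episodes."""
    return sum(episodes) / len(episodes)

day = [3, 5, 9, 4, 7]          # hour-by-hour intensity of a workday
print(peak_end(day))           # (9 + 7) / 2 = 8.0
print(overall_mean(day))       # 28 / 5 = 5.6
```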
As intuitive statisticians, human beings suffer from identifiable biases—cognitive and otherwise. Human beings can also be “noisy” in the sense that their judgments show unwanted variability. As a result, public institutions, including those that consist of administrative prosecutors and adjudicators, can be biased, noisy, or both. Both bias and noise produce errors. Algorithms eliminate noise, and that is important; to the extent that they do so, they prevent unequal treatment and reduce errors. In addition, algorithms do not use mental shortcuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. At the same time, the use of algorithms by administrative agencies raises many legitimate questions and doubts. Among other things, algorithms can encode or perpetuate discrimination, perhaps because their inputs are based on discrimination, or perhaps because what they are asked to predict is infected by discrimination. But if the goal is to eliminate discrimination, properly constructed algorithms nonetheless have a great deal of promise for administrative agencies.
Stereotyping is one of the biggest single issues in social psychology, but relatively little is known about how and why stereotypes form. Stereotypes as Explanations is the first book to explore the process of stereotype formation, the way that people develop impressions and views of social groups. Conventional approaches to stereotyping assume that stereotypes are based on erroneous and distorted processes, but the authors of this book take a very different view, namely that stereotypes form in order to explain aspects of social groups and in particular to explain relationships between groups. In developing this view, the authors explore classic and contemporary approaches to stereotype formation and advance new ideas about such topics as the importance of category formation, essentialism, illusory correlation, interdependence, social reality and stereotype consensus. They conclude that stereotypes are indeed explanations, but they are nevertheless highly selective, variable and frequently contested explanations.
Philosophers and psychologists have long worried that the human tendency to anthropomorphize leads us to err in our understanding of nonhuman minds. This tendency, which I call intuitive anthropomorphism, is a heuristic used by our unconscious folk psychology to understand nonhuman animals. The dominant understanding of intuitive anthropomorphism underestimates its complexity. If we want to understand and control intuitive anthropomorphism, we must treat it as a cognitive bias and look to the empirical evidence. This evidence suggests that the most common control for intuitive anthropomorphism, Morgan’s Canon, should be rejected, while others are incomplete. It also suggests new approaches.