Article (PDF available)

Myside Bias, Rational Thinking, and Intelligence

Abstract

Myside bias occurs when people evaluate evidence, generate evidence, and test hypotheses in a manner biased toward their own prior opinions and attitudes. Research across a wide variety of myside bias paradigms has revealed a somewhat surprising individual-differences finding: the magnitude of the myside bias shows very little relation to intelligence. Avoiding myside bias is thus one rational thinking skill that is not assessed by intelligence tests or even indirectly indexed through its correlation with cognitive ability measures.
... The diversity of beliefs is a vexing problem because people often think that the facts support their own views about controversial topics. This tendency has come to be known as the myside bias (Stanovich, West, & Toplak, 2013). For example, Jon Baron (1995) found that people tend to first think of arguments or reasons that support their own views about abortion. ...
... It is still an open question if knowledge of animals used as food changes behaviors or if education can impart the relevant knowledge to play a role in decisions about animal consumption. As we have noted, there is considerable evidence that people are often motivated to believe things that are consistent with their pre-existing views (Stanovich et al., 2013). Existing literature and some of the results from the current studies might bolster one's skepticism of the power of education. ...
Article
Full-text available
There have been extensive outreach programs to educate people about the realities of animal food production. However, relatively little attention has been paid to measuring what people actually know about the production of animal food products and the conditions in which those animals exist. A reliable measure of knowledge of animal products is required to determine whether making people more knowledgeable about the condition of animals reduces animal product consumption. In four studies, we developed an objective measure of knowledge of animal consumption, the Knowledge of Animals as Food Scale (KAFS). Study 1 (N = 265) used Item Response Theory to analyze 35 knowledge-based items. Based on Study 1, Study 2 (N = 243) tested the 11 best knowledge-based items and demonstrated that the scale had convergent, discriminant, and criterion validity. The KAFS successfully predicted fewer days of meat consumption in an average week. Study 3 (N = 289) refined the instrument to nine items and replicated the results of Study 2. Study 4 (N = 201) replicated the results and provided causal evidence that a very brief educational intervention can increase knowledge measured by the scale (d = .28). In Studies 2, 3, and 4, the KAFS was often a unique or best predictor of consumption of animal products compared to other values concerning animals. Having a valid, reliable measure of knowledge of animals used as food has important psychological and ethical implications, including providing insight into whether education works and into ways to help promote individual autonomy.
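The intervention effect reported above (d = .28) is a standardized mean difference. As a minimal sketch of how such an effect size is computed, the following uses Cohen's d with a pooled standard deviation; the scores below are hypothetical illustration values, not data from the studies.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical knowledge-scale scores: intervention vs. control
intervention = [7, 8, 6, 9, 7, 8, 7, 6]
control = [6, 7, 5, 7, 6, 7, 6, 5]
print(round(cohens_d(intervention, control), 2))
```

By convention, d around .2 is a small effect, which is roughly the magnitude the abstract reports for the brief educational intervention.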
... Still, the proponent of the Right Reasons View can say that this objection is consistent with their view. We are sometimes given good reasons to change our beliefs but don't recognize those reasons as good ones: we simply make a mistake, or our background beliefs bias us toward judging the presentation of those reasons unfavorably (see Stanovich et al., 2013). So the proponent of the Right Reasons View can say that the opponent's failure to recognize the right reasons doesn't mean that there aren't any. ...
Article
Full-text available
What is the epistemological significance of deep disagreement? Part I explored the nature of deep disagreement, while Part II considers its epistemological significance. It focuses on two core problems: the incommensurability and the rational resolvability problems. We critically survey key responses to these challenges, before raising worries for a variety of responses to them, including skeptical, relativist, and absolutist responses to the incommensurability problem, and to certain steadfast and conciliatory responses to the rational resolvability problem. We then pivot to the ethical and political dimensions of deep disagreement. We focus on whether an unwillingness to engage with positions one considers to be immoral or repugnant might be good, and conclude with some reflections on the moral risks of engagement.
... Keith Stanovich and his colleagues have found that even very intellectually able people are often susceptible to the same biases in thinking as less intellectually able people. A notable problem is "myside bias," or seeing problems from one's own preferred point of view (Stanovich, 2009, 2021; Stanovich & Toplak, 2019; Stanovich et al., 2013). ...
Article
In this article, we present a hierarchical model for teaching scientific thinking to gifted students. This article follows up on an article published 40 years ago in this journal. The problem now, as 40 years ago, is that gifted students often are taught science courses at a more intensive level, but without their truly learning how to think scientifically. We argue that students of science need not only learn the content of science courses, but also learn, at a deep level, how to think scientifically. Our model addresses the issue of what this deep level consists of. Level I involves Teaching Scientific Knowledge. Level II involves Teaching Scientific Problem Solving. Level III involves the deepest level of scientific thinking: Teaching Scientific Problem Finding. We end the article with conclusions about these issues.
... Given that valued beliefs and prior beliefs often correlate, it is easy to conflate them incorrectly, but not all prior beliefs are valued beliefs. Thus, in the process of assessing and gathering information, information distortion is more akin to myside bias (Stanovich, West, & Toplak, 2013) than to confirmation bias. ...
Chapter
The following paper discusses the status of intelligence research in the Anthropocene. First, I discuss how the transformations we have experienced signal the need to more deeply consider the role of context in our thinking of intelligence. Next, I discuss how the cultural evolution of our symbolic abilities is key to understand the properties of modern-day human intelligence. Then, I comment on how, in the origin of intelligence research, the invention of the theory of general intelligence was marked by a lack of consideration of the role of context, notwithstanding that the British founders of the field were working in the midst of the great transformation provoked by the Industrial Revolution. Finally, I conclude by discussing how intelligence research should be conducted to address the demands of the Anthropocene.
Article
Some nations of the world have fallen into autocracy or outright dictatorship. Others are democracies, anocracies (quasi-democracies with features of both democracies and autocracies), or pseudo-democracies (autocracies pretending to be democracies). Some of these nations still can prevent themselves from falling into the dictatorship trap. They have a choice, but what will they do? This article discusses the current state of governance in the world, how autocrats emerge, and what can be done—especially in our educational systems—to prevent their emergence. Strengthening democracy, which, according to Freedom House, has been on a steady long-term decline, must be a priority for schooling of young people today.
Chapter
Beliefs play a central role in our lives. They lie at the heart of what makes us human, they shape the organization and functioning of our minds, they define the boundaries of our culture, and they guide our motivation and behavior. Given their central importance, researchers across a number of disciplines have studied beliefs, leading to results and literatures that do not always interact. The Cognitive Science of Belief aims to integrate these disconnected lines of research to start a broader dialogue on the nature, role, and consequences of beliefs. It tackles timeless questions, as well as applications of beliefs that speak to current social issues. This multidisciplinary approach to beliefs will benefit graduate students and researchers in cognitive science, psychology, philosophy, political science, economics, and religious studies.
Article
Full-text available
Natural myside bias is the tendency to evaluate propositions from within one's own perspective when given no instructions or cues (such as within-participants conditions) to avoid doing so. We defined the participant's perspective as their previously existing status on four variables: their sex, whether they smoked, their alcohol consumption, and the strength of their religious beliefs. Participants then evaluated a contentious but ultimately factual proposition relevant to each of these demographic factors. Myside bias is defined between-participants as the mean difference in the evaluation of the proposition between groups with differing prior status on the variable. Whether an individual difference variable (such as cognitive ability) is related to the magnitude of the myside bias is indicated by whether the individual difference variable interacts with the between-participants status variable. In two experiments involving a total of over 1400 university students (n = 1484) and eight different comparisons, we found very little evidence that participants of higher cognitive ability displayed less natural myside bias. The degree of myside bias was also relatively independent of individual differences in thinking dispositions. We speculate that ideas from memetic theory and dual-process theory might help to explain why natural myside bias is quite dissociated from individual difference variables.
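The abstract above defines myside bias between-participants as a mean difference in proposition evaluations between groups with differing prior status, and asks whether cognitive ability interacts with that status variable. As a rough sketch only (with hypothetical ratings and a simple median-split proxy for the ability interaction, not the authors' actual analysis), the logic can be illustrated like this:

```python
import statistics

def myside_bias(evals_group1, evals_group2):
    """Between-participants myside bias: mean difference in proposition
    evaluations between groups with opposing prior status."""
    return statistics.mean(evals_group1) - statistics.mean(evals_group2)

# Hypothetical 1-9 agreement ratings of an anti-smoking proposition,
# split by smoking status and by a median split on cognitive ability.
smokers_low, nonsmokers_low = [3, 4, 2, 3], [7, 8, 6, 7]
smokers_high, nonsmokers_high = [3, 4, 3, 2], [7, 7, 8, 6]

bias_low = myside_bias(nonsmokers_low, smokers_low)    # bias, low-ability half
bias_high = myside_bias(nonsmokers_high, smokers_high) # bias, high-ability half

# The ability-by-status interaction corresponds to the difference between
# these two bias estimates; a value near zero means ability does not
# moderate the myside bias, which is the pattern the abstract reports.
print(bias_low, bias_high, bias_high - bias_low)
```

In the full design, this interaction would be tested formally (e.g., with an ability-by-status term in a regression) across each of the eight comparisons rather than via a median split.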
Article
The Cognitive Basis of Science concerns the question 'What makes science possible?' Specifically, what features of the human mind and of human culture and cognitive development permit and facilitate the conduct of science? The essays in this volume address these questions, which are inherently interdisciplinary, requiring co-operation between philosophers, psychologists, and others in the social and cognitive sciences. They concern the cognitive, social, and motivational underpinnings of scientific reasoning in children and lay persons as well as in professional scientists. The editors' introduction lays out the background to the debates, and the volume includes a consolidated bibliography that will be a valuable reference resource for all those interested in this area. The volume will be of great importance to all researchers and students interested in the philosophy or psychology of scientific reasoning, as well as those, more generally, who are interested in the nature of the human mind.
Article
Critics of intelligence tests-writers such as Robert Sternberg, Howard Gardner, and Daniel Goleman-have argued in recent years that these tests neglect important qualities such as emotion, empathy, and interpersonal skills. However, such critiques imply that though intelligence tests may miss certain key noncognitive areas, they encompass most of what is important in the cognitive domain. In this book, Keith E. Stanovich challenges this widely held assumption. Stanovich shows that IQ tests (or their proxies, such as the SAT) are radically incomplete as measures of cognitive functioning. They fail to assess traits that most people associate with "good thinking," skills such as judgment and decision making. Such cognitive skills are crucial to real-world behavior, affecting the way we plan, evaluate critical evidence, judge risks and probabilities, and make effective decisions. IQ tests fail to assess these skills of rational thought, even though they are measurable cognitive processes. Rational thought is just as important as intelligence, Stanovich argues, and it should be valued as highly as the abilities currently measured on intelligence tests.
Article
This book attempts to resolve the Great Rationality Debate in cognitive science-the debate about how much irrationality to ascribe to human cognition. It shows how the insights of dual-process theory and evolutionary psychology can be combined to explain why humans are sometimes irrational even though they possess remarkably adaptive cognitive machinery. The book argues that to characterize fully differences in rational thinking, we need to replace dual-process theories with tripartite models of cognition. Using a unique individual differences approach, it shows that the traditional second system (System 2) of dual-process theory must be further divided into the reflective mind and the algorithmic mind. Distinguishing them gives a better appreciation of the significant differences in their key functions: the key function of the reflective mind is to detect the need to interrupt autonomous processing and to begin simulation activities, whereas that of the algorithmic mind is to sustain the processing of decoupled secondary representations in cognitive simulation. The book then uses this algorithmic/reflective distinction to develop a taxonomy of cognitive errors made on tasks in the heuristics and biases literature. It presents the empirical data to show that the tendency to make these thinking errors is not highly related to intelligence. Using a tripartite model of cognition, the book shows how, when both are properly defined, rationality is a more encompassing construct than intelligence, and that IQ tests fail to assess individual differences in rational thought. It then goes on to discuss the types of thinking processes that would be measured if rational thinking were to be assessed as IQ has been.
Article
Much research in the last 2 decades has demonstrated that humans deviate from normative models of decision making and rational judgment. In 4 studies involving 954 participants, the authors explored the extent to which measures of cognitive ability and thinking dispositions can predict discrepancies from normative responding on a variety of tasks from the heuristics and biases literature, including the selection task, belief bias in syllogistic reasoning, argument evaluation, base-rate use, covariation detection, hypothesis testing, outcome bias, if-only thinking, knowledge calibration, hindsight bias, and the false-consensus paradigm. Significant relationships involving cognitive ability were interpreted as indicating algorithmic-level limitations on the computation of the normative response. Relationships with thinking dispositions were interpreted as indicating that styles of epistemic regulation can predict individual differences in performance on these tasks.
Article
College-student subjects made notes about the morality of early abortion, as if they were preparing for a class discussion. Analysis of the quality of their arguments suggests that a distinction can be made between arguments based on well-supported warrants and those based on warrants that are easily criticized. The subjects also evaluated notes made by other, hypothetical, students preparing for the same discussion. Most subjects evaluated a set of arguments as better when the arguments were all on one side than when both sides were presented, even when the hypothetical student was on the opposite side of the issue from the evaluator. Subjects who favored one-sidedness also tended to make one-sided arguments themselves. The results suggest that "myside bias" is partly caused by beliefs about what makes thinking good.
Article
We propose a model of motivated skepticism that helps explain when and why citizens are biased information processors. Two experimental studies explore how citizens evaluate arguments about affirmative action and gun control, finding strong evidence of a prior attitude effect such that attitudinally congruent arguments are evaluated as stronger than attitudinally incongruent arguments. When reading pro and con arguments, participants (Ps) counterargue the contrary arguments and uncritically accept supporting arguments, evidence of a disconfirmation bias. We also find a confirmation bias, the seeking out of confirmatory evidence, when Ps are free to self-select the source of the arguments they read. Both the confirmation and disconfirmation biases lead to attitude polarization, the strengthening of attitudes from Time 1 to Time 2, especially among those with the strongest priors and highest levels of political sophistication. We conclude with a discussion of the normative implications of these findings for rational behavior in a democracy.
Article
The myside bias in written argumentation entails excluding other-side information from essays. To determine the locus of the bias, 86 Experiment 1 participants were assigned to argue either for or against their preferred side of a proposal. Participants were given either balanced or unrestricted research instructions. Balanced research instructions significantly increased the use of other-side information. Participants' notes, rather than their search patterns, predicted the myside bias. Participants who defined good arguments as those that can be "proved by facts" were more prone to the myside bias. In Experiment 2, 84 participants of high and low argumentation ability read a text called "More Than Just the Facts," designed to contradict this fact-based argumentation schema. For high argumentation ability participants, the intervention reduced the myside bias, but for low ability participants it increased the bias. The roots of the myside bias are underdeveloped argumentation schemata leading to misconceptions about research and argumentation.