Article

Philosophers' Biased Judgments Persist Despite Training, Expertise, and Reflection

Authors:
Eric Schwitzgebel and Fiery Cushman

Abstract

We examined the effects of framing and order of presentation on professional philosophers' judgments about a moral puzzle case (the "trolley problem") and a version of the Tversky & Kahneman "Asian disease" scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider "different variants of the scenario or different ways of describing the case". Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise. Copyright © 2015 Elsevier B.V. All rights reserved.

... Second, traditionally, political philosophers have taken their considered judgements at face value. However, experimental philosophy shows that our reported considered judgements are affected by cognitive biases, e.g., order effects (Schwitzgebel and Cushman 2015), negatively affecting the credence that we should ascribe to those judgements. What should political philosophers do in light of such findings? ...
... Thus, we should expect philosophers' intuitions regarding such thought experiments to be more reliable than those of laypeople. However, much of the force of this position is challenged by findings to the effect that philosophers are vulnerable to the same cognitive illusions as laypeople (Schwitzgebel and Cushman 2015; Tobia et al. 2013; Schulz et al. 2011). Moreover, even if some thought experiments are complex, outlandish, and require extensive training for full comprehension such that only the intuitions of philosophers have greater credence, the sort of thought experiments typically used in political philosophy, such as those involved in the discussion of the levelling down objection, are not particularly complex, outlandish, or unintelligible to people without specific training (Knobe and Nichols 2008, pp. ...
... Much work in experimental philosophy indicates that different groups of people have different concepts of knowledge, intention, reference, etc., thus casting doubt on the universality of alleged folk concepts (Buckwalter and Stich 2014; Machery et al. 2004; Weinberg et al. 2001). Similarly, studies suggest that philosophers are roughly as prone to various forms of cognitive biases as laypeople are, thus casting doubt on reliability and interpretation (Schwitzgebel and Cushman 2015; Tobia et al. 2013). ...
Article
Full-text available
The last two decades have seen an increasing interest in exploring philosophical questions using methods from empirical sciences, i.e., the so-called experimental philosophy approach. Political philosophy has so far been relatively unaffected by this trend. However, because political philosophers typically rely on traditional philosophical methods—most notably reflective equilibrium in a form which requires neither empirical examination of people’s considered beliefs nor experimental attention to psychological studies of the mechanisms affecting those beliefs—it is as proper a target of the standard challenges from experimental philosophers as any other philosophical discipline. Sometimes experimental philosophers modestly present their approach as a supplement to traditional philosophical methods. I argue that the arguments in favour of experimental philosophy are such that if they are sound, then the use of empirical methods should drastically change how political philosophy is done.
... Similarly, within philosophical methodology and psychology there is an ongoing discussion concerning the role of intuitions in philosophers' moral and political arguments, and of whether philosophers possess an expertise that makes them less biased in their intuitions (e.g. Nado 2014; Schwitzgebel and Cushman 2015; Buckwalter 2016). Once more, some of the findings discussed are relevant for the assessments of this paper, even if its main focus lies elsewhere. ...
... Conte 2022), for instance a conclusion about policy choice and intuitions about whether the effects of such policies are defensible. Intuitions can, however, be biased (see Schwitzgebel and Cushman 2015), and although some studies show a 'slight advantage' for philosophers (Horvath and Wiegmann 2021, p. 342), there is little evidence that philosophers' intuitions are significantly less biased than those of non-philosophers (e.g. Nado 2014; Schwitzgebel and Cushman 2015). ...
... Intuitions can, however, be biased (see Schwitzgebel and Cushman 2015), and although some studies show a 'slight advantage' for philosophers (Horvath and Wiegmann 2021, p. 342), there is little evidence that philosophers' intuitions are significantly less biased than those of non-philosophers (e.g. Nado 2014; Schwitzgebel and Cushman 2015). Finally, philosophers' inclination to focus on concepts and argumentative relations before empirical detail (Lamont 2009), along with their sometimes limited historical and contextual knowledge of policy issues (Wolff 2018), may compromise forecasting and approximations of long-term developments. ...
Article
Full-text available
Well-functioning modern democracies depend largely on expert knowledge and expert arrangements, but this reliance on expertise also causes severe problems for their legitimacy. Somewhat surprisingly, moral and political philosophers have come to play an increasing role as experts in contemporary policymaking. The paper discusses different epistemic and democratic worries raised by the presence of philosopher experts in contemporary governance, relying on a broad review of existing studies, and suggests measures to alleviate them. It is argued that the biases philosophers are vulnerable to may reduce the quality of their advice, and that the characteristics of philosophers’ expertise, and controversies around what their competences amount to, make it hard to distinguish proper from less proper philosopher experts. Reliance on philosopher experts may also intensify democratic worries, not least due to the depoliticization pressures that the introduction of ethics expertise tends to give rise to. Still, philosophers have competences and orientations that policy discussions and democratic deliberations are likely to profit from. Worries about philosopher experts may moreover be mitigated by means of a proper design of expert arrangements. Confronted with the genuine epistemic risks and democratic challenges of contemporary governance, there is obviously no quick fix, but when institutionalized in the right way, philosophers’ involvement in present-day policymaking bears significant promise.
... It proceeds less from the diversity of intuitive judgements found between populations than from "intra-individual diversity" (Knobe and Nichols 2017, § 2.1.2); the ways that individuals' judgements about philosophical cases, including the judgements of philosophers, appear to be sensitive to philosophically irrelevant contextual factors, such as "the induction of irrelevant emotions (Cameron et al. 2013), the order in which cases are presented (Petrinovich and O'Neill 1996; Swain et al. 2008; Wright 2010), and the way an outcome is described (e.g., Petrinovich and O'Neill 1996; Schwitzgebel and Cushman 2015)" (Knobe and Nichols 2017, § 2.1.2). ...
... Nevertheless, by Machery's "usual characterization" of epistemic peerhood - "x is an epistemic peer of y with respect to a particular claim if and only if x has the same amount of evidence y has, has considered it as carefully as y has, is as intelligent as y, and is immune to the reasoning biases y is immune to" (Machery 2017, p. 130; he cites Kelly 2005, Christensen 2009) - it is not particularly difficult either. One hallmark of transformative experiences, made clear in Paul's original paper (Paul 2015, p. 155), is epistemic transformation; some knowledge, and some capacities for ... The restrictionist might object here that philosophers' judgements are not different from those of laypeople in at least one relevant respect; they are prone to the same framing and order effects (Schwitzgebel and Cushman 2015), and therefore, as restrictionists claim, unreliable. The problem with this objection is that it presumes just what the argument from expressive responding, presented in section II of this paper, casts doubt upon: that the activity of responding to questionnaires is relevantly similar to the making of judgements in professional settings such as seminars, conferences, and journal articles, which are plausibly interpreted precisely as aiming to "filter out" such effects. ...
Article
Full-text available
The Experimental Philosophy (“X-Phi”) movement applies the methodology of empirical sciences – most commonly empirical psychology – to traditional philosophical questions. In its radical, “negative” form, X-Phi uses the resulting empirical data to cast doubt on the reliability of common philosophical methods, arguing for radical reform of philosophical methodology. In this paper I develop two connected methodological worries about this negative enterprise. The first concerns the data elicited by questionnaires and other empirical survey methods; recent work in political science suggests that such surveys frequently do not elicit the participants’ candid judgements, but rather their expressions of certain attitudes and identifications. This possibility stymies the arguments from experimental data to a radical overhaul of philosophical methodology. The second builds on recent work by L.A. Paul and Kieran Healy concerning social science methodology, applying it to the use of those methods in X-Phi. It concerns experimental design where the treatment investigated is a “transformative” one; since a philosophical education is plausibly one such treatment, doubt is cast on any claim that apparent differences between the judgements of philosophers and ordinary folk have implications for philosophical methodology.
... Empirically informed engineering ethics research and education are especially important, since interdisciplinary research in the social and behavioral sciences has resulted in the relatively counterintuitive findings about moral judgments and ethical behaviors discussed above. Philosophical speculation is not always a reliable guide to truths regarding human behaviors (Machery, 2017; Schwitzgebel & Cushman, 2015), but could become more reliable if supported by empirical research. ...
Article
Full-text available
This paper describes the motivations and some directions for bringing insights and methods from moral and cultural psychology to bear on how engineering ethics is conceived, taught, and assessed. Therefore, the audience for this paper is not only engineering ethics educators and researchers but also administrators and organizations concerned with ethical behaviors. Engineering ethics has typically been conceived and taught as a branch of professional and applied ethics with pedagogical aims, where students and practitioners learn about professional codes and/or Western ethical theories and then apply these resources to address issues presented in case studies about engineering and/or technology. As a result, accreditation and professional bodies have generally adopted ethical reasoning skills and/or moral knowledge as learning outcomes. However, this paper argues that such frameworks are psychologically “irrealist” and culturally biased: it is not clear that ethical judgments or behaviors are primarily the result of applying principles, or that ethical concerns captured in professional codes or Western ethical theories do or should reflect the engineering ethical concerns of global populations. Individuals from Western, educated, industrialized, rich, and democratic (WEIRD) cultures are outliers on various psychological and social constructs, including self-concepts, thought styles, and ethical concerns. However, engineering is more cross-cultural and international than ever before, with engineers and technologies spanning multiple cultures and countries. For instance, different national regulations and cultural values can come into conflict while performing engineering work. Additionally, ethical judgments may also result from intuitions, closer to emotions than reflective thought, and behaviors can be affected by unconscious, social, and environmental factors. To address these issues, this paper surveys work in engineering ethics education and assessment to date, shortcomings within these approaches, and how insights and methods from moral and cultural psychology could be used to improve engineering ethics education and assessment, making them more culturally responsive and psychologically realist at the same time.
... Good self-understanding is a common, if often implicit, assumption of supporters of liberalism and capitalism. However, there is a great deal of empirical evidence from psychology that most people are bad at introspection (Bayne and Spencer 2010; Schwitzgebel 2008) and that cognitive biases are widespread (MacLean and Dror 2016; Schwitzgebel and Cushman 2015). Being bad at introspection means that people typically have only limited awareness of their mental states, their beliefs, and their motives. ...
... Are the former immune to the biases that influence the latter? Although research is still ongoing, an impressive body of evidence suggests that philosophers' and lay people's concepts and judgments are more similar than one might have thought (Schwitzgebel & Cushman, 2015). ...
... As far as we can see, across all of their publications on the topic, Kumar and Campbell cite a total of three papers (Petrinovich & O'Neill, 1996; Schwitzgebel & Cushman, 2012; 2015) to show that 'people actually change their moral opinions in response to consistency reasoning' (2022, 119). Each of these papers investigated the influence of order effects on people's moral judgments. ...
Article
Full-text available
An important question about moral progress is what causes it. One of the most popular proposed mechanisms is moral reasoning: moral progress often happens because lots of people reason their way to improved moral beliefs. Authors who defend moral reasoning as a cause of moral progress have relied on two broad lines of argument: the general and the specific line. The general line presents evidence that moral reasoning is in general a powerful mechanism of moral belief change, while the specific line tries to establish that moral reasoning can explain specific historical examples of moral progress. In this paper, we examine these lines in detail, using Kumar and Campbell’s (2022, A Better Ape: The Evolution of the Moral Mind and How It Made Us Human. Oxford University Press) model of rational moral progress to sharpen our focus. For each line, we explain the empirical assumptions it makes; we then argue that the available evidence supports none of these assumptions. We conclude that at this point, we have no idea if moral reasoning causes moral progress.
... SETs, notably, attempt to enhance ethical reasoning more effectively than traditional methods of teaching ethics in trainings and courses. As research has shown, even academics highly trained in philosophy can succumb to the same biases in ethical reasoning that most others do [29]. This means that a model relying solely on the efficacy of ethics training and expertise may still be deficient and can be augmented through structured ways of applying ethical judgments. ...
Article
Full-text available
Despite many experts’ best intentions, technology ethics continues to embody a commonly used definition of insanity—by repeatedly trying to achieve ethical outcomes through the same methods that don’t work. One of the most intractable problems in technology ethics is how to translate ethical principles into actual practice. This challenge persists for many reasons including a gap between theoretical and technical language, a lack of enforceable mechanisms, misaligned incentives, and others that this paper will outline. With popular and often contentious fields like artificial intelligence (AI), a slew of technical and functional (used here to mean primarily “non-technical”) approaches are continually developed by diverse organizations to bridge the theoretical-practical divide. Technical approaches and coding interventions are useful for programmers and developers, but often lack contextually sensitive thinking that incorporates project teams or a wider group of stakeholders. Contrarily, functional approaches tend to be too conceptual and immaterial, lacking actionable steps for implementation into product development processes. Despite best efforts, many current approaches are therefore impractical or challenging to use in any meaningful way. After surveying a variety of different fields for current approaches to technology ethics, I propose a set of originally developed methods called Structured Ethical Techniques (SETs) that pull from best practices to build out a middle ground between functional and technical methods. SETs provide a way to add deliberative ethics to any technology’s development while acknowledging the business realities that often curb ethical deliberation, such as efficiency concerns, pressures to innovate, internal resource limitations, and more.
... While future work ought to drill down on this specific hypothesis, we chose not to do so in this paper for two different reasons. First, it is not clear whether there's expertise in moral reasoning [38,69]. Second, while there is evidence of expertise in legal decision-making [35,42], several of the field's most prominent studies have been conducted only with laypeople, [e.g. ...
Preprint
Full-text available
Large language models have been used as the foundation of highly sophisticated artificial intelligences, capable of delivering human-like responses to probes about legal and moral issues. However, these models are unreliable guides to their own inner workings, and even the engineering teams behind their creation are unable to explain exactly how they came to develop all of the capabilities they currently have. The emerging field of machine psychology seeks to gain insight into the processes and concepts that these models possess. In this paper, we employ the methods of psychology to probe into GPT-4's moral and legal reasoning. More specifically, we investigate the similarities and differences between GPT-4 and humans when it comes to intentionality ascriptions, judgments about causation, the morality of deception, moral foundations, the impact of moral luck on legal judgments, the concept of consent, and rule violation judgments. We find high correlations between human and AI responses, but also several significant systematic differences between them. We conclude with a discussion of the philosophical implications of our findings.
... In studies with their participation, order effects and framing effects were observed (cf. Schwitzgebel, Cushman 2012, 2015), which should have been impossible, or at least much more difficult, had they guessed the purpose of the study. There remains, however, the issue of the risk posed by respondents' very attempts to guess the purpose of the study. ...
Article
Full-text available
Surveys are the most widely used research tool in experimental philosophy. In this paper, I analyze two types of criticism of the questionnaire method put forward in the literature. The first type is metaphilosophical. It asserts that the surveys used by experimental philosophers are based on a flawed and unproductive method of cases. The second type is methodological. It argues that currently used questionnaires are unfit to measure philosophically relevant phenomena. I show that these objections can be met by a) improving questionnaires used in research practice and b) expanding the methodological repertoire of experimental philosophy.
... First, it is unclear whether judgments occurring in "cool" moments in a laboratory will match judgments in an actual moral dilemma. For example, the presentation mode is known to affect moral judgments [22][23][24]. Second, even if thought experiments reflect moral judgments, they may not reflect moral behavior, as the two acts can come apart [25][26][27][28]. ...
Article
Full-text available
Hypothetical thought experiments allow researchers to gain insights into widespread moral intuitions and provide opportunities for individuals to explore their moral commitments. Previous thought experiment studies in virtual reality (VR) required participants to come to an on-site laboratory, which possibly restricted the study population, introduced an observer effect, and made internal reflection on the participants’ part more difficult. These shortcomings are particularly crucial today, as results from such studies are increasingly impacting the development of artificial intelligence systems, self-driving cars, and other technologies. This paper explores the viability of deploying thought experiments in commercially available in-home VR headsets. We conducted a study that presented the trolley problem, a life-and-death moral dilemma, through SideQuestVR, a third-party website and community that facilitates loading applications onto Oculus headsets. Thirty-three individuals were presented with one of two dilemmas: (1) a decision to save five lives at the cost of one life by pulling a switch and (2) a decision to save five lives at the cost of one life by pushing a person onto train tracks. The results were consistent with those of previous VR studies, suggesting that a “VR-at-a-distance” approach to thought experiments has a promising future while indicating lessons for future research.
... One might consider using professional moral philosophers' opinions as training data for ML-based VA (Anderson & Anderson, 2011), but recent research shows both expert judgment generally and ethical expert judgment in particular to be frequently biased. Professional ethicists' moral intuitions and specific judgements turn out to be as vulnerable to biases or irrelevant factors as those of lay persons (Schwitzgebel & Cushman, 2012; Wiegmann, Horvath, & Meyer, 2020; Tobia, Buckwalter, & Stich, 2013; Schwitzgebel & Cushman, 2015; Egler & Ross, 2020). Because any attempt to use the ML-based VA system to generate the principles would be viciously circular, ML-based systems stand in need of independently defensible principles in order to evaluate even the training data to be used. ...
Article
Full-text available
An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the “naturalistic fallacy,” which is an attempt to derive “ought” from “is,” and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular “test propositions” for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles. This article is part of the special track on AI and Society.
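To give a concrete, if schematic, sense of what a deontological principle and its "test proposition" might look like in quantified modal logic, here is a rough sketch of a Kantian-style generalization test. The symbols C, Do, and G are invented for illustration; this is not the authors' actual formalization.

% Schematic only; symbols are illustrative, not taken from the paper.
% C(x): agent x is in the circumstances for which action plan a is adopted
% Do_x(a): agent x carries out action plan a
% G(a): the goals of plan a are achieved
\[
  \mathrm{Perm}(a) \;\rightarrow\; \Diamond\,\Big( \forall x\, \big( C(x) \rightarrow \mathrm{Do}_x(a) \big) \wedge G(a) \Big)
\]
% Reading: plan a is permissible only if it is possible that every agent in the same
% circumstances acts on a while a's goals are still achieved. The embedded empirical
% claim, that universal adoption of a would not defeat its goals, is the kind of
% "test proposition" that would then be checked against observation (empirical VA).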
... We should not be surprised that the intuitions that regular people have about quantum mechanics are easily manipulated; what matters are the intuitions of professional physicists. In reply, experimental philosophers have produced evidence that (they claim) shows that professional philosophers have corruptible and unstable intuitions too, even when it comes to philosophical topics (e.g., Schwitzgebel and Cushman 2015; Tobia et al. 2013; Schulz et al. 2011). ...
Chapter
Full-text available
... We should not be surprised that the intuitions that regular people have about quantum mechanics are easily manipulated; what matters are the intuitions of professional physicists. In reply, experimental philosophers have produced evidence that (they claim) shows that professional philosophers have corruptible and unstable intuitions too, even when it comes to philosophical topics (e.g., Schwitzgebel and Cushman 2015; Tobia et al. 2013; Schulz et al. 2011). ...
Chapter
Talent development is an approach to education for advanced achievement. Traditionally, many models of advanced education focus on the identification of young people who are already performing at high levels, interpreting that performance as evidence that the student needs additional advanced services. In contrast, talent development focuses on providing opportunity to young people who may have the potential to perform at those advanced levels. This talent development approach, with its focus on potential rather than performance, tends to incorporate inclusive, developmental, and sociocultural perspectives that emphasize striving toward the possible over living with current realities.

The theory of mind (ToM) refers to how people understand their own thoughts and feelings and those of other beings. It is a crucial cognitive mechanism for social interactions and communication. It helps us to predict, to explain, and to manipulate behaviors or mental states. Moreover, this skill is shared by almost all human beings beyond early childhood. The literature presents different explicit false-belief tasks as a means of investigating ToM in children (e.g., one of the most famous is known as the Sally-Anne task). Although children younger than 4 years usually fail in these explicit tasks, it cannot be excluded that some less complex forms of understanding mental states develop earlier. So, in order to investigate the precursors that anticipate the emergence of a more mature representational system, many recent studies on infants’ beliefs have demonstrated, in the last decade, a very early sensitivity specifically to the false beliefs of others by using implicit looking-time tasks. This entry starts with the definition of the theory of mind and its history, before moving on to summarize developmental research in this area. Finally, it focuses on the relation between theory of mind and the possible, with some reflections on how an increasing consciousness of the variety of situations that the possible presents to us could allow people to choose the best alternative for themselves and others.

The present chapter aims to describe tolerance of ambiguity (TA) to make it more tangible, to disambiguate without reducing, and indeed expand the concept. The concept depends on and is shaped by perspectives related to cultural contexts, points to how humans relate to an immediate or distant future, and has emergent properties. TA seems to share meanings and properties with the concept of “Possibilities.” TA might in fact encompass both “Possibilities” as well as “Uncertainties,” representing two sides of the same coin. Broadly, all these concepts are found in the spaces between nature and humans’ relation with them. Above all, the TA concept is amorphous, abstract, and covers relationships with other concepts in a wide array of domains. Aspects are discussed in relation to the existing literature, time issues, levels and lenses of research, and an application in the domain of creativity. Throughout the chapter, possibilities are suggested for further directions of research. Central to understanding and measuring TA is its cultural embeddedness, including that of the researcher her/himself.

Transdisciplinarity is a practice that transcends disciplines and fields, extending the notion of what is known and knowable and what is possible to discover and create across, between, and beyond all our disciplines. As such, it is a practice that takes place in the emergent spaces between disciplines, which some writers believe is the future of discovery (Johansson F, The Medici effect. Harvard Business School Press, 2004). It is being hailed as a new way to tackle our most complex, networked challenges, yet it is also one of the most ancient ways of seeing the world as a connected whole, as evidenced by Indigenous cultures that do not separate their ethics from their geography, or their religion from their science (Yunkaporta T, Sand talk, how indigenous thinking can save the world. Text, Melbourne, 2019). It excludes no discipline, field, stakeholder, or country and is therefore described as an attempt at a unified field of knowledge – an inherently spiritual notion for some (Nuñez MC, Transdisciplinary Journal of Engineering & Science 2, 2011): “The keystone of transdisciplinarity is the semantic and practical unification of the meanings that traverse and lie beyond different disciplines” (Nicolescu B, Manifesto of Transdisciplinarity. Suny Press, 2002).

This entry presents the construct of transformational creativity – creativity that makes a positive, meaningful, and potentially enduring difference to the world. People who are transformationally creative seek to make the world a better place. I first discuss creativity and then positive creativity, reviewing their strengths and drawbacks. Then I discuss three types of transformational creativity – fully transformational creativity, self-transformational creativity, and other-transformational creativity. I further discuss pseudotransformational creativity – creativity that is presented with the pretense of making the world a better place but that really is intended only to enhance the prospects of the creators. There are three types of pseudotransformational creativity – fully pseudotransformational creativity, self-destructive pseudotransformational creativity, and other-destructive pseudotransformational creativity. It has been a mistake, I believe, merely to teach for creativity, because so much of creativity has been put to bad uses. We should instead focus on teaching for the transformational creativity that makes the world better, not worse.

Personal change is generally considered a gradual and linear process, which occurs either as a result of maturation over the lifespan or as the result of therapeutic intervention. However, research from different disciplines – including anthropology, philosophy, and psychology – suggests the existence of a second type of change – transformative or transformational – which involves a radical and long-lasting shift in the individual’s core beliefs, values, and attitudes. In this contribution, I will review key definitions and conceptualizations of transformative experience, discuss the scientific and practical relevance of this construct, and suggest some future directions for research.
... We should not be surprised that the intuitions that regular people have about quantum mechanics are easily manipulated; what matters are the intuitions of professional physicists. In reply, experimental philosophers have produced evidence that (they claim) shows that professional philosophers have corruptible and unstable intuitions too, even when it comes to philosophical topics (e.g., Schwitzgebel and Cushman 2015; Tobia et al. 2013; Schulz et al. 2011). ...
Chapter
The theory of mind (ToM) refers to how people understand their own thoughts and feelings and those of other beings. It is a crucial cognitive mechanism for social interactions and communication. It helps us to predict, to explain, and to manipulate behaviors or mental states. Moreover, this skill is shared by almost all human beings beyond early childhood. The literature presents different explicit false-belief tasks as a means of investigating ToM in children (e.g., one of the most famous is known as the Sally-Anne task). Although children younger than 4 years usually fail in these explicit tasks, it cannot be excluded that some less complex forms of understanding mental states develop earlier. So, in order to investigate the precursors that anticipate the emergence of a more mature representational system, many recent studies on infants’ beliefs have demonstrated, in the last decade, a very early sensitivity specifically to the false beliefs of others by using implicit looking-time tasks. This entry starts with the definition of the theory of mind and its history, before moving on to summarize developmental research in this area. Finally, it focuses on the relation between theory of mind and the possible with some reflections on how an increasing consciousness of the variety of situations that the possible presents to us could allow people to choose the best alternative for themselves and others.
... For example, researchers have shown well-replicated effects of culture or heritable personality traits on philosophical case judgments in philosophy of language and action theory (Beebe & Undercoffer, 2016; Feltz & Cokely, 2012; Machery et al., 2004). Researchers have shown that presentation, framing, and order effects persist in foundational case judgments in ethics, such as trolley problem and Asian disease scenarios, with or without professional philosophical training (Liao et al., 2012; Schwitzgebel & Cushman, 2015; Wiegmann et al., 2012). These things are important to consider and can be difficult to detect when conducting thought experiments. ...
Article
Full-text available
The replication crisis is perceived by many as one of the most significant threats to the reliability of research. Though reporting of the crisis has emphasized social science, all signs indicate that it extends to many other fields. This paper investigates the possibility that the crisis and related challenges to conducting research also extend to philosophy. According to one possibility, philosophy inherits a crisis similar to the one in science because philosophers rely on unreplicated or unreplicable findings from science when conducting philosophical research. According to another possibility, the crisis likely extends to philosophy because philosophers engage in similar research practices and face similar structural issues when conducting research that have been implicated by the crisis in science. Proposals for improving philosophical research are offered in light of these possibilities.
Chapter
The Cambridge Handbook of Moral Psychology is an essential guide to the study of moral cognition and behavior. Originating as a philosophical exploration of values and virtues, moral psychology has evolved into a robust empirical science intersecting psychology, philosophy, anthropology, sociology, and neuroscience. Contributors to this interdisciplinary handbook explore a diverse set of topics, including moral judgment and decision making, altruism and empathy, and blame and punishment. Tailored for graduate students and researchers across psychology, philosophy, anthropology, neuroscience, political science, and economics, it offers a comprehensive survey of the latest research in moral psychology, illuminating both foundational concepts and cutting-edge developments.
Article
Full-text available
Folk psychology’s usefulness extends beyond its role in explaining and predicting behavior, that is, beyond the intentional stance. In this article, I critically examine the concept of phenomenal stance. According to this idea, attributions of phenomenal mental states impact laypeople’s perception of moral patiency. The more phenomenal states we ascribe to others, the more we care about their well-being. The perception of moral patients—those affected by moral actions—is hypothesized to diverge from the perception of moral agents, those who perform moral actions. Despite its appeal, especially considering its exploration of the established relationship between folk psychology and moral cognition, the idea of the phenomenal stance faces significant challenges. It relies on laypeople recognizing the phenomenality of experience, yet experimental philosophy of consciousness suggests that there is no folk concept of phenomenal consciousness. Moreover, proponents of the phenomenal stance often conflate phenomenal states with emotional states despite the existence of both nonemotional conscious states and, arguably, nonconscious emotional states. Additionally, attributions of conscious mental states impact the perception of both moral agency and patiency. I report on experimental results indicating that some of these attributions lower the perceived moral patiency. Besides providing reasons to reject the idea of the phenomenal stance, I argue that the perception of moral patiency is guided by attributions of affective states (affects, emotions, moods). I call such attributions the affective stance and explore this concept’s relationship with empathy and other psychological concepts.
Book
This Element engages with the epistemic significance of disagreement, focusing on its skeptical implications. It examines various types of disagreement-motivated skepticism in ancient philosophy, ethics, philosophy of religion, and general epistemology. In each case, it favors suspension of judgment as the seemingly appropriate response to the realization of disagreement. One main line of argument pursued in the Element is that, since in real-life disputes we have limited or inaccurate information about both our own epistemic standing and the epistemic standing of our dissenters, personal information and self-trust can rarely function as symmetry breakers in favor of our own views.
Article
Full-text available
Moral progress is often modeled as an increase in moral knowledge and understanding, with achievements in moral reasoning seen as key drivers of progressive moral change. Contemporary discussion recognizes two (rival) accounts: knowledge-based and understanding-based theories of moral progress, with the latter recently argued to be superior (Severini 2021). In this article, we challenge the alleged superiority of understanding-based accounts by conducting a comparative analysis of the theoretical advantages and disadvantages of both approaches. We assess them based on their potential to meet the following criteria: (i) moral progress must be possible despite evolutionary and epistemic constraints on moral reasoning; (ii) it should be epistemically achievable for ordinary moral agents; and (iii) it should be explainable via doxastic change. Our analysis suggests that both accounts are roughly equally plausible, but knowledge-based accounts are slightly less demanding and more effective at explaining doxastic change. Therefore, contrary to the prevailing view, we find knowledge-based accounts of moral progress more promising.
Article
Full-text available
The concept of a good life is usually assumed by philosophers to be equivalent to that of well-being, or perhaps of a morally good life, and hence has received little attention as a potentially distinct subject matter. In a series of experiments participants were presented with vignettes involving socially sanctioned wrongdoing toward outgroup members. Findings indicated that, for a large majority, judgments of bad character strongly reduce ascriptions of the good life, while having no impact at all on ascriptions of happiness or well-being. Taken together with earlier findings these results suggest that the lay concept of a good life is clearly distinct from those of happiness, well-being, or morality, likely encompassing both morality and well-being, and perhaps other values as well: whatever matters in a person’s life. Importantly, morality appears not to play a fundamental role in either happiness or well-being among the folk.
Article
Many philosophers think that descriptive uncertainty is relevant to what we subjectively ought to do. This leads to a further question: is what we subjectively ought to do sensitive to our moral uncertainty as well? Includers say yes—what we subjectively ought to do is sensitive to both descriptive uncertainty and moral uncertainty. Excluders say no—only descriptive uncertainty matters to what we subjectively ought to do (i.e., moral uncertainty is irrelevant). Excluders argue that common motivations for the subjective ought only give us reason to think that descriptive uncertainty matters. This paper focuses on one motivation: accessibility. Excluders argue that accessibility does not motivate the Includers’ view because moral truths are always accessible – unlike descriptive facts which are not always accessible. My goal is to defend the Includers’ view by arguing that moral truths are not always accessible in the relevant sense.
Article
Full-text available
Despite a voluminous literature on happiness and well‐being, debates have been stunted by persistent dissensus on what exactly the subject matter is. Commentators frequently appeal to intuitions about the nature of happiness or well‐being, raising the question of how representative those intuitions are. In a series of studies, we examined lay intuitions involving happiness‐ and well‐being‐related terms to assess their sensitivity to internal (psychological) versus external conditions. We found that all terms, including ‘happy’, ‘doing well’ and ‘good life’, were far more sensitive to internal than external conditions, suggesting that for laypersons, mental states are the most important part of happiness and well‐being. But several terms, including ‘doing well’, ‘good life’ and ‘enviable life’ were substantially more sensitive to external conditions than others, such as ‘happy’, consistent with dominant philosophical views of well‐being. Interestingly, the expression ‘happy’ was completely insensitive to external conditions for about two thirds of our participants, suggesting a purely psychological concept among most individuals. Overall, our findings suggest that lay thinking in this domain divides between two concepts, or families thereof: a purely psychological notion of being happy, and one or more concepts equivalent to, or encompassing, the philosophical concept of well‐being. In addition, being happy is dominantly regarded as just one element of well‐being. These findings have considerable import for philosophical debates, empirical research and public policy.
Article
Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence. Yet there remains a gap in how to assess LLMs' conformity to important human values. In this paper, we investigate whether state-of-the-art LLMs, GPT-4 and Claude 2.1 (Gemini Pro and LLAMA 2 did not generate valid results), are moral hypocrites. We employ two research instruments based on the Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation. We characterise conflicts in values between these different abstractions of moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but they displayed contradictory and hypocritical behaviour when we compared the abstract values present in the MFQ to the evaluation of concrete moral violations of the MFV.
Article
Full-text available
Greene's influential dual-process model of moral cognition (mDPM) proposes that when people engage in Type 2 processing, they tend to make consequentialist moral judgments. One important source of empirical support for this claim comes from studies that ask participants to make moral judgments while experimentally manipulating Type 2 processing. This paper presents a meta-analysis of the published psychological literature on the effect of four standard cognitive-processing manipulations (cognitive load; ego depletion; induction; time restriction) on moral judgments about sacrificial moral dilemmas [n = 44; k = 68; total N = 14,003; M(N) = 194.5]. The overall pooled effect was in the direction predicted by the mDPM, but did not reach statistical significance. Restricting the dataset to effect sizes from (high-conflict) personal sacrificial dilemmas (a type of sacrificial dilemma that is often argued to be best suited for tests of the mDPM) also did not yield a significant pooled effect. The same was true for a meta-analysis of the subset of studies that allowed for analysis using the process dissociation approach [n = 8; k = 12; total N = 2,577; M(N) = 214.8]. I argue that these results undermine one important line of evidence for the mDPM and discuss a series of potential objections against this conclusion.
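For readers unfamiliar with how a "pooled effect" in this kind of meta-analysis is obtained, the sketch below shows a generic random-effects (DerSimonian-Laird) pooling computation in Python. It is only an illustration of the standard technique, not the author's actual analysis code, and the effect sizes and variances in the toy example are invented.

import numpy as np

def random_effects_pooled(effects, variances):
    # DerSimonian-Laird random-effects pooled effect size.
    # effects: per-study effect sizes (e.g., Hedges' g); variances: their sampling variances.
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                                   # inverse-variance (fixed-effect) weights
    theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)   # fixed-effect estimate
    Q = np.sum(w_fixed * (effects - theta_fixed) ** 2)          # heterogeneity statistic
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)               # between-study variance estimate
    w_random = 1.0 / (variances + tau2)                         # random-effects weights
    pooled = np.sum(w_random * effects) / np.sum(w_random)      # pooled effect
    se = np.sqrt(1.0 / np.sum(w_random))                        # its standard error
    return pooled, se, tau2

# Toy example with invented study-level effects and variances.
pooled, se, tau2 = random_effects_pooled([0.10, -0.05, 0.20, 0.02], [0.02, 0.03, 0.01, 0.04])
print(f"pooled g = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")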
Article
Full-text available
This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt to reorient the discussion about moral expertise.
Article
Full-text available
In defense of the psychological approach to judicial intuition. A polemic. Contemporary research shows that judges, when making legal decisions, are not always guided solely by deliberative legal criteria. The final shape of a judge's ruling may be influenced by many unconscious factors, both individual and environmental in character: emotional, psychological, institutional, political, or personal. One of the most controversial concepts in legal scholarship concerning judicial decision-making is judicial intuition, which has become the subject of a growing number of analyses in the legal domain. One example is Anna Tomza's article "Intuicja sędziowska w aktualnym dyskursie amerykańskiej jurysprudencji - przegląd stanowisk" ("Judicial intuition in the current discourse of American jurisprudence: a review of positions"). In it, the author presents an original critique of the psychological approach to judges' intuition, emphasizing instead the need to seek a new understanding of intuition, in particular one that takes the logical perspective into account. This article is a response to Anna Tomza's paper. We point out some problems arising from the theses she proposes and present a different perspective that counterbalances her main claims, such as the departure of American legal scholarship from the psychological perspective, the pointlessness of formulating psychological conceptions of legal intuition, the irrationality of intuition, and the necessity, in every case, of examining definitions and understandings of intuition other than the legal and psychological ones. Keywords: legal intuition, hunch, legal realism, legal naturalism. This work was carried out under research projects no. 2018/29/N/HS5/01324 (TZ) and 2017/25/N/HS5/00944 (MP), funded by the National Science Centre, Poland. We thank Piotr Bystranowski, Adam Dyrda, Bartosz Janik, and Jakub Kret for their valuable comments on the text.
Article
Full-text available
Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth telling, positive and negative reciprocity, and deontology and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favors a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this article highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.
Article
While standard forms of discrimination are widely considered morally wrong, philosophers disagree about what makes them so. Two accounts have risen to prominence in this debate: One stressing how wrongful discrimination disrespects the discriminatee, the other how the harms involved make discrimination wrong. While these accounts are based on carefully constructed thought experiments, proponents of both sides see their positions as in line with and, in part, supported by the folk theory of the moral wrongness of discrimination. This article presents a vignette-based experiment to test empirically what, in the eyes of “folks”, makes discrimination wrong. Interestingly, we find that, according to folks, both disrespect and harm make discrimination wrong. Our findings offer some support for a pluralistic account of the wrongness of discrimination over both monist respect-based and monist harm-based accounts.
Article
Full-text available
According to the expertise defense, practitioners of the method of cases need not worry about findings that ordinary people’s philosophical intuitions depend on epistemically irrelevant factors. This is because, honed by years of training, the intuitions of professional philosophers likely surpass those of the folk. To investigate this, we conducted a controlled longitudinal study of a broad range of intuitions in undergraduate students of philosophy (n = 226), whose case judgments we sampled after each semester throughout their studies. Under the assumption, made by proponents of the expertise defense, that formal training in philosophy gives rise to the kind of expertise that accounts for changes in the students’ responses to philosophically puzzling cases, our data suggest that the acquired cognitive skills only affect single case judgments at a time. There does not seem to exist either a general expertise that informs case judgments in all areas of philosophy, or an expertise specific to particular subfields. In fact, we argue that available evidence, including the results of cross-sectional research, is best explained in terms of differences in adopted beliefs about specific cases, rather than acquired cognitive skills. We also investigated whether individuals who choose to study philosophy have atypical intuitions compared to the general population and whether students whose intuitions are at odds with textbook consensus are more likely than others to drop out of the philosophy program.
Article
Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.
Article
Full-text available
What are the philosophical views of professional philosophers, and how do these views change over time? The 2020 PhilPapers Survey surveyed around 2000 philosophers on 100 philosophical questions. The results provide a snapshot of the state of some central debates in philosophy, reveal correlations and demographic effects involving philosophers' views, and reveal some changes in philosophers' views over the last decade.
Chapter
The present paper is concerned with the question of whether scientific conceptual analysis provides better justification than armchair conceptual analysis. In order to address this question, I provide exact definitions of armchair conceptual analysis and scientific conceptual analysis. Furthermore, I use a certain criticism of armchair conceptual analysis, raised by experimental philosophers, as a basis for an argument to the conclusion that scientific conceptual analysis provides better justification than armchair conceptual analysis, and consider the expertise defence as a possible response to this argument. The argument is based on the idea that the concept of a common usage implies a certain degree of uniformity among different speakers, and can be called ‘argument from uniformity of agreement’. The expertise defence can be understood as an attack of one of the premises of this argument. Finally, I present and discuss the results from an empirical study in which scientific conceptual analysis was used in order to gather evidence as regards the soundness of the argument from uniformity of agreement and the expertise defence.
Article
We report four experiments that investigate explicit reasoning and moral judgments. In each experiment, some subjects responded to the "footbridge" version of the trolley problem (which elicits stronger moral intuitions), whereas others responded to the "switch" version (which elicits weaker moral intuitions). Experiments 1-2 crossed the type of trolley problem with four reasoning conditions: control, counterattitudinal, pro-attitudinal, and mixed reasoning (both types of reasoning). Experiments 3-4 examine whether moral judgments vary based on (a) when reasoners engage in counterattitudinal reasoning, (b) when they make the moral judgment, and (c) by the type of moral dilemma. These two experiments comprised five conditions: control (judgment only), delay-only (2-minute wait then judgment), reasoning-only (reasoning then judgment), reasoning-delay (reasoning, then 2-minute delay, then judgment), and delayed-reasoning (2-minute delay, then reasoning, then judgment). These conditions were crossed with the type of trolley problem. We find that engaging in some form of counterattitudinal reasoning led to less typical judgments (regardless of when it occurs), but this effect was mostly restricted to the switch version of the dilemma (and was strongest in the reasoning-delay conditions). Furthermore, neither pro-attitudinal reasoning nor delayed judgments on their own impacted subjects' judgments. Reasoners therefore seem open to modifying their moral judgments when they consider opposing perspectives, but might be less likely to do so for dilemmas that elicit relatively strong moral intuitions.
Article
Full-text available
This article presents a theory of intuitive skill in terms of three constitutive elements: getting things right intuitively, not getting things wrong intuitively, and sceptical ability. The theory draws on work from a range of psychological approaches to intuition and expertise in various domains, including arts, business, science, and sport. It provides a general framework that will help to further integrate research on these topics, for example building bridges between practical and theoretical domains or between such apparently conflicting methodologies as a heuristics and biases approach on the one hand and one based on naturalistic decision-making on the other. In addition, the theory provides a clearer and more precise account of relevant concepts, which will help to inspire new directions for future research. Intuitive skill is defined as a high level of intuitive ability, that is, the ability to make good use of intuition; specifically, a high level of ability at either getting things right intuitively, not getting things wrong intuitively, or sceptical ability, where the latter is the ability to detect instances of getting things wrong intuitively so as to avoid forming incorrect intuitive judgements, which may itself be partly intuitive.
Article
Full-text available
In this paper, we report the results of three high-powered replication studies in experimental philosophy, which bear on an alleged instability of folk philosophical intuitions: the purported susceptibility of epistemic intuitions about the Truetemp case (Lehrer, Theory of knowledge. Westview Press, Boulder, 1990) to order effects. Evidence for this susceptibility was first reported by Swain et al. (Philos Phenomenol Res 76(1):138–155, 2008); further evidence was then found in two studies by Wright (Cognition 115(3):491–503. https://doi.org/10.1016/j.cognition.2010.02.003, 2010) and Weinberg et al. (Monist 95(2):200–222, 2012). These empirical results have been quite influential in the recent metaphilosophical debate about the method of cases. However, none of Swain et al.’s (2008) predictions concerning order effects with Truetemp cases could be consistently and robustly replicated in our three experiments, and it is thus at best unclear whether Truetemp intuitions are in fact unstable. So, if proponents of the negative program in experimental philosophy still want to use order effects to challenge the reliability of philosophical case judgments, they would be well advised to look elsewhere instead. In any case, given the more robust empirical evidence that we present in this paper, the metaphilosophical flurry created by Swain et al. (2008) and Wright’s (2010) influential studies looks like mere alarmism in hindsight.
Conference Paper
In the past couple of years, more and more companies have been trying to integrate varying degrees of ethical concern about the development and deployment of Artificial Intelligence (AI) systems into their ethical infrastructure. As a result, it would not be an exaggeration to say that we have witnessed an explosion of ethics codes concerning AI. The main purpose of this paper is to explore how business organizations have dealt with such concerns. In particular, we first analyze whether companies have a genuine interest in AI ethics or whether this is merely a case of ethics washing, which would make the whole enterprise virtually useless. We conclude by advancing an agenda for how AI ethical regulations could be empirically assessed, highlighting a few of the downsides and explaining how different experimental methods could help close the empirical knowledge gap.
Article
Full-text available
The development of reasoning skills is often regarded as a central goal of ethics and philosophy classes in school education. In light of recent studies from the field of moral psychology, however, it could be objected that the promotion of such skills might fail to meet another important objective, namely the moral education of students. In this paper, I will argue against such pessimism by suggesting that the fostering of reasoning skills can still contribute to the aims of moral education. To do so, I will engage with the concept of moral education, point out different ways in which reasoning skills play an essential role in it, and support these considerations by appealing to further empirical studies. My conclusion will be that the promotion of ethical reasoning skills fulfils two important aims of moral education: First, it enables students to critically reflect on their ethical beliefs. Second, it allows them to explore ethical questions in a joint conversation with others. Lastly, I will refer to education in the field of sustainable development in order to exemplify the importance of these abilities.
Article
We study a decision-framing design problem: a principal faces an agent with frame-dependent preferences and designs an extensive form with a frame at each stage. This allows the principal to circumvent incentive compatibility constraints by inducing dynamically inconsistent choices of the sophisticated agent. We show that a vector of contracts can be implemented if and only if it can be implemented using a canonical extensive form, which has a simple high-low-high structure using only three stages and the two highest frames, and employs unchosen decoy contracts to deter deviations. We then turn to the study of optimal contracts in the context of the classic monopolistic screening problem and establish the existence of a canonical optimal mechanism, even though our implementability result does not directly apply. In the presence of naive types, the principal can perfectly screen by cognitive type and extract full surplus from naifs.
Article
A recent empirical study has argued that experts in the ethics or the law of war cannot reach reasonable convergence on dilemmas regarding the number of civilian casualties who may be killed as a side effect of attacks on legitimate military targets. This article explores the philosophical implications of that study. We argue that the wide disagreement between experts on what in bello proportionality means in practice casts serious doubt on their ability to provide practical real-life guidance. We then suggest viewing in bello proportionality through the prism of virtue ethics.
Article
Full-text available
Many philosophers hold that experts’ semantic intuitions are more reliable and provide better evidence than lay people’s intuitions – a thesis commonly called “the Expertise Defense.” Focusing on the intuitions about the reference of proper names, this article critically assesses the Expertise Defense.
Article
Full-text available
The “expertise defense” is the claim that philosophers have special expertise that allows them to resist the biases suggested by the findings of experimental philosophers. Typically, this defense is backed up by an analogy with expertise in science or other academic fields. Recently, however, studies have begun to suggest that philosophers' intuitions may be just as subject to inappropriate variation as those of the folk. Should we conclude that the expertise defense has been debunked? I'll argue that the analogy with science still motivates a default assumption of philosophical expertise; however, the expertise so motivated is not expertise in intuition, and its existence would not suffice to answer the experimentalist challenge. I'll also suggest that there are deep parallels between the current methodological crisis in philosophy and the decline of introspection-based methods in psychology in the early twentieth century. The comparison can give us insight into the possible future evolution of philosophical methodology.
Article
Full-text available
Research article by Tobia, Kevin; Chapman, G.; and Stich, S. Publication date: January 1, 2013. Subjects: Psychology, Political Science. (Abstract not available.)
Article
Full-text available
Five university-based research groups competed to recruit forecasters, elicit their predictions, and aggregate those predictions to assign the most accurate probabilities to events in a 2-year geopolitical forecasting tournament. Our group tested and found support for three psychological drivers of accuracy: training, teaming, and tracking. Probability training corrected cognitive biases, encouraged forecasters to use reference classes, and provided forecasters with heuristics, such as averaging when multiple estimates were available. Teaming allowed forecasters to share information and discuss the rationales behind their beliefs. Tracking placed the highest performers (top 2% from Year 1) in elite teams that worked together. Results showed that probability training, team collaboration, and tracking improved both calibration and resolution. Forecasting is often viewed as a statistical problem, but forecasts can be improved with behavioral interventions. Training, teaming, and tracking are psychological interventions that dramatically increased the accuracy of forecasts. Statistical algorithms (reported elsewhere) improved the accuracy of the aggregation. Putting both statistics and psychology to work produced the best forecasts 2 years in a row.
Article
Full-text available
Intelligence agents make risky decisions routinely, with serious consequences for national security. Although common sense and most theories imply that experienced intelligence professionals should be less prone to irrational inconsistencies than college students, we show the opposite. Moreover, the growth of experience-based intuition predicts this developmental reversal. We presented intelligence agents, college students, and postcollege adults with 30 risky-choice problems in gain and loss frames and then compared the three groups' decisions. The agents not only exhibited larger framing biases than the students, but also were more confident in their decisions. The postcollege adults (who were selected to be similar to the students) occupied an interesting middle ground, being generally as biased as the students (sometimes more biased) but less biased than the agents. An experimental manipulation testing an explanation for these effects, derived from fuzzy-trace theory, made the students look as biased as the agents. These results show that, although framing biases are irrational (because equivalent outcomes are treated differently), they are the ironical output of cognitively advanced mechanisms of meaning making.
Article
Full-text available
Framing effects have long been viewed as compelling evidence of irrationality in human decision making, yet that view rests on the questionable assumption that numeric quantifiers used to convey the expected values of choice options are uniformly interpreted as exact values. Two experiments show that when the exactness of such quantifiers is made explicit by the experimenter, framing effects vanish. However, when the same quantifiers are given a lower bound (at least) meaning, the typical framing effect is found. A third experiment confirmed that most people spontaneously interpret the quantifiers in standard framing tests as lower bounded and that their interpretations strongly moderate the framing effect. Notably, in each experiment, a significant majority of participants made rational choices, either choosing the option that maximized expected value (i.e., lives saved) or choosing consistently across frames when the options were of equal expected value.
Article
Full-text available
In recent years, a number of philosophers have conducted empirical studies that survey people's intuitions about various subject matters in philosophy. Some have found that intuitions vary according to seemingly irrelevant facts: facts about who is considering the hypothetical case, the presence or absence of certain kinds of content, or the context in which the hypothetical case is being considered. Our research applies this experimental philosophical methodology to Judith Jarvis Thomson's famous Loop Case, which she used to call into question the validity of the intuitively plausible Doctrine of Double Effect. We found that intuitions about the Loop Case vary according to the context in which the case is considered. We contend that this undermines the supposed evidential status of intuitions about the Loop Case. We conclude by considering the implications of our findings for philosophers who rely on the Loop Case to make philosophical arguments and for philosophers who use intuitions in general.
Article
Full-text available
The theory of formal discipline—that is, the view that instruction in abstract rule systems can affect reasoning about everyday-life events—has been rejected by 20th century psychologists on the basis of rather scant evidence. We examined the effects of graduate training in law, medicine, psychology, and chemistry on statistical reasoning, methodological reasoning about confounded variables, and reasoning about problems in the logic of the conditional. Both psychology and medical training produced large effects on statistical and methodological reasoning, and psychology, medical, and law training produced effects on ability to reason about problems in the logic of the conditional. Chemistry training had no effect on any type of reasoning studied. These results seem well understood in terms of the rule systems taught by the various fields and indicate that a version of the formal discipline hypothesis is correct.
Article
Full-text available
Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts, including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between- and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed, and attempts to combat it are explored.
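As a compact restatement of the rule described above (a minimal sketch in standard probability notation, not quoted from the original paper): for any events $A$ and $B$,

$$P(A \wedge B) \;\le\; \min\{P(A),\, P(B)\},$$

because every outcome in $A \wedge B$ also lies in $A$ and in $B$. For instance, if $P(A) = 0.4$ and $P(B) = 0.5$, then $P(A \wedge B)$ can be at most $0.4$ no matter how the two events are related, so any judgment that rates the conjunction as more probable than $A$ alone violates the rule.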
Article
Full-text available
Third person approaches to thought experiments and conceptual analysis through the method of surveys are motivated by and motivate skepticism about the traditional first person method. I argue that such surveys give no good ground for skepticism, that they have some utility, but that they do not represent a fundamentally new way of doing philosophy, that they are liable to considerable methodological difficulties, and that they cannot be substituted for the first person method, since the a priori knowledge which is our object in conceptual analysis can be acquired only from the first person standpoint.
Chapter
Stanford Encyclopedia Entry on the nature and moral significance of the difference between doing and allowing harm.
Article
Experimental philosophers have challenged friends of the expertise defense to show that (a) the intuitive judgments of professional philosophers are different from the intuitive judgments of nonphilosophers, and (b) the intuitive judgments of professional philosophers are better than the intuitive judgments of nonphilosophers, in ways that are relevant to the truth or falsity of such judgments. Friends of the expertise defense have responded by arguing that the burden of proof lies with experimental philosophers. This article sketches three arguments which show that both (a) and (b) are probably false. If its arguments are cogent, then shifting the burden of proof is a futile move, since philosophical training makes no difference so far as making intuitive judgments in response to hypothetical cases is concerned.
Article
Recent empirical work appears to suggest that the moral intuitions of professional philosophers are just as vulnerable to distorting psychological factors as are those of ordinary people. This paper assesses these recent tests of the ‘expertise defense’ of philosophical intuition. I argue that the use of familiar cases and principles constitutes a methodological problem. Since these items are familiar to philosophers, but not ordinary people, the two subject groups do not confront identical cognitive tasks. Reflection on this point shows that these findings do not threaten philosophical expertise—though we can draw lessons for more effective empirical tests.
Article
Experimental philosophers have empirically challenged the connection between intuition and philosophical expertise. This paper reviews these challenges alongside other research findings in cognitive science on expert performance and argues for three claims. First, evidence taken to challenge philosophical expertise may also be explained by the well-researched failures and limitations of genuine expertise. Second, studying the failures and limitations of experts across many fields provides a promising research program upon which to base a new model of philosophical expertise. Third, a model of philosophical expertise based on the limitations of genuine experts may suggest a series of constraints on the reliability of professional philosophical intuition. Even when the experts all agree, they may well be mistaken. — Bertrand Russell, On the Value of Scepticism
Article
We examined the effects of order of presentation on the moral judgments of professional philosophers and two comparison groups. All groups showed similar‐sized order effects on their judgments about hypothetical moral scenarios targeting the doctrine of the double effect, the action‐omission distinction, and the principle of moral luck. Philosophers' endorsements of related general moral principles were also substantially influenced by the order in which the hypothetical scenarios had previously been presented. Thus, philosophical expertise does not appear to enhance the stability of moral judgments against this presumably unwanted source of bias, even given familiar types of cases and principles.
Article
Skepticism about the epistemic value of intuition in theoretical and philosophical inquiry fueled by the empirical discovery of irrational bias (e.g., the order effect) in people's judgments has recently been challenged by research suggesting that people can introspectively track intuitional instability. The two studies reported here build upon this, the first by demonstrating that people are able to introspectively track instability that was experimentally induced by introducing conflicting expert opinion about certain cases, and the second by demonstrating that it was the presence of instability—not merely the presence of conflicting information—that resulted in changes in the relevant attitudinal states (i.e., confidence and belief strength). The paper closes with the suggestion that perhaps the best explanation for these (and other) findings may be that intuitional instability is not actually “intuitional.”
Article
In framing studies, logically equivalent choice situations are described differently and the resulting preferences are studied. A meta-analysis of framing effects is presented for risky choice problems framed either as gains or as losses. It evaluates the finding that highlighting the positive aspects of formally identical problems leads to risk aversion, whereas highlighting their equivalent negative aspects leads to risk seeking. Based on a data pool of 136 empirical papers that reported framing experiments with nearly 30,000 participants, we calculated 230 effect sizes. Results show that the overall framing effect between conditions is of small to moderate size and that profound differences exist between research designs. Potentially relevant characteristics were coded for each study. The most important characteristics were whether framing was manipulated by changing reference points or by manipulating outcome salience, and response mode (choice vs. rating/judgment). Further important characteristics were whether options differed qualitatively or quantitatively in risk, whether there was one risky event or multiple risky events, whether framing was manipulated by gain/loss wording or by task-responsive wording, whether dependent variables were measured between- or within-subjects, and problem domain. Sample (students vs. target populations) and unit of analysis (individual vs. group) were not influential. It is concluded that framing is a reliable phenomenon, but that outcome salience manipulations, which constitute a considerable amount of this work, have to be distinguished from reference point manipulations, and that procedural features of experimental settings have a considerable effect on effect sizes in framing experiments.
Article
In a recent paper, Weinberg (2007, 'How to challenge intuitions empirically without risking skepticism', Midwest Studies in Philosophy, 31: 318–343) claims that there is an essential mark of trustworthiness which typical sources of evidence such as perception or memory have, but philosophical intuitions lack, namely that we are able to detect and correct errors produced by these "hopeful" sources. In my paper I will argue that being a hopeful source isn't necessary for providing us with evidence. I will then show that, given some plausible background assumptions, intuitions at least come close to being hopeful, if they are reliable. If this is true, Weinberg's new challenge comes down to the claim that philosophical intuitions are not reliable since they are significantly unstable. In the second part of my paper I will argue that, and why, the experimentally established instability of folk intuitions about philosophical cases does not show that philosophers' expert intuitions about these cases are unstable.
Article
Recent experimental philosophy arguments have raised trouble for philosophers’ reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there's no reason to think philosophers will make the same mistakes. But this deploys a substantive empirical claim, that philosophers’ training indeed inculcates sufficient protection from such mistakes. We canvass the psychological literature on expertise, which indicates that people are not generally very good at reckoning who will develop expertise under what circumstances. We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.
Article
This chapter is on framing effects in moral judgment. Framing effects are, basically, variations in beliefs as a result of variations in wording and order. Sinnott-Armstrong emphasizes that, when such variations in wording and order cannot affect the truth of beliefs, framing effects signal unreliability. Hence, if framing effects are widespread enough in moral intuitions, moral believers have reason to suspect that those intuitions are unreliable. Sinnott-Armstrong cites empirical evidence that framing effects are surprisingly common in moral judgments, so he concludes that moral believers have reason to suspect that their moral intuitions are unreliable. This result creates a need for confirmation and thereby undermines traditional moral intuitionism as a response to the skeptical regress problem in moral epistemology.
Article
Recently psychologists and experimental philosophers have reported findings showing that in some cases ordinary people’s moral intuitions are affected by factors of dubious relevance to the truth of the content of the intuition. Some defend the use of intuition as evidence in ethics by arguing that philosophers are the experts in this area, and philosophers’ moral intuitions are both different from those of ordinary people and more reliable. We conducted two experiments indicating that philosophers and non-philosophers do indeed sometimes have different moral intuitions, but challenging the notion that philosophers have better or more reliable intuitions.
Article
Some proponents of “experimental philosophy” criticize philosophers' use of thought experiments on the basis of evidence that the verdicts vary with truth-independent factors. However, their data concern the verdicts of philosophically untrained subjects. According to the expertise defence, what matters are the verdicts of trained philosophers, who are more likely to pay careful attention to the details of the scenario and track their relevance. In a recent article, Jonathan M. Weinberg and others reply to the expertise defence that there is no evidence for such expertise. They now receive a reply in this article, which argues that they have misconstrued the dialectical situation. Since they have produced no evidence that philosophical training is less efficacious for thought experimentation than for other cognitive tasks for which they acknowledge that it produces genuine expertise, such as informal argumentation, they have produced no evidence for treating the former more sceptically than the latter.
Article
Many philosophers appeal to intuitions to support some philosophical views. However, there is reason to be concerned about this practice, as scientific evidence has documented systematic bias in philosophically relevant intuitions as a function of seemingly irrelevant features (e.g., personality). One popular defense used to insulate philosophers from these concerns holds that philosophical expertise eliminates the influence of these extraneous factors. Here, we test this assumption. We present data suggesting that verifiable philosophical expertise in the free will debate, as measured by a reliable and validated test of expert knowledge, does not eliminate the influence of one important extraneous feature (i.e., the heritable personality trait extraversion) on judgments concerning freedom and moral responsibility. These results suggest that, in at least some important cases, the expertise defense fails. Implications for the practice of philosophy, experimental philosophy, and applied ethics are discussed.
Article
Skepticism about the epistemic value of intuition in theoretical and philosophical inquiry has recently been bolstered by empirical research suggesting that people's concrete-case intuitions are vulnerable to irrational biases (e.g., the order effect). What is more, skeptics argue that we have no way to "calibrate" our intuitions against these biases and no way of anticipating intuitional instability. This paper challenges the skeptical position, introducing data from two studies that suggest not only that people's concrete-case intuitions are often stable, but also that people have introspective awareness of this stability, providing a promising means by which to assess the epistemic value of our intuitions.
Article
Many philosophers have worried about what philosophy is. Often they have looked for answers by considering what it is that philosophers do. Given the diversity of topics and methods found in philosophy, however, we propose a different approach. In this article we consider the philosophical temperament, asking an alternative question: what are philosophers like? Our answer is that one important aspect of the philosophical temperament is that philosophers are especially reflective: they are less likely than their peers to embrace what seems obvious without questioning it. This claim is supported by a study of more than 4,000 philosophers and non-philosophers, the results of which indicate that even when we control for overall education level, philosophers tend to be significantly more reflective than their peers. We then illustrate this tendency by considering what we know about the philosophizing of a few prominent philosophers. Recognizing this aspect of the philosophical temperament, it is natural to wonder how philosophers came to be this way: does philosophical training teach reflectivity or do more reflective people tend to gravitate to philosophy? We consider the limitations of our data with respect to this question and suggest that a longitudinal study be conducted.
Article
Two views have dominated theories of deductive reasoning. One is the view that people reason using syntactic, domain-independent rules of logic, and the other is the view that people use domain-specific knowledge. In contrast with both of these views, we present evidence that people often reason using a type of knowledge structure termed pragmatic reasoning schemas. In two experiments, syntactically equivalent forms of conditional rules produced different patterns of performance in Wason's selection task, depending on the type of pragmatic schema evoked. The differences could not be explained by either dominant view. We further tested the syntactic view by manipulating the type of logic training subjects received. If people typically do not use abstract rules analogous to those of standard logic, then training on abstract principles of standard logic alone would have little effect on selection performance, because the subjects would not know how to map such rules onto concrete instances. Training results obtained in both a laboratory and a classroom setting confirmed our hypothesis: Training was effective only when abstract principles were coupled with examples of selection problems, which served to elucidate the mapping between abstract principles and concrete instances. In contrast, a third experiment demonstrated that brief abstract training on a pragmatic reasoning schema had a substantial impact on subjects' reasoning about problems that were interpretable in terms of the schema. The dominance of pragmatic schemas over purely syntactic rules was discussed with respect to the relative utility of both types of rules for solving real-world problems.
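For readers unfamiliar with Wason's selection task mentioned above, the underlying logic can be stated briefly (a standard textbook illustration, not drawn from the abstract itself). A conditional rule of the form "if P then Q" is falsified only by a case in which P holds and Q does not:

$$P \rightarrow Q \;\equiv\; \neg\,(P \wedge \neg Q).$$

So, given four cards showing P, not-P, Q, and not-Q, the only informative selections are the P card (its hidden side might show not-Q) and the not-Q card (its hidden side might show P); choosing the Q card, as many untrained participants do, cannot falsify the rule.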
Article
The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways. Reversals of preference are demonstrated in choices regarding monetary outcomes, both hypothetical and real, and in questions pertaining to the loss of human lives. The effects of frames on preferences are compared to the effects of perspectives on perceptual appearance. The dependence of preferences on the formulation of decision problems is a significant concern for the theory of rational choice.
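To make the notion of logically equivalent frames concrete, here is a worked check using the standard numbers from the "Asian disease" problem associated with this line of research (the arithmetic is illustrative, not quoted from the abstract). With 600 lives at stake, the sure option and the gamble have the same expected outcome in both frames:

$$\text{Gain frame: } 200 \text{ saved for sure} \quad \text{vs.} \quad \tfrac{1}{3}\cdot 600 = 200 \text{ saved in expectation};$$
$$\text{Loss frame: } 400 \text{ die for sure} \quad \text{vs.} \quad \tfrac{2}{3}\cdot 600 = 400 \text{ die in expectation}.$$

Because "200 saved" and "400 dead" describe the same outcome, preferences should not depend on the wording; the reliable shift toward the sure option under the gain frame and toward the gamble under the loss frame is the framing effect at issue.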
Article
Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) Harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. This finding has significance for methods and theories of moral psychology: The moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.
Article
This paper introduces a three-item "Cognitive Reflection Test" (CRT) as a simple measure of one type of cognitive ability--the ability or disposition to reflect on a question and resist reporting the first response that comes to mind. The author will show that CRT scores are predictive of the types of choices that feature prominently in tests of decision-making theories, like expected utility theory and prospect theory. Indeed, the relation is sometimes so strong that the preferences themselves effectively function as expressions of cognitive ability--an empirical fact begging for a theoretical explanation. The author examines the relation between CRT scores and two important decision-making characteristics: time preference and risk preference. The CRT scores are then compared with other measures of cognitive ability or cognitive "style." The CRT scores exhibit considerable difference between men and women and the article explores how this relates to sex differences in time and risk preferences. The final section addresses the interpretation of correlations between cognitive abilities and decision-making characteristics.
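As an illustration of the kind of item the CRT relies on (the well-known bat-and-ball problem, offered here as an example and not taken from the abstract above): a bat and a ball together cost 1.10 (in dollars), and the bat costs 1.00 more than the ball. Letting $b$ be the price of the ball,

$$b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05.$$

The answer that first comes to mind, 0.10, fails the check (it would make the total 1.20), which is precisely the impulsive first response the test is designed to measure resistance to.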
• McIntyre, A. Doctrine of double effect. Stanford Encyclopedia of Philosophy.
• Tobia, K. Moral intuitions: Are philosophers experts?