Article

Representativeness Revisited: Attribute Substitution in Intuitive Judgment

Authors: Daniel Kahneman, Shane Frederick
... Recent studies have applied dual-system theories to understand the mechanisms underlying fairness preference in UG (Halali et al., 2014; Hochman et al., 2015; Sanfey & Chang, 2008; Wei et al., 2022). According to these ideas, a person's decision-making behavior is regulated by the interaction of two separate systems: System 1, which is quick, intuitive, automatic, and mostly effortless, and System 2, which is slower, more controlled, effortful, and deliberate (Kahneman & Frederick, 2002; Sloman, 1996; Stanovich & West, 2000). Specifically, System 1 engages in automatic processing, which involves rapid, unconscious responses triggered by stimuli. ...
... In Study 1, we examined the effect of cognitive load on the contribution of System 1. According to earlier studies, cognitive load might exhaust one's attentional resources and encourage the adoption of automatic System 1 strategies (Halali et al., 2014; Kahneman & Frederick, 2002; Liu et al., 2015). Therefore, we hypothesized that participants under cognitive load would show an increase in the contribution of System 1 (PA). ...
... Following this logic, we discovered that in UG decisions, the contributions of automatic System 1 and controlled System 2 can be specifically affected by experimental manipulations, supporting the independence assumption. These findings are in line with the assumptions of dual-system theories (Kahneman & Frederick, 2002) and social heuristics (Bear & Rand, 2016; Rand et al., 2014), providing behavioral evidence for dual-system theories in UG. ...
... Conversely, System 2, often termed "Slow Thinking," is deliberate, slower, and more rational, requires more conscious effort (Kahneman, 2011), and allows for self-regulation and thoughtful consideration before making decisions (Baumeister et al., 1998). The interplay between these systems influences everything from mundane to critical decisions, highlighting the complexity of human emotional and cognitive processing (Kahneman and Frederick, 2002; Kahneman, 2011). ...
... On the other hand, Causal Hypothesis 1 (C1) refers to the case where X → Y, namely the review X causing the sentiment Y. It is an instance of Slow Thinking (Baumeister et al., 1998; Kahneman and Frederick, 2002), which deliberately uses conscious effort to list out the up- and downsides of an experience in the review X and arrive at a thoughtful final decision as the rating Y. Table 7.1: Statistics of the entire datasets and their C1 and C2 subsets for Yelp, Amazon, and App Review. We can see a roughly balanced number of reviews aligning with the C1 and C2 processes. ...
Preprint
Full-text available
Causal reasoning is a cornerstone of human intelligence and a critical capability for artificial systems aiming to achieve advanced understanding and decision-making. This thesis delves into various dimensions of causal reasoning and understanding in large language models (LLMs). It encompasses a series of studies that explore the causal inference skills of LLMs, the mechanisms behind their performance, and the implications of causal and anticausal learning for natural language processing (NLP) tasks. Additionally, it investigates the application of causal reasoning in text-based computational social science, specifically focusing on political decision-making and the evaluation of scientific impact through citations. Through novel datasets, benchmark tasks, and methodological frameworks, this work identifies key challenges and opportunities to improve the causal capabilities of LLMs, providing a comprehensive foundation for future research in this evolving field.
... If decisions are made in a vacuum, it is natural to prioritize self-oriented information over other-oriented information. Since individuals' responses or impressions about given stimuli arise instantly and automatically (Kahneman and Frederick 2002), they do not need to devote extra attention or time to process them. However, when social cues intervene, people often consciously shift their focus from themselves to others (Gorlin and Dhar 2012). ...
... Notably, I reveal the reliance on reasons and feelings as a key driver of joint consumption choices. Since Kahneman and Frederick (2002) developed the construct of an individual's processing system into intuitive System 1 (i.e., "feelings" in the current research) and reflective System 2 (i.e., "reasons"), the literature has explored various antecedent factors and subsequent behaviors. For example, self-construal influences preference for affectively or cognitively superior options through the adoption of feeling- or reason-based decision strategies (Hong and Chang 2015). ...
... However, this process is reliant on all information being readily available and is too cognitively taxing to use in all decision-making (Kahneman & Frederick, 2002; Simon, 1990). In turn, we must work within bounded cognitive resources to make routine decisions more efficiently and effectively—this is referred to as bounded rationality (Simon, 1990). ...
... Heuristics are mental tools, or "shortcuts," used to simplify the act of decision-making (Shah & Oppenheimer, 2008). They are often utilized when information regarding a decision closely associates with an already established, readily available attribute (Kahneman & Frederick, 2002). To improve decision-making efficiency, one's cognitive process may (consciously or unconsciously) ignore pieces of potentially relevant information to conserve mental energy and increase decision-making speed in a variety of settings (Mardar & Pina-Sánchez, 2020). ...
... The intuitive approach means presenting an intuitive answer as a trigger for a formal answer, without justification (Epstein, 1994; Evans, 2014; Kahneman & Frederick, 2002). The concrete approach means investigating a problem from multiple points of view using concrete objects, while the representational approach means producing a variety of representations of the problem investigated in the concrete activity (Bruner, 1974; Bruner, 2009; Major & Mangope, 2012). ...
... The concrete approach means investigating a problem from multiple points of view using concrete objects, while the representational approach means producing a variety of representations of the problem investigated in the concrete activity (Bruner, 1974; Bruner, 2009; Major & Mangope, 2012). The abstract approach means discovering rules in the abstract (Epstein, 1994; Evans, 2014; Kahneman & Frederick, 2002; Robbins, 2011). These four approaches can support flexibility in creative thinking, because intuition (the subconscious mind) allows a person to organize a problem in new ways (Segal, 2004; Sio & Ormerod, 2009), concrete understanding builds flexible thinking from one concept to another in problem solving (Kang & Liu, 2018), representation is closely involved when a person explores ideas and generates a variety of conceptually different ideas (Delice & Kertil, 2015), and abstract thinking leads to a process of unifying ideas that yields other, different ideas (Barr et al., 2015). ...
Article
Full-text available
Flexibility in creative thinking is important to discuss because it has the potential to generate conceptually diverse approaches. When flexible thinking is mastered, it builds the confidence and courage to explore new ideas. In mathematics, flexible thinking allows students to represent mathematical problems in various ways, such as diagrams, graphs, tables, or equations, according to the students' understanding and the demands of the problem. This article discusses in depth the concept of flexibility in mathematical creative thinking, including assessment methods, ways of demonstrating flexibility on geometry problems, and its application in practical situations. The article is a literature study examining a wide range of journal articles and books to produce an in-depth description. The results show that flexibility relates to the capacity to change approach by viewing a problem from different perspectives; flexibility is measured through activities such as shifting focus, trying different strategies, using different representations, and connecting different branches of mathematics. Several criteria characterize problems that support flexible thinking: they are open, connected, visual, and challenging. One practical application that supports students' flexibility in mathematical creative thinking is the use of intuitive, concrete, representational, and abstract approaches.
... Studies 3 and 4 aimed to explore the heuristic nature of moral judgment and its boundary conditions. Consistent with the concept of attribute substitution (Kahneman & Frederick, 2002), the WMM posits that people assess the severity of harm based on available contextual cues, such as whether others were also harmed. However, attribute substitution should occur only under conditions of uncertainty; specifically, when there is ambiguity about why the harm occurred (otherwise, people would not rely on the cue of the victim's singularity). ...
... According to Kahneman and Frederick (2002), a judgment is mediated heuristically when people assess a specified attribute ("the target attribute") of an object by replacing it with a property which comes more readily to mind ("the heuristic attribute"). In the case of harm assessments, the immorality of an act, as well as the severity of the harm experienced by someone because of this act, are difficult to perceive in absolute terms. ...
Article
Full-text available
Our "Why Me?" Model (WMM), explaining why people often judge harm toward one victim more harshly than similar harm to several victims, is presented and supported in 5 studies.
... System 1 (also System-X, intuitive or heuristic system), which operates quickly and automatically with little to no effort and no sense of voluntary control, can have an outsized impact on decisions. It facilitates rapid sensemaking and DM in complex situations where immediate action is required (Epstein, 1994; Stanovich, 1999; Kahneman and Frederick, 2002; Satpute and Lieberman, 2006; Evans and Stanovich, 2013; Gonzalez-Loureiro and Vlačić, 2016; Kahneman, 2017). This capacity for fast, intuitive judgment allows coaches to react swiftly to changes and exploit DM challenges, critical situations, and problems that arise, which can be particularly advantageous in different competitions and training settings. ...
... On the other hand, System 2 (also System C or analytic system) is characterized by slower, more deliberate, and conscious thinking. It is the system we engage when we need to do complex computations, weigh options judiciously, or control ourselves (Epstein, 1994; Stanovich, 1999; Kahneman and Frederick, 2002; Satpute and Lieberman, 2006; Evans and Stanovich, 2013; Gonzalez-Loureiro and Vlačić, 2016; Kahneman, 2017). In the context of DM, System 2 is crucial in systematically evaluating the long-term implications of decisions, assessing risks, and ensuring that the choices made align with the overarching strategic goals of the team or individual athlete (Papadakis and Barwise, 1998, p. 127; Elbanna and Child, 2007, pp. ...
Article
Full-text available
Introduction A coach’s managerial and pedagogical tasks in the sports training process constitute the substantive core of their work, while decision-making serves as the fundamental method underpinning these tasks. Some decisions made by coaches result from deliberate, analytical thinking, which involves extensive information gathering, analysis, and discussion. Others, however, are made quickly and spontaneously, triggered by unforeseen situations during training or competition that demand immediate action. Consequently, the purpose of this study is to develop a conceptual framework for understanding coaches’ decision-making behavior in conventional sports. This framework aims to establish appropriate relationships between the various decisions coaches make during the training process and theoretical concepts related to decision-making, both in general and within the coaching context. Methods To design the research, we used the methodology of a conceptual paper and a “model paper” approach, which seeks to build a theoretical framework that predicts relationships between distinct research concepts and scientific disciplines, aiming to integrate them into a cohesive model of coaches’ decision-making behavior. Results The proposed conceptual framework encompasses a comprehensive range of situations that may arise during the sports training process and potential ways to address them. This framework identifies different types of decisions and characteristics associated with coaches’ decision-making behavior. It incorporates various sport-specific and general theories of decision-making and cognitive functioning to offer a deeper understanding of how coaches process and execute decisions in diverse contexts. Discussion The developed conceptual framework outlines three primary types of decisions—strategic, tactical, and operational—each playing a distinct role in the broader sports training process. 
These decisions are based on different cognitive processes, which manifest in varied decision-making behaviors and are reinforced by specific leadership styles. The practical value of this framework lies in its potential application for selecting appropriate experts to address the diverse decision-making scenarios encountered in sports training. This ensures the alignment of decision-making styles with the requirements of specific training situations, thereby enhancing the effectiveness and outcomes of the coaching process.
... The interplay between Type 1 and Type 2 processes in reasoning can readily be appreciated when considering the typical responding that is observed when people tackle items that make up the cognitive reflection test (CRT; Frederick 2005; Kahneman and Frederick 2002). A good example of a CRT item is the well-known bat-and-ball problem: A bat and a ball cost $1.10 in total. ...
... When reasoning, it is often of critical importance for a person to be able to engage in metacognitive control to resist adhering to an initial, default intuitive response, and instead to apply more effortful and reflective analytic thinking that can potentially lead to the generation of a normatively correct response (e.g., Stanovich 2013a, 2013b). In their study, Morsanyi and Hamilton aimed to investigate whether closely matched autistic and neurotypical individuals share the same normatively incorrect intuitions that typically dominate responses to the cognitive reflection test (CRT; Frederick 2005; Kahneman and Frederick 2002), or whether individuals with autism show increased analytic processing, as suggested by some researchers (e.g., De Martino et al. 2008). The CRT represents a good testbed for investigating the balance of intuitive versus analytic processes that arise in autistic and neurotypical individuals, given that the problems that make up the test tend to elicit an intuitive but incorrect response, whereas the correct response requires effortful reflection. ...
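The bat-and-ball item invites the intuitive System 1 answer of $0.10, while the correct System 2 answer follows from simple algebra. A minimal illustrative sketch (not from any of the cited sources):

```python
def solve_bat_and_ball(total=1.10, difference=1.00):
    """Solve: bat + ball = total, with bat = ball + difference."""
    # Substituting: 2 * ball + difference = total
    # => ball = (total - difference) / 2
    ball = round((total - difference) / 2, 2)
    bat = round(ball + difference, 2)
    return ball, bat

# The intuitive answer "10 cents" would make the total $1.20, not $1.10.
print(solve_bat_and_ball())  # (0.05, 1.05): the ball costs 5 cents
```

The intuitive response substitutes the easily computed split ($1.00 + $0.10) for the constraint the question actually states, which is exactly the attribute-substitution pattern the target article describes.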
Article
Full-text available
This Special Issue aims to capture current theoretical and methodological developments in the field of metareasoning, which is concerned with the metacognitive processes that monitor and control our ongoing thinking and reasoning [...]
... For example, in one study, participants overestimated their degree of control over outcomes when the outcomes were desirable but underestimated their control when the outcomes were undesirable (Alloy & Abramson, 1979). Alternatively, in rating control, participants might have been susceptible to attribute substitution (Kahneman & Frederick, 2002), where people answer a question that is difficult to answer by providing the answer to a related question that can be answered more readily. In our experiments, the agent didn't have a typical form of physical control, whereas her role in raising her odds of success was tangible, and therefore accessible. ...
Article
Intentionality judgments can depend on probability raising—people are more likely to see a desired outcome as intentional if the agent who produced it did something to increase its odds. However, intentionality also depends on related factors such as the agent’s skill, ability, and control over the outcome. In three experiments (total N = 1074), we investigated how probability raising relates to these factors, and whether it makes distinct contributions to judgments of intentionality. Participants saw vignettes where an agent got a winning ball from a lottery machine. In all experiments, participants gave higher ratings of both intentionality and control in conditions where the agent increased her odds of success than in conditions where she did not. This pattern suggests that probability raising and control are closely linked. The findings of our third experiment, though, also suggest that probability raising may uniquely contribute to attributions of intentionality. In this experiment, the agent received a winning ball after taking an action that unpredictably either increased or decreased her odds of success. Participants gave higher intentionality ratings when this action happened to increase the odds. But participants also showed this pattern when rating control, even though the agent’s control did not vary across conditions. These results suggest that probability raising contributes to intentionality even when control does not, and moreover suggest that people may use probability raising to inform attributions of control. However, we also discuss the possibility that control and probability raising are not distinct, and amount to the same thing.
... System 2, on the other hand, deliberatively and analytically processes numerical evidence. System 2 should have the ability to monitor and correct System 1 judgments, but in reality, deliberation justifies System 1 judgments, and corrections are seldom made (Kahan, 2013; Kahneman & Frederick, 2002; Slovic, 2007). Therefore, with respect to the perceived effectiveness of EEW, it is expected that statistical information processed by System 2 will be less effective than the case information processed by System 1. ...
... Consequently, companies may opt to limit the disclosure of risk information [75,83,84]. Attribute substitution theory offers an explanation [85]: it suggests that individuals rely on more readily available heuristic attributes. ...
Article
Full-text available
Digitalization is anticipated to substantially improve information transparency and enhance the capital market’s information environment. However, during the initial phase of digital transformation, companies face challenges in achieving high-quality information disclosure. This is because digital technology implementation and organizational adjustment are still at early stages. Concurrently, investors face difficulties in assessing the accuracy of disclosed information. This challenge provides management with the opportunity to overstate the benefits of digital transformation. This study investigates the impact of corporate digital transformation on management’s tone manipulation behavior, using a sample of Chinese A-share listed companies from 2012 to 2021. Findings indicate that corporate digital transformation significantly fosters management’s tone manipulation during its exploration phase. Media slant positively moderates this relationship. Further analysis supports the paper’s hypothesis: companies with weaker financial flexibility and lower risk information disclosure levels show a stronger positive correlation between digital transformation and tone manipulation. Concurrently, mechanism analysis reveals that management overconfidence partially mediates the relationship. This suggests that digital transformation increases managerial overconfidence, thereby promoting tone manipulation. The conclusion offers new insights into enhancing management discussion and analysis information disclosure quality from a corporate strategic transformation perspective. It serves as a valuable reference for accurately identifying misleading management signals.
... From a CritT standpoint, 'good thinking' is conducted void of emotion (as far as that is possible), given that emotion is an artefact of bias and experience. Indeed, a large body of research indicates a negative impact of emotion on decision-making and higher-order cognition (e.g., see Anticevic et al. 2011; Chuah et al. 2010; Denkova et al. 2010; Dolcos and McCarthy 2006; Kahneman and Frederick 2002; Slovic et al. 2002; Strack et al. 1988). However, it remains that no individual's thinking can ever be completely void of emotion or associated bias, given the subjective, individualistic nature of a person's thinking. ...
Article
Full-text available
Though both critical thinking and creative thinking are often cited and promoted as important cognitive processes in personal, academic and professional settings, common misconceptualisation of both often leads to confusion for non-experts; for example, with respect to lumping them together erroneously as much the same thing, completely confusing them for one another or, on the other hand, distinguishing them to such an extent that all genuine overlap is ignored. Given the importance of these processes in real-world scenarios, quality education and ‘metaeducation’ is necessary to ensure their appropriate translation to students and educators, alike. Thus, the aim of the current review is to discuss various perspectives on both critical and creative thinking with particular focus paid to addressing such common misconceptualisation. Detailed discussion is provided with respect to important ways in which they are distinct and overlap. Recommendations for what contexts application of each are appropriate are also provided. Furthermore, implications for such enhanced understanding are discussed, in light of both theory and research.
... (2) representativeness, and (3) anchoring and framing effects, all of which helped explain judgement under uncertainty. Kahneman and Tversky's blow to logical rationality fuelled the broader research around many more heuristics, which went on to become foundational work for behavioural economics (for example, Kahneman & Frederick, 2002; Thaler & Shefrin, 1981; Tversky & Kahneman, 1981). Prior to this work, economics did not use much psychology in theory building and testing. ...
Preprint
Full-text available
Behavioural public policy (BPP) applies behavioural insights to aid policy-making and implementation. Emerging from the burgeoning interest in nudges in the late 2000s, BPP has matured over the last decade. It has produced novel concepts and rich empirical data, helping researchers and policy-makers understand and explain human behaviour with a multidisciplinary lens. Even though BPP is an established field of endeavour, it is at a crucial juncture when it is important to assess how it is likely to grow as a field. To help understand the dynamics of knowledge innovation and utilisation, this article discusses the past, present, and future of behavioural public policy. We draw on the substantive achievements of nudge and BPP, the engagement of policy-makers in nudge units and commissioned BPP policies, the growth of empiricism via experiments, and an emergent ethical dimension illustrated by new tools, such as boost and nudge+. We highlight the increasingly prominent role of agency and reasonableness in future initiatives, and we outline benefits that newer methods, such as computational social sciences, can have on the field in making sense of heterogeneity. BPP shows a sustained attention over time, with growing richness in its academic and policy agenda, and is not a cyclical or temporary phenomenon.
... In the past, some researchers have categorized the theoretical processes of the time discount model and the attribute comparison model into the analytical system and the heuristic system, respectively, based on the dual-system theory of decision-making (Kahneman & Frederick, 2002; Sun et al., 2007). According to this theory, human decisions are driven by two systems: System 1, which is fast, intuitive, and automatic, and System 2, which is slow, deliberate, and analytical. ...
Article
Full-text available
Intertemporal choice, a pervasive phenomenon in social life, involves evaluating and balancing gains and losses occurring at different times. Research into the mechanisms underlying intertemporal choice has yielded varied findings. Time discounting models propose an alternative-wise strategy, where decisions are driven by the discounting of future rewards. In contrast, attribute comparison models advocate an attribute-wise strategy, emphasizing the comparison of specific attributes across different options. Therefore, we adopted a comparative paradigm combining evidence from both process and outcome using the MouselabWEB and eye-tracking techniques in three experiments. In Experiment 1 (N = 37), the MouselabWEB process data indicate that intertemporal choice processes are more similar to those in the time discount task. These results were supported in Experiment 2 (N = 37), in which eye-tracking data were analyzed. The drift-diffusion model analysis suggested that the diffusion velocity under the free answer strategy condition was significantly lower than in the attribute comparison strategy condition. In Experiment 3 (N = 35), participants tended to use an attribute-wise strategy and chose more delayed options when the choices were presented with a date rather than a number of days. In conclusion, evidence from both processes and outcomes supported the time discount model explanation of intertemporal choice. Moreover, the findings indicated that people could be nudged into using an attribute-wise strategy with a slight change in the time representation.
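The contrast between the two model families in this abstract can be sketched in a few lines, assuming a standard hyperbolic discount function V = A / (1 + kD); the discount rate k and the attribute weights below are illustrative, not parameters estimated in the study:

```python
def discounted_value(amount, delay, k=0.05):
    # Hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1 + k * delay)

def alternative_wise(ss, ll, k=0.05):
    # Time-discounting strategy: value each (amount, delay) option
    # independently, then pick the option with the larger value.
    return "LL" if discounted_value(*ll, k) > discounted_value(*ss, k) else "SS"

def attribute_wise(ss, ll, w_amount=1.0, w_delay=0.5):
    # Attribute-comparison strategy: weigh the amount difference
    # against the delay difference across the two options.
    return "LL" if w_amount * (ll[0] - ss[0]) > w_delay * (ll[1] - ss[1]) else "SS"

ss, ll = (50, 0), (80, 30)  # $50 now vs. $80 in 30 days
print(alternative_wise(ss, ll), attribute_wise(ss, ll))
```

For this example pair the two strategies disagree (the discounting strategy picks the smaller-sooner option, the attribute strategy the larger-later one), which is why process data such as MouselabWEB look-up patterns and eye movements can discriminate between them.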
... Another potentially relevant framing is that of the dual-process theory of cognition, which distinguishes two main systems that are claimed to direct judgment and decision making (Dolan & Dayan, 2013; Evans, 2008; Kahneman & Frederick, 2002). "Type 1" processing is construed as largely automatic and intuitive, while "Type 2" processing is said to consist of analytical, effortful consideration. ...
Article
Full-text available
The ability to discover patterns or rules from our experiences is critical to science, engineering, and art. In this article, we examine how much people’s discovery of patterns can be incentivized by financial rewards. In particular, we investigate a classic category learning task for which the effect of financial incentives is unknown (Shepard et al., 1961). Across five experiments, we find no effect of incentive on rule discovery performance. However, in a sixth experiment requiring category recognition but not learning, we find a large effect of incentives on response time and a small effect on task performance. Participants appear to apply more effort in valuable contexts, but the effort is disproportionate with the performance improvement. Taken together, the results suggest that performance in tasks that require novel inductive insights is relatively immune to financial incentives, while tasks that require rote perseverance of a fixed strategy are more malleable.
... A related issue in training, and part of our quasi-dispute, is whether people make choices between risky prospects through intuitive judgments or through analytic, reflective thinking. Dual-process notions of cognition distinguish a fast, intuitive system ('unconscious inference', Id, or System 1) and a slower, reflective system ('conscious thinking', Super-Ego, or System 2) (Kahneman & Frederick, 2002; Kahneman and Frederick, 2005; Sloman, 1996; Stanovich and West, 2000). Our dispute may relate to how intuitive computations of value, generated by biological mechanisms of the kind described by Helmholtz (1866/1962), Freud (Ellenberger, 1956), and Shepard (2004), what some now call 'System 1', can persist in generating perception-based computations, despite efforts to engage language-based 'System 2' processes. ...
Article
Full-text available
First-order stochastic dominance is a core principle in rational decision-making. If lottery A has a higher or equal chance of winning an amount x or more compared to lottery B for all x, and a strictly higher chance for at least one x, then A should be preferred over B. Previous research suggests that violations of this principle may result from failures in recognizing coalescing equivalence. In Expected Utility Theory (EUT) and Cumulative Prospect Theory (CPT), gambles are represented as probability distributions, where probabilities of equivalent events can be combined, ensuring stochastic dominance. In contrast, the Transfer of Attention Exchange (TAX) model represents gambles as trees with branches for each probability and outcome, making it possible for coalescing and stochastic dominance violations to occur. We conducted two experiments designed to train participants in identifying dominance by splitting coalesced gambles. By toggling between displays of coalesced and split forms of the same choice problem, participants were instructed to recognize stochastic dominance. Despite this training, violations of stochastic dominance were only minimally reduced, as if people find it difficult—or even resist—shifting from a trees-with-branches representation (as in the TAX model) to a cognitive recognition of the equivalence among different representations of the same choice problem.
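The coalescing and dominance relations described in the abstract can be made concrete with a small sketch; the gamble values below are illustrative Birnbaum-style examples, not stimuli from the study:

```python
from collections import defaultdict

def coalesce(gamble):
    # Merge branches with identical outcomes, as probability-distribution
    # representations (EUT, CPT) do; TAX-style trees keep branches separate.
    merged = defaultdict(float)
    for prob, outcome in gamble:
        merged[outcome] += prob
    return sorted((round(p, 10), x) for x, p in merged.items())

def dominates(a, b):
    # A first-order stochastically dominates B if P(A >= x) >= P(B >= x)
    # for every outcome x, with strict inequality for at least one x.
    xs = {x for _, x in a} | {x for _, x in b}
    tail = lambda g, x: sum(p for p, y in g if y >= x)
    diffs = [tail(a, x) - tail(b, x) for x in xs]
    return all(d >= -1e-9 for d in diffs) and any(d > 1e-9 for d in diffs)

G_plus = [(0.90, 96), (0.05, 14), (0.05, 12)]   # dominant gamble
G_minus = [(0.85, 96), (0.05, 90), (0.10, 12)]  # dominated gamble
split = [(0.85, 96), (0.05, 96), (0.10, 12)]    # split form of (.90, $96; .10, $12)
print(dominates(G_plus, G_minus), coalesce(split))
```

Splitting a branch never changes the tail probabilities, so `dominates` gives the same verdict for split and coalesced forms of a gamble; the experimental finding is that people's choices do not track this equivalence.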
... To learn the effects of different nudges, we first categorize them. One classic way of categorization is the dichotomous System 1 versus System 2 (Kahneman & Frederick, 2002). System 1 nudges are nearly effortless, heuristic-based, and intuitive, while System 2 nudges are cognitively burdensome, rule-based, and analytical. ...
Article
Restaurants can nudge their customers toward more sustainable actions. However, implementing nudges may affect consumer patronage. Using a discrete choice experiment, we propose a means‐ends framework to shed light on the conditional (on patronage) impact of sustainability nudges on consumer preference for restaurants. The findings reveal that while preserving the subtleness of a specific sustainability nudge, disclosing the means or the mechanism of the nudges leads to a negative impact on consumer restaurant patronage, while revealing the ends or the goodwill behind the sustainability nudges may suggest positive effects on patronage which may in turn lead to an unconditional gain in sustainability. Furthermore, some nudges may only improve patronage if both the means and the ends of nudges are revealed. By examining two types of restaurants, we show that the type of restaurant also significantly influences the effects of sustainability nudges on patronage. This research quantifies the nuanced dynamics of how revealing the means and the ends of nudges may affect the restaurant business and provides insights for designing effective sustainability strategies in the restaurant industry.
... Dual-process theory states that there are two types of mental processes: System 1 and System 2. The activation of System 1 is characterized by automatic processes based on subjective empirical experiences and often occurs unconsciously (Darmawan, 2019; Kahneman, 2003; Kahneman & Frederick, 2012). Automatic processes occur when humans solve problems spontaneously based on internalized information, for example, recalling memorized multiplication facts or drawing a shape spontaneously without checking its size. ...
Article
Full-text available
This research aims to describe students' dual-process thinking in solving number pattern questions and the scaffolding that supports it. This research uses a qualitative approach with a case study design. The subjects were two 10th grade high school students. The subjects' written answers, recorded interviews, and researcher notes were used as the basis for analyzing the subjects' dual-process thinking and scaffolding. There are three findings. First, subjects tend to activate mental processes that are automatic, subjective-empirical, and often unconscious when solving number pattern questions. Second, errors occur due to the subjects' tendency toward these automatic, subjective-empirical, and often unconscious processes. Third, scaffolding in the form of questions and instructions from the researcher supports the activation of mental processes that are empirically accurate and involve deeper consideration, more thorough analysis, and greater consciousness in decision-making, thereby helping the subjects correct their mistakes when solving number pattern questions. The findings indicate that a tendency to rely on automatic, subjective-empirical, and often unconscious mental processes can be disadvantageous. Therefore, the learning process needs strategies that support students in optimizing both types of mental processes. Teacher training focused on such strategies should be held regularly.
... Dual-process theory, proposed by Kahneman and Tversky [13], holds that the brain operates through two systems: one fast, intuitive, and automatic, and another slow, deliberate, and analytical. In the context of neurodidactics, this theory highlights the importance of balancing activities that engage both systems, such as tasks that stimulate creativity and intuition alongside those that require deep analysis and reflection. ...
Article
Full-text available
This paper presents an evaluation of neurodidactic strategies in the social sciences. To this end, two groups were arranged: a control group and an experimental group. Traditional strategies were applied to the control group, while neurodidactic strategies were applied to the experimental group. Both groups maintained a permanent relationship with the teacher. In addition, a pre-test and a post-test were carried out to assess the effects of the designed strategy and to compare it with classic methods. The main results showed a significant difference between the two groups in all dimensions; however, this difference was not large, demonstrating the need to incorporate both methods in the teaching and learning process to achieve better results.
... Although fundamental, the human decision-making process is indeed difficult to model, due (among other factors) to the complexity of brain processes [2,3], the influence of cognitive biases [4,5], emotions [6,7], social and environmental factors [8,9], unconscious mechanisms [10,11], and individual differences [12,13]. The study of human decision-making [14,15] is still an open and vast multidisciplinary field that draws on insights from various scientific disciplines, including psychology [16,17], neuroscience [18,19], economics [20,21] and public health [22,23]. ...
Preprint
Full-text available
As humans perceive and actively engage with the world, we adjust our decisions in response to shifting group dynamics and are influenced by social interactions. This study aims to identify which aspects of interaction affect cooperation-defection choices. Specifically, we investigate human cooperation within the Prisoner's Dilemma game, using the Drift-Diffusion Model to describe the decision-making process. We introduce a novel Bayesian model for the evolution of the model's parameters based on the nature of interactions experienced with other players. This approach enables us to predict the evolution of the population's expected cooperation rate. We successfully validate our model using an unseen test dataset and apply it to explore three strategic scenarios: co-player manipulation, use of rewards and punishments, and time pressure. These results support the potential of our model as a foundational tool for developing and testing strategies aimed at enhancing cooperation, ultimately contributing to societal welfare.
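The Drift-Diffusion Model named in this abstract accumulates noisy evidence toward one of two decision boundaries. A minimal sketch under illustrative assumptions — the drift rate, boundary, and noise values below are invented for demonstration, not the paper's fitted parameters:

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence x starts at 0 and takes small noisy steps; the trial ends when
    x crosses +boundary ('cooperate') or -boundary ('defect'), or times out.
    All parameter values here are illustrative, not taken from the paper.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ('cooperate' if x >= boundary else 'defect'), t

# With a strongly positive drift, most trials end at the upper boundary.
rng = random.Random(42)
choices = [simulate_ddm(2.0, rng=rng)[0] for _ in range(200)]
rate = choices.count('cooperate') / len(choices)
```

Fitting the drift and boundary per participant (as the paper's Bayesian scheme does) would then let the parameters track how interactions shift cooperation over time.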
... Is Greene merely talking about differences on the behavioral level, i.e., two different types of how people typically make moral judgments, or is Greene talking about two fundamentally different processes on the subpersonal level (e.g., the brain)? This is an important distinction because one might endorse a dual-process theory (of moral judgment) on the behavioral level without being committed to a modular view as to how these different behaviors may be related to differences in neurophysiology (see, e.g., Craigie, 2011;Kahneman & Frederick, 2002). However, depending on the kind of empirical data that is interpreted as evidence in favor of any dual-process theory, dual-process theories and a modular account of cognition and the brain often become a package deal. ...
Article
Full-text available
Joshua Greene has famously argued for two distinct processes of how humans make moral judgments. Despite a lively controversy around potential normative implications of this view, less attention has been paid to those philosophical assumptions that are fundamental to Greene’s dual-process theory itself. In this paper, I argue that Greene’s dual-process theory hinges on a modular account of cognition and the brain, and I critically discuss the plausibility of Greene’s view in light of increasing popularity of dynamical systems accounts in cognitive science. If we reject modularity and adopt a dynamical systems perspective instead, we can still hope to find relative differences in the functional specialization of dynamic brain networks within one interconnected system, but Greene’s original theory in terms of two asymmetrically independent processes will no longer be tenable. This imposes constraints on the kind of explanations that we can expect from an empirically informed ethics in that only non-exclusive dual-process theories would be compatible with a dynamical systems account. Ultimately, however, the controversy around the modularity of mind should not be misconceived as a purely empirical question, but rather as a matter of conflicting epistemic standards as to what qualifies as a good explanation in cognitive science.
... Previous research has suggested that individuals' preference judgments are stable (Chen & Risen, 2010), allowing them to quickly and relatively automatically identify their preferred option amongst the given choices (e.g., Alós-Ferrer et al., 2012). For instance, Glöckner and Betsch (2008) found that in a product preference task, participants took an average of 1.53 s to choose what they prefer, whereas when participants were instructed to make a decision using a deliberate strategy (i.e., using slow, integrated, and cognitively demanding strategies, Kahneman & Frederick, 2002), the average decision time increased to 20.5 s. Considering that deliberate choices took approximately four times longer in previous research (Glöckner & Betsch, 2008) than the time allotted to participants in this study (five seconds), we reasoned that the EMST may tap into individuals' automatic decision processes rather than deliberate processes. ...
Article
Full-text available
Some evidence indicates that a person’s attitude towards emotion (ATE; measured as liking/disliking of a discrete emotion) predicts the person’s situation selection (e.g., choice of emotional images), which is an essential part of emotion regulation. Would the association between ATE and stimulus selection be generalized to other types of ATE and stimuli? This study aimed to examine whether implicit and explicit fear of emotion (i.e., fear of experiencing an emotion due to concerns about losing control over the emotion) would predict the avoidance of emotion-inducing music. Seventy-three female university students, among whom approximately one third showed mild or above depressive symptoms, reported implicit and explicit fear of positive emotion and depression. Participants also completed the music choice task in which they were presented with a pair of emotional music excerpts and chose one they preferred to listen to. Results showed that those with greater implicit fear of positive emotion avoided the choice of high-arousal happiness-inducing music excerpts, particularly when the music was paired with high-arousal sadness and neutral-inducing excerpts. These results emphasise the need to consider both implicit and explicit processes, as well as arousal and valence, in understanding the role of fear of emotion in situation selection.
Chapter
Critical thinking (CT) is a metacognitive process that, through self-regulatory reflective judgment, facilitates the increased likelihood of developing strong, logical, and credible conclusions or solutions to arguments and problems. Despite some differences in conceptualisation, a vast majority of research agrees that CT consists of both skills and dispositions, with growing recognition of a reflective judgment element. However, most CT education focuses on the skill development aspect of the conceptualisation and may, perhaps, lack focus on other aspects associated with ‘self-regulation’. This chapter explores self-regulation in the context of CT training, across a broad range of perspectives, as well as providing discussion of barriers to CT and how self-regulation may help overcome them.
Article
Relatively little work has examined how metalinguistic awareness about sociolinguistic features can impact processes of sociolinguistic memory, which are crucial to the formation of cognitive sociolinguistic representations. This article explores how metalinguistic commentary can bias listeners’ memory of a linguistic feature, uptalk, that is ideologically linked with women in popular meta‐discourses. A novel contour‐recognition paradigm tests how listeners mis‐remember uptalk depending on the perceived gender of the speaker. We provided participants with top‐down metalinguistic information about which gendered speakers were most likely to use uptalk to induce metalinguistic bias toward associations between rising contours and speakers of particular genders. Results show a speaker's perceived gender, as well as metalinguistic information provided to a listener, can bias recognition of prosodic contours, but only when this information reinforces listeners’ pre‐existing beliefs. Overall, we suggest that linguistic ideologies can shape how listeners interpret and even remember both metalinguistic statements and sociolinguistic variants.
Chapter
Hate crimes are bias motivated acts committed against a person because of their race, ethnicity, gender, gender identity, sexual orientation, religion, and disability. There is no specific crime type that constitutes a hate crime and can be committed against either a person or their property. Similar to how the type of crime can vary, there is ambiguity in who is a hate crime offender and victim. A hate crime offender can be a single individual (e.g., Buffalo grocery store shooter) or a group of people (e.g., Proud Boys). A hate crime victim can also be a single individual (e.g., Ahmaud Arbery) or a group of people (e.g., Pulse nightclub shooting). This ambiguity in what constitutes a hate crime could cause confusion for jurors and subsequently affect whether they render a legally sound verdict. This chapter will begin by summarizing current hate crime legislation. Next, we will describe how psychological theories and concepts (i.e., biases, schemas/stereotypes, lay theories, social cognitive theory, priming, group categorization) can form inaccurate perceptions of hate crimes and affect jurors’ decisions. Finally, the chapter will include suggestions for future research and propose methods to reduce hate crime misconceptions in the courtroom.
Chapter
This volume provides the most comprehensive and up-to-date compendium of theory and research in the field of human intelligence. Each of the 42 chapters is written by world-renowned experts in their respective fields, and collectively, they cover the full range of topics of contemporary interest in the study of intelligence. The handbook is divided into nine parts: Part I covers intelligence and its measurement; Part II deals with the development of intelligence; Part III discusses intelligence and group differences; Part IV concerns the biology of intelligence; Part V is about intelligence and information processing; Part VI discusses different kinds of intelligence; Part VII covers intelligence and society; Part VIII concerns intelligence in relation to allied constructs; and Part IX is the concluding chapter, which reflects on where the field is currently and where it still needs to go.
Article
Full-text available
Contingent valuation surveys have used a variety of formats for the valuation question, including open-ended (OE) and single-bounded closed-ended (CE) elicitation. The OE format assesses maximum WTP directly, whereas the CE format assesses it indirectly. When the two formats have been compared, responses have often been different: the CE format often generated a larger estimate of WTP than the OE format. That finding has spawned a large literature in economics and marketing. We propose an explanation for the difference in OE and CE responses rooted in the psychology of questionnaire design, which focuses on a phenomenon pointed out long ago by Benjamin Franklin for a private good that could also apply to a public good. The hypothesis is that respondents are averse to paying more for something than they think it should cost them. To test this hypothesis, an experiment embedded in a representative sample survey measured respondents’ perceptions of the cost of providing a public good, as well as their valuations using an OE or CE valuation format. Findings are consistent with Ben Franklin-like aversion to overpaying in the OE format, which leads OE responses to understate true willingness to pay. The CE format yielded more valid measurements of economic value. This paper introduces a new non-parametric approach to the estimation of WTP as a function of covariates using CE data.
Article
Gendered terms are omnipresent in language and operate as semantic primes. In this article, we examine the influence of gender primes on gender representations. Social-cognitive theories suggest that even irrelevant primes assimilate judgments. We thus hypothesize that gender primes bias representations even when the factual gender composition is simultaneously provided. In two experiments, participants received information about fictitious groups, including their gender compositions (e.g., “30% are women”). We orthogonally manipulated which gender was mentioned and which was more prevalent. Independent of prevalence, judging follow-up statements about the primed (vs. unprimed) gender (Experiment 1) and identifying the primed (vs. unprimed) gender as the majority (Experiment 2) were facilitated. However, congruent gender primes did not facilitate identifying the minority (Experiment 2), suggesting that they do not merely render the gender category accessible but also heuristically convey its prevalence. We discuss implications for social-cognitive theory and practical consequences of using gendered language.
Chapter
The Cambridge Handbook of Moral Psychology is an essential guide to the study of moral cognition and behavior. Originating as a philosophical exploration of values and virtues, moral psychology has evolved into a robust empirical science intersecting psychology, philosophy, anthropology, sociology, and neuroscience. Contributors to this interdisciplinary handbook explore a diverse set of topics, including moral judgment and decision making, altruism and empathy, and blame and punishment. Tailored for graduate students and researchers across psychology, philosophy, anthropology, neuroscience, political science, and economics, it offers a comprehensive survey of the latest research in moral psychology, illuminating both foundational concepts and cutting-edge developments.
Article
This study investigates position bias in tourist decision-making, focusing on how response time influences preferences for sequentially presented information online. Using a best–worst scaling experiment and discrete choice modeling, findings reveal two distinct classes of decision-makers based on cognitive engagement. Individuals with shorter response latency exhibit a pronounced preference for top-listed attributes, indicating a primacy effect; those with longer response latency show a more balanced evaluation of options. Moreover, the time spent on deliberation correlates with preferences, suggesting that response time is a significant factor in position bias. Findings provide valuable insights into the cognitive mechanisms underlying position bias in the online decision-making process. Study results also shed light on the contextual and individual factors associated with position bias, which provide managerial implications to enhance marketing strategies by tailoring them to decision-making styles.
Article
Full-text available
The default–interventionist model of dual-process theories proposes that stereotype descriptions in base-rate problems are processed using Type 1 processing, while the evaluation of base rates depends on Type 2 processing. The logical intuition view posits that people can process base-rate information using Type 1 processing. This study examined the logical intuition view using the instructional manipulation paradigm. Participants judged the probability that a character in a base-rate problem belonged to a particular group based on either their beliefs or statistics and then rated their confidence in their responses. Results showed that a belief–statistics conflict affected both statistics- and belief-based judgments, resulting in lower probability estimates, longer response times, and lower confidence ratings for conflict items compared to no-conflict items, suggesting participants intuitively processed base rates such that they influenced rapid belief judgments. This intuitive logic effect was observed for extreme base rates, moderate base rates, and moderate base rates with small absolute values. These findings are inconsistent with the default–interventionist model but align with dual-process theories emphasizing logical intuition. The study provides additional evidence for human rationality.
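The normative benchmark that base-rate problems probe is Bayes' rule: combining the stated base rate with the diagnosticity of the stereotype description. A minimal sketch with invented numbers — the 0.30/0.90/0.30 values are illustrative, not items from the study:

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """Bayes' rule for a two-group base-rate problem.

    base_rate        = P(target group), e.g. P(engineer)
    hit_rate         = P(description | target group)
    false_alarm_rate = P(description | other group)
    Returns P(target group | description).
    """
    joint = base_rate * hit_rate
    return joint / (joint + (1 - base_rate) * false_alarm_rate)

# A description that "sounds like" the target group (0.9 vs 0.3) still
# yields only a modest posterior when that group is rare (base rate 0.3):
p = posterior(0.30, 0.90, 0.30)  # = 0.27 / (0.27 + 0.21) = 0.5625
```

Base-rate neglect amounts to answering with something close to the 0.9 hit rate alone; the conflict items in the study pit this stereotype-driven intuition against the base-rate term in the denominator.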
Article
Fifty years ago, Tversky and Kahneman (Cognitive Psychology, 5[2], 207–232, 1973) reported that people’s speeded estimations of 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 were notably higher than their estimations for the equivalent expression in the opposite order, 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 (Median = 2,250 vs. 512, respectively). On top of this order effect, both groups grossly underestimated the correct value (40,320). The differential effect of the two orders on estimation has become famous as an early demonstration of the anchoring effect, where people’s judgments under uncertainty are unduly influenced by an initial reference point (or “anchor”). Despite this fame, to the best of our knowledge, this effect has never been replicated. In a sample of 253 U.S. adults, the current study provides the first replication of this foundational example of anchoring. It extends this effect for the first time to a within-participants design, revealing its relative robustness even among participants who see the descending order first. Drawing on procedures from the mathematical cognition literature, it shows how the anchoring effect can be mitigated: calibrating to the correct value of 6! reduces this effect, and calibrating to 10! eliminates it altogether. An individual differences analysis measures the arithmetic fluency of participants and their accuracy on a new estimation assessment, and finds that higher estimation ability may be a “protective factor” against some anchoring effects. These findings affirm the anchoring effect of Tversky and Kahneman (1973, Study 6) while suggesting that calibration may be an effective strategy for helping to improve people’s estimation of superlinear functions that are important in real-life contexts.
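The arithmetic behind the task is easy to verify: the first few partial products of the descending sequence are far larger than those of the ascending one — the presumed anchor for extrapolation — and both median estimates fall well short of 8! = 40,320. A quick check (the medians are copied from the abstract):

```python
from math import factorial, prod

desc = [8, 7, 6, 5, 4, 3, 2, 1]
asc = list(reversed(desc))

# Partial products after three steps: the descending order yields a much
# larger early value, which plausibly anchors the speeded estimate.
desc_partial = prod(desc[:3])   # 8 * 7 * 6 = 336
asc_partial = prod(asc[:3])     # 1 * 2 * 3 = 6

correct = factorial(8)          # 40320, far above both median estimates
medians = {'descending': 2250, 'ascending': 512}  # from the abstract
```

Both medians underestimate the true product by more than an order of magnitude, which is why the study frames calibration (e.g., to 6! or 10!) as a way to repair estimation of superlinear functions.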
Article
This study investigates the effects of proenvironment photographs and graphs in environmental disclosures on potential investors' perceptions and willingness to invest, considering the moderating role of a company's environmental performance. An online experiment was conducted with experienced investors, where participants evaluated a hypothetical company with varying environmental performance outcomes (negative or positive) presented in different formats (text only, text with photographs, or text with graphs). The results indicate that proenvironment images enhance perceived environmental responsibility for companies with negative environmental performance, whereas processing fluency is improved when the image type aligns with environmental performance. Moreover, perceived environmental responsibility mediates the impact of images on the willingness to invest in negative performance contexts. In contrast, processing fluency mediates this effect for graphs in positive performance contexts. These findings offer guidance for company managers in effectively presenting environmental data and highlight the potential risks for policy‐makers regarding the misuse of images.
Article
Savings and investments are vital not only for families but also for the advancement of a country. In many economies, investment decisions are not based on borrowing; rather, the character of investment has changed in the consumerist labor society, where investing has become the norm and nearly every person tries to invest, in one form or another, out of savings or even current income. This paper aims to analyse the neural behaviour involved in trade-offs between consumption, investment, and savings decisions, and whether individuals weigh these trade-offs like a rational agent. A correlation study is used to determine the relationship between two or more continuous variables within a single group of people. The low correlation coefficient between per capita net national product and savings-investment shows that the conversion of growth into investment has been quite low and confined to the richer segment of society. Another factor that distorts the relationship between savings and investments is the functioning of the black economy, which acts as a lure for people to diversify their holdings. Keywords: savings, consumer, borrowings, regression, investment, correlation.
Preprint
Full-text available
In recent years there has been growing evidence that, even after teaching designed to address the learning difficulties described in the literature, many physics learners fail to create the proper reasoning chains that connect fundamental principles and lead to reasoned predictions. Even though students have the required knowledge and skills, they often rely on a variety of intuitive reasoning that leads them to wrong conclusions. This research studies students' reasoning on science problems through heuristic-analytical thought processes (System 1 - System 2). System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control, while System 2 handles demanding mental activities that require effort and operates slowly, based on rules. Specifically, we seek to study the cognitive processes and information available to students when they face science problems and, therefore, to explore the various heuristic processes that students use when solving physics problems. Our results indicated four intuitive heuristics in students' minds when they solve problems in Mechanics, especially in the projectile motion unit: associative activation, processing fluency, attribute substitution, and anchoring effect. These heuristics prevent students from applying knowledge and methods that they already possess to solve a physics problem.
Article
Full-text available
The reason why people endorse belief in a just world (BJW), in which each person gets what he deserves and deserves what he gets, is mainly attributed to motivational processes. BJW is also an inter-individual difference, and cognitive preference for less reflective and more intuitive thinking may be a complementary explanation of why people endorse this explicit version of BJW. In a preregistered and highly powered study (N = 46,815), we investigated and showed for the first time a relationship between individuals’ scores on a cognitive performance task and BJW, when controlling for relevant demographics. Our result showed that individuals with a less reflective orientation were more likely to endorse explicit just world beliefs. This is consistent with previous studies based on cognitive reflection test showing that individuals with a lower preference for logical rationality endorsed simplifying worldviews. Our results illustrate that individual differences in cognitive style may produce preferences for specific justice beliefs.
Chapter
While many in the domain of AI claim that their works are "biologically inspired", most strongly avoid the forms of dynamic complexity that are inherent in all of evolutionary history's more capable surviving organisms. This work seeks to illustrate examples of what introducing human-like forms of complexity into software systems looks like, why it is important, and why humans so frequently seek to avoid such complexity. The complex dynamics of these factors are discussed and illustrated in the context of Chaos Theory, the Three-Body Problem, category concepts, the tension between interacting forces and entities, and cognitive biases influencing how complexity is handled and reduced. For the full-text pre-print see: http://dx.doi.org/10.13140/RG.2.2.11390.56641
Article
Full-text available
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following Herbert Simon's notion of satisficing, this chapter proposes a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast-and-frugal algorithms violate fundamental tenets of classical rationality: they neither look up nor integrate all available information. By computer simulation, a competition was held between the satisficing "take-the-best" algorithm and various "rational" inference procedures (e.g., multiple regression). The take-the-best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
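The take-the-best heuristic can be sketched in a few lines: order cues by validity, stop at the first cue that discriminates between the two options, and decide on that single reason. The cue names, validities, and city profiles below are invented for illustration (the original simulations used real-world city-size data):

```python
def take_the_best(option_a, option_b, cues):
    """One-reason decision making in the style of take-the-best.

    cues maps cue name -> validity; each option maps cue name -> 1
    (positive cue value) or 0 (negative). Cues are checked in descending
    validity, and the first discriminating cue decides. Illustrative only.
    """
    for cue, _validity in sorted(cues.items(), key=lambda kv: -kv[1]):
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                 # first discriminating cue: stop and decide
            return 'a' if a > b else 'b'
    return 'tie'                   # no cue discriminates: guess

# Which city is larger? Hypothetical cues with made-up validities.
cues = {'capital': 0.9, 'soccer_team': 0.8, 'airport': 0.6}
city_a = {'capital': 0, 'soccer_team': 1, 'airport': 1}
city_b = {'capital': 0, 'soccer_team': 0, 'airport': 1}
winner = take_the_best(city_a, city_b, cues)  # 'a', via the soccer_team cue
```

Note that the algorithm never integrates the remaining cues — once `soccer_team` discriminates, `airport` is ignored, which is exactly the violation of classical rationality the abstract describes.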
Article
Full-text available
Presents a theory of norms and normality and applies the theory to phenomena of emotional responses, social judgment, and conversations about causes. Norms are assumed to be constructed ad hoc by recruiting specific representations. Category norms are derived by recruiting exemplars. Specific objects or events generate their own norms by retrieval of similar experiences stored in memory or by construction of counterfactual alternatives. The normality of a stimulus is evaluated by comparing it with the norms that it evokes after the fact, rather than to precomputed expectations. Norm theory is applied in analyses of the enhanced emotional response to events that have abnormal causes, of the generation of predictions and inferences from observations of behavior, and of the role of norms in causal questions and answers.
Article
Full-text available
Close counterfactuals are alternatives to reality that "almost happened." A psychological analysis of close counterfactuals offers insights into the underlying representation of causal episodes and the inherent uncertainty attributed to many causal systems. The perception and representation of causal episodes is organized around possible focal outcomes, evoking a schema of causal forces competing over time. A distinction between 2 kinds of assessments of outcome probability is introduced: dispositions, based on causal information available prior to the episode, and propensities, based on event cues obtained from the episode itself. The distinction is critical to the use of almost, which requires the attribution of a strong propensity to the counterfactual outcome. The final discussion focuses on characteristic differences between psychological and philosophical approaches to the analysis of counterfactuals, causation, and probability.
Article
Full-text available
Two experiments examined the effects of answering a question about a specific component of life satisfaction on respondents' assessment of their overall satisfaction with life. The results suggest that the use of primed information in forming subsequent judgments is determined by Grice's conversational norms. In general, answering the specific question increases the accessibility of information relevant to that question. However, the effect that this has on the general judgment depends on the way in which the two questions are presented. When the two questions are merely placed in sequence without a conversational context, the answer to the subsequent general question is based in part on the primed specific information. As a result, the answer to the general question becomes similar to that for the specific question (i.e. assimilation). However, this does not occur when the two questions are placed in a communication context. Conversational rules dictate that communicators should be informative and should avoid redundancy in their answers. Therefore, when a specific and a general question are perceived as belonging to the same conversational context, the information on which the answer to the specific question was based is disregarded when answering the general one. This attenuates the assimilation effect. The conditions under which these different processes occur are identified and experimentally manipulated, and the implications of these findings for models of information use in judgment are discussed.
Article
Full-text available
The question of when people rely on stereotypic preconceptions in judging others was investigated in two studies. As a person's motivation or ability to process information systematically is diminished, the person may rely to an increasing extent on stereotypes, when available, as a way of simplifying the task of generating a response. It was hypothesized that circadian variations in arousal levels would be related to social perceivers' propensity to stereotype others by virtue of their effects on motivation and processing capacity. In support of this hypothesis, subjects exhibited stereotypic biases in their judgments to a much greater extent when the judgments were rendered at a nonoptimal time of day (i.e., in the morning for “night people” and in the evening for “morning people”). In Study One, this pattern was found in probability judgments concerning personal characteristics; in Study Two, the pattern was obtained in perceptions of guilt in allegations of student misbehavior. Results generalized over a range of different types of social stereotypes and suggest that biological processes should be considered in attempts to conceptualize the determinants of stereotyping.
Article
Full-text available
Magical thinking has perplexed anthropological theorists for nearly a century. At least three perspectives are extant: (1) Magic is a form of science, a relatively effective set of canons and procedures for acquiring knowledge and exercising control (see, e. g., Levi-Strauss 1966); (2) magic is a form of fantasy, an irrational symbolic attempt to influence uncontrollable events (see, e.g., Malinowski 1954); (3) magic is a form of rhetoric, a persuasive communication designed to arouse sentiments rather than make truth claims about what goes with what in experience (see, e.g., Tambiah 1973). This study presents an alternative, cognitive-processing, account of magical thinking. Magical thinking is an expansion of a universal disinclination of normal adults to draw correlational lessons from their experiences. Correlation and contingency are relatively complex concepts that are not spontaneously available to human thought and are not to be found in the reasoning of most normal adults in all cultures. In the ...
Article
Full-text available
People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics.
Article
Full-text available
A description of a theory of problem-solving in terms of information processes amenable for use in a digital computer. The postulates are: "A control system consisting of a number of memories, which contain symbolized information and are interconnected by various ordering relations; a number of primitive information processes, which operate on the information in the memories; a perfectly definite set of rules for combining these processes into whole programs of processing." Examples are given of how processes that occur in behavior can be realized out of elementary information processes. The heuristic value of this theory is pertinent to theories of learning, perception, and concept formation.
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, apparent distance is determined in part by an object's clarity, and reliance on this rule leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or process. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development. The third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
Chapter
INTRODUCTION The concept of utility has carried two different meanings in its long history. As Bentham (1789) used it, utility refers to the experiences of pleasure and pain, the “sovereign masters” that “point out what we ought to do, as well as determine what we shall do.” In modern decision research, however, the utility of outcomes refers to their weight in decisions: utility is inferred from observed choices and is in turn used to explain choices. To distinguish the two notions I refer to Bentham’s concept as experienced utility and to the modern usage as decision utility. Experienced utility is the focus of this chapter. Contrary to the behaviorist position that led to the abandonment of Bentham’s notion (Loewenstein 1992), the claim made here is that experienced utility can be usefully measured. The chapter has three main goals: (1) to present a detailed analysis of the concept of experienced utility and of the relation between the pleasure and pain of moments and the utility of more extended episodes; (2) to argue that experienced utility is best measured by moment-based methods that assess current experience; (3) to develop a moment-based conception of an aspect of well-being that I will call “objective happiness.” The chapter also introduces several unfamiliar concepts that will be used in later chapters. Pleasure and pain are attributes of a moment of experience, but the outcomes that people value extend over time. It is therefore necessary to establish a concept of experienced utility that applies to temporally extended outcomes. Two approaches to this task will be compared here.
Article
Psychophysics is a lively account by one of experimental psychology's seminal figures of his lifelong scientific quest for general laws governing human behavior. It is a landmark work that captures the fundamental themes of Stevens's experimental research and his vision of what psychophysics and psychology are and can be. The context of this modern classic is detailed by Lawrence Marks's pungent and highly revealing introduction. The search for a general psychophysical law, a mathematical equation relating sensation to stimulus, pervades this work, first published in 1975. Stevens covers methods of measuring human psychophysical behavior; magnitude estimation, magnitude production, and cross-modality matching are used to examine sensory mechanisms, perceptual processes, and social consensus. The wisdom in this volume lies in its exposition of an approach that can apply generally to the study of human behavior.
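The power function Stevens proposed, psi = k * phi**n, is linear in log-log coordinates, so the sensory exponent can be recovered as a regression slope. A minimal sketch, using hypothetical magnitude-estimation data generated with an assumed exponent of 0.33 (the values are illustrative, not figures from the book):

```python
import math

# Hypothetical magnitude-estimation data: stimulus intensity (phi) and
# judged sensation (psi), generated from psi = k * phi**n with an
# assumed k = 1.0 and n = 0.33 (illustrative only).
k, n = 1.0, 0.33
phi = [10, 20, 40, 80, 160]
psi = [k * p**n for p in phi]

# Stevens' power law is linear in log-log coordinates:
# log(psi) = log(k) + n * log(phi), so the slope recovers the exponent.
xs = [math.log(p) for p in phi]
ys = [math.log(s) for s in psi]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # recovers the assumed exponent, 0.33
```

Plotted on log-log axes, magnitude estimates fall on a straight line whose slope is the modality's exponent; Stevens reported exponents well below 1 for some continua (e.g., brightness) and above 1 for others (e.g., electric shock).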
Article
Nonuse value is well-defined and represents nothing more than the value individuals place on a particularly pure form of public good. Moreover, when cast in this light, nonuse value fits into the existing structure of contemporary welfare theory, although a rigorous incorporation has yet to appear in the published literature. Incorporating existence value in cost-benefit studies is a large and burdensome task, but policy makers implicitly incorporate these values into decisions on the basis of no quantitative evidence. Analysis should be undertaken to provide evidence on the size and distribution of these values. Nonuse values are consistent with neoclassical welfare theory and this theory provides a solid foundation upon which to help evaluate the desirability of many public policies.
Article
Dual process theory postulates two representational processes: Activation of a representation makes the mental event more easily and automatically accessible. Elaboration of a mental event produces access to the representation through search and retrieval processes. We explored word priming in recognition and in a stem completion task where the primed word was one of several possible completions for a 3-letter stem. The main hypothesis was that priming has similar and parallel effects in the two tasks. The initial presentation (priming) of items was under conditions of either semantic or non-semantic processing. Priming was either direct, by the presentation of the target word, or indirect, by the presentation of phonologically related (rhyming) or semantically related (categorical) items. When priming occurred, RTs increased from direct to phonologically primed and to semantically primed items for both completion and recognition tests. One additional experiment confirmed the absence of semantic processing in the non-semantic condition, and another experiment showed that when response requirements for recognition and completion responses are equated, RTs to the two tests are comparable.
Article
Both temporary and long-term sources of construct accessibility have been found to play an important role in person perception and memory. Yet the two effects heretofore have been studied in isolation from each other. We examined the joint influence of long- and short-term sources of accessibility on impression formation. Subjects with or without a long-term, chronically accessible construct for either kindness or shyness were exposed subliminally to either 0 or 80 trait-related words in a first task. Next, subjects read a behavioral description that was ambiguously relevant to the primed trait dimension, and they then rated the target on several trait scales. For both the kind and the shy trait conditions, both chronic accessibility and subliminal priming reliably and independently increased the extremity of the impression ratings. The results supported a model in which long- and short-term sources of accessibility combine additively to increase the likelihood of the construct's use. Moreover, the subliminal priming effect appeared to be a quite general and pervasive phenomenon, insofar as it occurred for both an evaluatively positive and an evaluatively neutral trait dimension and for subjects without as well as with a chronically accessible construct for the primes. Implications of these findings for the nature of construct accessibility and the generality of automatic influences on social perception are discussed.
Article
Foreword Preface 1. Valuing Public Goods Using the Contingent Valuation Method 2. Theoretical Basis of the Contingent Valuation Method 3. Benefits and Their Measurement 4. Variations in Contingent Valuation Scenario Designs 5. The Methodological Challenge 6. Will Respondents Answer Honestly? 7. Strategic Behavior and Contingent Valuation Studies 8. Can Respondents Answer Meaningfully? 9. Hypothetical Values and Contingent Valuation Studies 10. Enhancing Reliability 11. Measurement Bias
Article
This paper re-examines the commonly observed inverse relationship between perceived risk and perceived benefit. We propose that this relationship occurs because people rely on affect when judging the risk and benefit of specific hazards. Evidence supporting this proposal is obtained in two experimental studies. Study 1 investigated the inverse relationship between risk and benefit judgments under a time-pressure condition designed to limit the use of analytic thought and enhance the reliance on affect. As expected, the inverse relationship was strengthened when time pressure was introduced. Study 2 tested and confirmed the hypothesis that providing information designed to alter the favorability of one's overall affective evaluation of an item (say nuclear power) would systematically change the risk and benefit judgments for that item. Both studies suggest that people seem prone to using an 'affect heuristic' which improves judgmental efficiency by deriving both risk and benefit evaluations from a common source: affective reactions to the stimulus item. Copyright © 2000 John Wiley & Sons, Ltd.
Article
Subjects were exposed to two aversive experiences: in the short trial, they immersed one hand in water at 14 °C for 60 s; in the long trial, they immersed the other hand at 14 °C for 60 s, then kept the hand in the water 30 s longer as the temperature of the water was gradually raised to 15 °C, still painful but distinctly less so for most subjects. Subjects were later given a choice of which trial to repeat. A significant majority chose to repeat the long trial, apparently preferring more pain over less. The results add to other evidence suggesting that duration plays a small role in retrospective evaluations of aversive experiences; such evaluations are often dominated by the discomfort at the worst and at the final moments of episodes.
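This pattern is commonly summarized as the peak-end rule: retrospective evaluations track the average of the worst and the final moments, largely neglecting duration. A minimal sketch, with hypothetical per-second pain ratings standing in for the two trials (the rating values are assumptions, not data from the study):

```python
def peak_end(pain):
    """Peak-end summary: mean of the worst moment and the final moment."""
    return (max(pain) + pain[-1]) / 2

def total_discomfort(pain):
    """Duration-sensitive summary: sum of momentary pain over the episode."""
    return sum(pain)

# Hypothetical momentary ratings (0-10 scale) for the two trials:
short_trial = [8] * 60             # 60 s at 14 deg C
long_trial = [8] * 60 + [5] * 30   # same 60 s, plus 30 s of milder pain

# The long trial contains strictly more total discomfort...
assert total_discomfort(long_trial) > total_discomfort(short_trial)
# ...yet its peak-end summary is lower, predicting the observed
# preference for repeating the long trial.
print(peak_end(short_trial), peak_end(long_trial))  # 8.0 6.5
```

A duration-weighted evaluator would never prefer the long trial; the peak-end summary does, because the added milder segment lowers the "end" term without raising the "peak."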
Article
Most so-called “errors” in probabilistic reasoning are in fact not violations of probability theory. Examples of such “errors” include overconfidence bias, conjunction fallacy, and base-rate neglect. Researchers have relied on a very narrow normative view, and have ignored conceptual distinctions—e.g. single case versus relative frequency—fundamental to probability theory. By recognizing and using these distinctions, however, we can make apparently stable “errors” disappear, reappear, or even invert. I suggest what a reformed understanding of judgments under uncertainty might look like.
Article
This paper explores a heuristic, representativeness, according to which the subjective probability of an event, or a sample, is determined by the degree to which it: (i) is similar in essential characteristics to its parent population; and (ii) reflects the salient features of the process by which it is generated. This heuristic is explicated in a series of empirical examples demonstrating predictable and systematic errors in the evaluation of uncertain events. In particular, since sample size does not represent any property of the population, it is expected to have little or no effect on judgment of likelihood. This prediction is confirmed in studies showing that subjective sampling distributions and posterior probability judgments are determined by the most salient characteristic of the sample (e.g., proportion, mean) without regard to the size of the sample. The present heuristic approach is contrasted with the normative (Bayesian) approach to the analysis of the judgment of uncertainty.
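The normative contrast is that the dispersion of a sampling distribution does depend on sample size (the standard error of a proportion shrinks as 1/sqrt(n)), so an extreme sample proportion is far more probable in a small sample than in a large one. A small simulation illustrating this, assuming a fair-coin population (the function and numbers are illustrative, not taken from the paper):

```python
import random

random.seed(0)

def prob_at_least(n, k, trials=20000):
    """Monte Carlo estimate of P(at least k heads in n fair-coin flips)."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads >= k:
            hits += 1
    return hits / trials

# The sample proportion (60% heads) is identical in both cases, yet the
# probability of observing it shrinks sharply as sample size grows.
small = prob_at_least(10, 6)    # 6+ heads out of 10  (~0.38 analytically)
large = prob_at_least(100, 60)  # 60+ heads out of 100 (~0.03 analytically)
print(small > large)  # True
```

Subjects who judge by representativeness treat the two samples as equally diagnostic because both "look like" a fair coin's output, which is exactly the sample-size neglect the paper documents.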
Article
Previous research has uncovered many conditions that encourage base-rate use. The present research investigates how base-rates are used when conditions are manipulated to encourage their use in the lawyer/engineer paradigm. To examine the functional form of the response to base-rate, a factorial design was employed in which both base-rate and the individuating information were varied within-subject. We compared the performance of several models of base-rate use, including a model that allows base-rate and individuating information to be combined in a strictly additive fashion, and a model which presumes that respondents use Bayes' Rule in forming their judgments. Results from 1493 respondents showed that the additive model is a stronger predictor of base-rate use than any other model considered, suggesting that the base-rate and individuating information are processed independently in the lawyer/engineer paradigm. A possible mechanism for this finding is discussed. Copyright © 1999 John Wiley & Sons, Ltd.
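The two candidate models can be made concrete. A sketch of the lawyer/engineer setup with hypothetical numbers: a 0.30 engineer base rate, and individuating evidence carrying a 4:1 likelihood ratio for the Bayesian model, also coded as a 0.80 description strength for the additive model (none of these values, nor the 0.5 weight, are from the paper):

```python
def bayes(prior, likelihood_ratio):
    """Bayes' Rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def additive(prior, description_strength, w=0.5):
    """Strictly additive model: a weighted sum of the two cues."""
    return w * prior + (1 - w) * description_strength

# Hypothetical inputs: 30% of the sample are engineers; the description
# alone suggests "engineer" with strength 0.80 (likelihood ratio 4:1).
p_bayes = bayes(0.30, 4.0)
p_add = additive(0.30, 0.80)
print(round(p_bayes, 3), round(p_add, 3))  # 0.632 0.55
```

The models differ in functional form: under Bayes' Rule the base rate enters multiplicatively through the prior odds, so its effect depends on the description, whereas the additive model predicts that equal changes in base rate shift judgments by a constant amount regardless of the description. That is the signature the within-subject factorial design can detect.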
Article
The foundations of adult reasoning about probabilities are found in children's reasoning about frequencies. Adult probabilistic reasoning is impaired by heuristics based on typicality or representativeness, but development of these heuristics in childhood has not been studied. Previously, typicality has only been shown to enhance children's reasoning. In Experiment 1, however, children in Grades 3, 5, and 7 overwhelmingly judged subclasses to be larger than inclusive classes when the subclasses were typical of a given scenario. In Experiment 2 children explained these incorrect responses by reference to the frequency of the subclass in the context or by reference to the method by which the subclass is generated, in accordance with the representativeness heuristic. The same children explained correct responses by describing the inclusion relationship. Even when asked to select the most frequent class from three alternatives including typical and atypical subclasses and an inclusive class, children frequently selected the incorrect typical subclass. Experiments 3 and 4 investigated whether children could be taught to reason logically and to suppress the use of the representativeness heuristic. Both immediately, and 10 days after a brief training, children responded correctly far more often. Furthermore, children's explanations of their answers when they were untrained emphasized the typicality of the representative subclass, but after training they emphasized logical relationships. The results are attributed to a representativeness heuristic that is acquired early, and overcome through experience and training in logical thinking.
Article
An investigation was made of the role played by verbal structure in the problems used to study the base-rate fallacy, which has traditionally been attributed to the role of heuristics (e.g. causality, specificity). It was hypothesized that elements of the verbal form of text problems led to a misunderstanding of the question or the specific information, rendering obscure the independence of the sets of data (specific information is obtained independently from the base rate). Nine texts were presented to various groups of subjects: four were taken from Tversky and Kahneman (1980) and used as controls; five were obtained by modifying the verbal form of the original in order to reveal or conceal the links between the sets of data. The percentage of base-rate fallacies was greatly reduced with texts in which the independence of the data was clear, regardless of the causality and specificity of the information they contained (which was not changed). This result suggests that there is a need to consider the rules of natural language in order to move towards a better understanding of observed phenomena.
Article
This investigation examines the extent to which intelligent young adults seek (i) confirming evidence alone (enumerative induction) or (ii) confirming and disconfirming evidence (eliminative induction), in order to draw conclusions in a simple conceptual task. The experiment is designed so that use of confirming evidence alone will almost certainly lead to erroneous conclusions because (i) the correct concept is entailed by many more obvious ones, and (ii) the universe of possible instances (numbers) is infinite. Six out of 29 subjects reached the correct conclusion without previous incorrect ones, 13 reached one incorrect conclusion, nine reached two or more incorrect conclusions, and one reached no conclusion. The results showed that those subjects who reached two or more incorrect conclusions were unable, or unwilling, to test their hypotheses. The implications are discussed in relation to scientific thinking.
Article
Individuals with varying levels of chronic accessibility for the construct "conceited" read about a target person and gave their spontaneous impressions of the target's behaviors. The construct "conceited" was either contextually primed or not, and the priming-to-stimulus delay was either short or long. The stimulus behaviors also varied in applicability to the construct "conceited," with three different types of non-"unambiguous" stimuli being examined. The stimulus behaviors were either only weakly related to "conceited" (vague), strongly and equally related to both "conceited" and "self-confident" (ambiguous), or more strongly related to "self-confident" than to "conceited" (contrary). We found that the extremely vague target behaviors yielded conceited-related spontaneous impressions when the accessibility of the construct conceited was maximized: contextual priming [without awareness], short priming-to-stimulus delay, and relatively high levels of chronic accessibility. This result supports the "activation rule" that strong accessibility can compensate for weak applicability. Two other activation rules were suggested by the results for the ambiguous and the contrary stimuli, respectively: (a) higher accessibility can yield stronger judgments even when perceivers are aware of contextual priming events if the additional contribution to activation from applicability and chronic accessibility is sufficiently great, and (b) the relation between higher accessibility and stronger judgments is constrained when the applicability of a competing alternative construct is both strong and stronger than the target construct's applicability.