A Theory of Medical Decision Making and Health: Fuzzy Trace Theory

Departments of Human Development and Psychology, Cornell University, Ithaca, New York 14850, USA.
Medical Decision Making (Impact Factor: 3.24). 11/2008; 28(6):850-65. DOI: 10.1177/0272989X08327066
Source: PubMed


The tenets of fuzzy trace theory are summarized with respect to their relevance to health and medical decision making. Illustrations are given for HIV prevention, cardiovascular disease, surgical risk, genetic risk, and cancer prevention and control. A core idea of fuzzy trace theory is that people rely on the gist of information, its bottom-line meaning, as opposed to verbatim details in judgment and decision making. This idea explains why precise information (e.g., about risk) is not necessarily effective in encouraging prevention behaviors or in supporting medical decision making. People can get the facts right, and still not derive the proper meaning, which is key to informed decision making. Getting the gist is not sufficient, however. Retrieval (e.g., of health-related values) and processing interference brought on by thinking about nested or overlapping classes (e.g., in ratio concepts, such as probability) are also important. Theory-based interventions that work (and why they work) are presented, ranging from specific techniques aimed at enhancing representation, retrieval, and processing to a comprehensive intervention that integrates these components.
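
The processing-interference point is concrete in the classic ratio-bias (denominator-neglect) demonstration cited in the excerpts below: many people prefer a gamble offering 9 winning outcomes in 100 over one offering 1 in 10, even though 9/100 = 0.09 is less than 1/10 = 0.10, because the larger numerator dominates the comparison. A minimal Python sketch of the two comparison strategies (the function names are illustrative, not from the article):

```python
from fractions import Fraction

def normative_choice(gambles):
    """Pick the gamble with the higher winning probability (numerator/denominator)."""
    return max(gambles, key=lambda g: Fraction(g[0], g[1]))

def numerator_only_choice(gambles):
    """Denominator neglect: compare raw counts of winning outcomes, ignoring class size."""
    return max(gambles, key=lambda g: g[0])

# The classic pair (Denes-Raj & Epstein, 1994): 9 winners in 100 vs. 1 winner in 10.
gambles = [(9, 100), (1, 10)]

print(normative_choice(gambles))       # (1, 10)  -> 1/10 = 0.10 beats 9/100 = 0.09
print(numerator_only_choice(gambles))  # (9, 100) -> "9 chances to win" feels larger
```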

  • "However, this account suggests that changing the absolute number of narratives reporting the occurrence of a focal event while keeping their relative proportion constant will affect judgments or decisions. A similar notion can be found in research on the ratio bias or denominator neglect, i.e., the phenomenon that individuals tend to prefer a gamble with a 9/100 likelihood of winning over a gamble with a 1/10 likelihood, because they tend to ignore the denominator (Denes-Raj & Epstein, 1994; Reyna & Brainerd, 2008). Thus, the second goal of this paper is to clarify whether the narrative bias relies on the relative or absolute number of narratives reporting the critical event."
    ABSTRACT: When people judge risk or the probability of a risky prospect, single case narratives can bias judgments when a statistical base-rate is also provided. In this work we investigate various methodological and procedural factors that may influence this narrative bias. We found that narratives had the strongest effect on a non-numerical risk measure, which was also the best predictor of behavioral intentions. In contrast, two scales for subjective probability reflected primarily statistical variations. We observed a negativity bias on the risk measure, such that the narratives increased rather than decreased risk perceptions, whereas the effect on probability judgments was symmetric. Additionally, we found no evidence that the narrative bias is solely produced by adherence to conversational norms. Finally, changing the absolute number of narratives reporting the focal event, while keeping their relative frequency constant, had no effect. Thus, individuals extract a representation of likelihood from a sample of single-case narratives, which drives the bias. These results show that the narrative bias is in part dependent on the measure used to assess it and underline the conceptual distinction between subjective probability and perceived risk.
    Judgment and Decision Making 05/2015; 10(3):241-264. (Impact Factor: 2.62)
  • "… details). That is, an individual with higher gist reasoning skills may encode details more efficiently when compared to an individual with lower gist reasoning ability (Reyna, 2008). Empirically, distinctions between higher-order and lower-level language skills have proven to be clinically informative when elucidating impairments in TBI (Gamino et al. …"
    ABSTRACT: Often, standard aphasia batteries do not fully characterize higher-order cognitive-linguistic sequelae associated with a traumatic brain injury (TBI). Limited understanding and detection of complex linguistic deficits have thwarted efforts to comprehensively remediate higher-order language deficits that persist even in chronic stages of recovery post-TBI. This chapter reviews key precursor metrics that have motivated efforts to elucidate higher-order language proficiencies after a TBI. The chapter further expounds on a paradigmatic shift away from sole focus on lower-level basic skills, towards a more top-down cognitive control approach to measure, retrain, and strengthen complex language abilities in TBI. The intricate relations between complex language abilities and cognitive control functions are also discussed. The concluding section offers promising directions for future research and clinical management based on new discoveries of higher-order language impairments and their modifiability in TBI populations.
    Handbook of Clinical Neurology 02/2015; 128:497-510. DOI: 10.1016/B978-0-444-63521-1.00031-5
  • "This is also consistent with the notions of 'bounded rationality' (Gigerenzer & Goldstein, 1996) or 'intellectual outsourcing' (Appiah, 2005), which suggest that in some circumstances it can be rational not to spend valuable time making a personal decision, but rather to delegate the process by following the advice of a trusted source. Fuzzy Trace Theory also suggests that this is the way people often make decisions, using categorical 'gist' information to inform choices, rather than more detailed 'verbatim' information (Reyna, 2008)."
    ABSTRACT: The poor outcomes for cancers diagnosed at an advanced stage have been the driver behind research into techniques to detect disease before symptoms are manifest. For cervical and colorectal cancer, detection and treatment of "precancers" can prevent the development of cancer, a form of primary prevention. For other cancers (breast, prostate, lung, and ovarian), screening is a form of secondary prevention, aiming to improve outcomes through earlier diagnosis. International and national expert organizations regularly assess the balance of benefits and harms of screening technologies, issuing clinical guidelines for population-wide implementation. Psychological research has made important contributions to this process, assessing the psychological costs and benefits of possible screening outcomes (e.g., the impact of false-positive results) and public tolerance of overdiagnosis. Cervical, colorectal, and breast screening are currently recommended, and prostate, lung, and ovarian screening are under active review. Once technologies and guidelines are in place, delivery of screening is implemented according to the health care system of the country, with invitation systems and provider recommendations playing a key role. Behavioral scientists can then investigate how individuals make screening decisions, assessing the impact of knowledge, perceived cancer risk, worry, and normative beliefs about screening, and this information can be used to develop strategies to promote screening uptake. This article describes current cancer screening options, discusses behavioral research designed to reduce underscreening and minimize inequalities, and considers the issues that are being raised by informed decision making and the development of risk-stratified approaches to screening.
    American Psychologist 02/2015; 70(2):119-133. DOI: 10.1037/a0037357 (Impact Factor: 6.87)

