Review Article
Human Factors, 2024, Vol. 0(0) 1–21
© 2024 Human Factors and Ergonomics Society
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/00187208241292897
journals.sagepub.com/home/hfs
Debiasing Judgements Using a Distributed Cognition Approach: A Scoping Review of Technological Strategies

Harini Dharanikota (1), Emma Howie (1,2), Lorraine Hope (3), Stephen J. Wigmore (1,2), Richard J. E. Skipworth (1,2), and Steven Yule (1,2)
Abstract
Objective: To review and synthesise research on technological debiasing strategies across domains,
present a novel distributed cognition-based classification system, and discuss theoretical implications for
the field.
Background: Distributed cognition theory is valuable for understanding and mitigating cognitive biases in
high-stakes settings where sensemaking and problem-solving are contingent upon information repre-
sentations and flows in the decision environment. Shifting the focus of debiasing from individuals to
systems, technological debiasing strategies involve designing system components to minimise the negative
impacts of cognitive bias on performance. To integrate these strategies into real-world practices ef-
fectively, it is imperative to clarify the current state of evidence and types of strategies utilised.
Methods: We conducted systematic searches across six databases. Following screening and data charting, identified strategies were classified, based on distributed cognition principles, into (i) group composition and structure, (ii) information design and (iii) procedural debiasing; cognitive biases were classified into eight categories.
Results: Eighty articles met the inclusion criteria, addressing 100 debiasing investigations and 91 cognitive
biases. A majority (80%) of the identified debiasing strategies were reportedly effective, whereas fourteen
were ineffective and six were partially effective. Information design strategies were studied most, followed
by procedural debiasing, and group structure and composition. Gaps and directions for future work are
discussed.
Conclusion: Through the lens of distributed cognition theory, technological debiasing represents a
reconceptualisation of cognitive bias mitigation, showing promise for real-world application.
Application: The study results and debiasing classification presented can inform the design of high-stakes
work systems to support cognition and minimise judgement errors.
(1) The University of Edinburgh, UK
(2) Royal Infirmary of Edinburgh, UK
(3) University of Portsmouth, UK
Corresponding Author:
Harini Dharanikota, Centre for Medical Informatics, Usher Building, The University of Edinburgh, 3 Little France Road, Edinburgh,
Scotland EH16 4UX, UK; e-mail: l.h.dharanikota@sms.ed.ac.uk
Keywords
Cognitive bias, decision making, cognitive engineering, reasoning, performance
Introduction
Expertise and training account for only part of sustained optimal performance in many real-world settings.
High-stakes decision-making literature has long
demonstrated the impact of the spatial, temporal,
social and information context on outcomes. In es-
sence, cognitive processes are distributed across the
environment within which they occur (Hutchins,
1995). These processes can be degraded by otherwise adaptive cognitive biases due to inadequate information representation across different components of the system. Cognition has traditionally
been conceptualised in terms of information pro-
cessing within the mind, placing the burden of errors
solely on the individual. Fischhoff (1982) first
classified debiasing efforts based on whether the
cognitive bias originates from “the judge, the task, or
some mismatch between the two”(p. 424). Later
works, including Soll et al. (2015) and Larrick
(2004), have discussed the importance of broaden-
ing the scope of debiasing to the environment.
In this paper, we expand on reorienting cog-
nition from the individual to the system and its
relevance to understanding and mitigating biases
in reasoning (i.e., debiasing) in practice. To
achieve this, we draw on distributed cognition
theory, defining and reviewing technological de-
biasing strategies from the literature, classifying
them along the principles of the distributed cog-
nition framework, and discussing implications for
enhancing human factors and performance in real-
world settings.
Cognitive biases represent cognitive processes
that influence the application of specific reasoning rules to information when forming judgements or
making decisions. First introduced by Tversky and
Kahneman (1974), cognitive biases are defined as
systematic patterns of deviation from a pre-
determined rational ideal of reasoning. While eco-
logically adaptive (Haselton et al., 2015), they have
the potential to lead to erroneous judgements and
negatively consequential decisions. Cognitive biases
are well-documented in a variety of critical perfor-
mance contexts including surgery (Armstrong et al.,
2023), crisis management (Comes, 2016) and
aviation (Murata et al., 2015). These situations are
often characterised by high cognitive load, uncer-
tainty, task saturation and environmental disruptors
which may exacerbate the prevalence and impact of
cognitive biases.
Debiasing for Human Factors
Lilienfeld et al. (2009) wrote of debiasing as en-
compassing “not only techniques that eliminate
biases but also those that diminish their intensity or
frequency” (p. 391). Although not yet formally de-
fined in the context of human factors, debiasing is
understood as efforts or techniques to eliminate the
presence of bias or the undesirable consequences of
biases in judgement and decision-making tasks. The
instrumental view of rationality is the most appli-
cable to human factors; the goal is to improve human
and system capacity to maximise utility, accuracy
and goal-achievement, acknowledging that biases
may be adaptive in achieving these ends. Experts
may utilise simplifying heuristics to guide their
judgement, but their use alone does not define expert
performance. Optimal outcomes are achieved when
there is coherence between the heuristics applied and
the conditions of performance (Kahneman & Klein,
2009). The goal of debiasing should be to reduce
errors and inefficiencies and promote the achieve-
ment of optimal outcomes given the opportunities
and constraints of the decision environment and
human capabilities. This means that the decision-making
environment must be aligned to fit adaptive human
reasoning, as opposed to attempting to debias the
human judge to completely eliminate any biased
tendencies.
Widely studied, cognitive and motivational strategies (Arkes, 1991; Larrick, 2004), which aim to “modify the decision-maker” (Soll et al., 2015, p. 926), maintain the individual at the core of biased decision-making. Cognitive strategies attempt to modify individuals’ reasoning through instruction (e.g., Fahsing et al., 2023) or training (e.g., Dunbar et al., 2014), whereas motivational strategies utilise incentives and accountability (e.g., Finkelstein et al., 2022). However, from a human factors perspective,
there are limitations to the real-world application of
such approaches.
Cognitive and motivational strategies assume
that humans have unlimited cognitive resources at
their disposal to support analytic and rational
reasoning. They further require that humans can
recognise when they are at risk of being biased.
Both cognitive and motivational strategies largely
rely on humans’ conscious will and ability. The efficacy of these strategies is further limited by the “bias blind spot,” our tendency to see and exaggerate the impact of biases in others while denying the existence of our own (Pronin, 2009).
Optimal debiasing strategies should instead
be context-specific and minimise reliance on
individuals’ cognitive resources in order to at-
tenuate the trade-off between cognitive effort
and accuracy (Larrick, 2004). In many critical
contexts, it is unlikely that individuals can al-
ways allocate the motivation and cognitive re-
sources to engage in accurate and elaborate
reasoning to reach optimal decisions. In order to
model performance environments where human
sensemaking and environmental factors function
cohesively as a unified system, further consid-
eration of the broader contextual factors that
influence judgement in critical situations is
necessary. Distributed cognition provides a
framework to reorient debiasing applications to
high-stakes contexts, taking into consideration
environmental constraints and opportunities.
Distributed Cognition and Technological Debiasing
The distributed cognition approach views cogni-
tion not as confined within an individual, but as an
emergent property of the system within which the
decisional task is performed. This contrasts with
the classical approach to cognition that places the
emphasis solely on the individual’s own capacity.
Hutchins (1995) proposed the view of cognitive
activity as organised across (i) different members
of teams (i.e., social environment), (ii) the tools
and artefacts (i.e., material environment) and (iii)
time (i.e., flow and transformation of information
over time). In this view, human sensemaking is one
part of the whole system.
In the distributed cognition approach, posi-
tive outcomes are attributed to the smooth and
effective coordination between human and
nonhuman contributors to the system, and not
merely to human skill. In this vein, biased de-
cision processes and outcomes can be considered
a product of ineffective information represen-
tation, transformation and assimilation through
different components of the system. As such,
debiasing strategies in high-performance con-
texts must address all these system-level factors,
beyond individual human cognition.
Soll et al. (2015) referred to these as “modify the environment” strategies. Larrick (2004) conceptualised them as “technological debiasing strategies.” Building on existing work and incorporating distributed cognition theory, we define technological debiasing strategies as the use or
modification of systems, processes, artefacts and
agents, both human and nonhuman, that are ex-
ternal to individual decision-maker(s) for the
purpose of minimising biased reasoning. In this
context, technology is defined broadly as any
process, artefact, tool or component of the decision
environment (including humans), that is external
to the individual mind. It includes but is not limited
to digital technology.
Further extending this notion, technological
debiasing strategies can be broadly classified into
three categories: (i) Information design, (ii) Pro-
cedural debiasing and (iii) Group composition and
structure, based on the three processes of dis-
tributed cognition introduced by Hutchins (2000)
and further elaborated on by Hollan et al. (2000).
Information design. Information design strategies
pertain to how we make sense of information
structures and stimuli from the decision environ-
ment. Cognitive processes “involve coordination
between internal and external (material or envi-
ronmental) structures” (Hollan et al., 2000, p. 175).
Externally presented information interacts with
humans’ internal cognitive structures, resulting in
sensemaking. However, biases may occur when
information stimuli do not align with internal
representations. Addressing the flow or interpre-
tation of information from information sources in
the environment into human cognitive processing,
information design debiasing strategies modify the
organisation, structure and nature of the infor-
mation available or selectively present specific
types of information (e.g., Cook & Smallman,
2008) to ensure accurate mental representations of
information used in decision-making.
Procedural debiasing. The term procedural debiasing
was first introduced by Lopes (1987), whose work
aimed to modify the cognitive procedures within the
judge (or individual decision-maker) to debias
judgements. Here, however, procedural debiasing
pertains to strategies that improve the nature of tasks
in order to fit human cognition for optimal judgement
or decision outcomes. According to distributed
cognition theory, cognitive processes are “distributed
through time such that the outcomes of earlier events
can alter later events” (Hollan et al., 2000, p. 175).
Biases can hence occur when the sequence of information-gathering or decision subtasks is organised in ways that promote inaccurate weighting or processing of information, depending on when in the decision workflow information is presented or processed. Further, they can potentially accumulate
throughout the system’s cognitive workflow. Ad-
dressing the temporally interdependent nature of
decision tasks, procedural debiasing strategies in-
clude modification of task components, tools and
sequences of subtasks (e.g., Bhandari et al., 2008).
Group composition and structure. Group composi-
tion and structure modifying strategies include the
use of groups to debias judgements, including
replacing individuals with groups (e.g., O’Leary,
2011) but also modifying group structures to de-
bias group decisions (e.g., Meissner & Wulf,
2017). This category corresponds to the idea
that cognitive processes are “distributed across
members of a social group” (p. 175). Biases can
occur at the group-level, due to interactional
properties of social cognition that may facilitate or
hinder different types of reasoning. By modifying the social antecedents of cognitive biases in collective decision-making, these strategies address the information exchange between team members that arises from various collective and individual attributes.
The objective of this scoping review is to
characterise evidence on technological debiasing
strategies across domains according to the three
distributed cognition principles (Hollan et al.,
2000; Hutchins, 1995) for human factors and
identify considerations for determining their ap-
plicability to real-world settings.
Methods
The scoping review method is best suited to the
nature of the research that constitutes this inquiry,
since the literature is broad, fragmented across dis-
ciplines and uses varied terminology and con-
ceptualisations of bias and debiasing. We synthesised
the literature and identified research gaps, following
Arksey and O’Malley’s (2005) recommendations for
scoping reviews, as modified by Levac et al. (2010), which comprise the following five stages:
Identifying the Research Question
Following the PCC (Participant, Concept, Con-
text) framework recommended by the Joanna
Briggs Institute (JBI; Pollock et al., 2023), the
primary research question guiding the scoping
review is “What types of technological debiasing
strategies have been employed to debias human
judgement and decision-making across domains?”
Identifying Relevant Studies
Systematic searches were conducted across six da-
tabases: IEEE Xplore, ACM Digital Library, Psy-
cINFO, Emerald, Web of Science and PubMed.
Search strings captured three elements of the research
question: (i) Adult human judgement and decision-
making (Participant), (ii) Use of technological
strategies (Concept) and (iii) Debiasing or cognitive
bias mitigation (Context). The search strings were
adapted to each database. A pilot search was first
conducted followed by a narrow search using key-
words that retrieved the most relevant articles. Search
terms utilised are presented in Table 1. Hand searches
of the reference lists of relevant papers were carried
out. Articles retrieved by the searches were exported
to Covidence software (Veritas Health Innovation,
2023) to manage study selection.
Study Selection
Studies were selected if they met the following
inclusion criteria: published peer-reviewed, ex-
perimental studies in English that explicitly aimed
to minimise cognitive bias in human decision-
making or judgements using any technological
strategy and addressed at least one type of cog-
nitive bias. This included studies across all
industry contexts, disciplines and healthy adult
populations. Studies were excluded if they (i) addressed debiasing for behaviour change or learning, (ii) employed cognitive or motivational strategies, (iii) involved nonadult populations, (iv) involved populations with mental health conditions, (v) were not peer-reviewed publications or (vi) had no full text available. There were no limits on the
year of publication; all relevant studies published
before September 2023 were included.
Charting the Data
Information extracted from the identified articles
included authorship, year of publication, sample size
and type, experimental design, level of analysis
(individual vs. group), industry domain, biases as
named in the studies, cognitive bias category, de-
biasing strategies tested, debiasing category, expo-
sure variables, outcome variables and effectiveness
of the debiasing strategy (effective, ineffective or
partially effective) as reported by the studies.
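For illustration, one hypothetical way to represent the charted fields for analysis is sketched below; the field names paraphrase the list above and are not the authors' actual extraction instrument.

```python
# Hypothetical record structure mirroring the charted fields listed above;
# field names are illustrative, not the authors' actual codebook.
from dataclasses import dataclass
from typing import Literal

@dataclass
class DebiasingRecord:
    authors: str
    year: int
    sample_size: int
    sample_type: str                      # e.g., "students", "physicians"
    level_of_analysis: Literal["individual", "group"]
    domain: str                           # e.g., "healthcare", "legal"
    bias_named: str                       # bias as named in the study
    bias_category: str                    # one of Fleischmann et al.'s eight
    strategy: str                         # debiasing strategy tested
    strategy_category: Literal["information design",
                               "procedural debiasing",
                               "group composition and structure"]
    effectiveness: Literal["effective", "ineffective", "partially effective"]
```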
Collating and Summarising Data
To categorise the cognitive biases studied,
Fleischmann et al.’s (2014) taxonomy of eight cognitive biases in the information systems literature was used, as it was derived from the analysis of literature in an applied performance domain. The
Table 1. Key Search Terms Utilised to Identify Articles Based on the PCC Framework.

Participant (adult decision-makers) AND Concept (the use of technology in debiasing strategies) AND Context (debiasing and cognitive bias mitigation); terms within each column were combined with OR.

Participant: Human; Cognitive error; Human error; Decision-making; Judgement.

Concept: “Decision support”; “Decision aid*”; Computer; “DSS”; “GDSS”; Tool; Nudg*; Technolog*; Checklist; Visuali?*; “Visual analytics”; “Choice architecture”; “Information presentation”; Graph*; Team; Group; “Cognitive diversity”; Workflow.

Context: Debias*; De-bias*; Anchoring; Availability; Base-rate; Confirmation; Framing; Representative*; Disposition; “illusion of control”; Omission; Commission; Satisf*; Hindsight; Overconfidence; Bandwagon; Ambiguity; Ascertainment; Bias; Fallacy; Heuristic; Minimi?*; Reduce*; Mitigat*; Alleviat*; Address*; Overcom*; Remediat*; Counter*.
taxonomy comprises (i) perception biases, (ii)
pattern-recognition biases, (iii) memory biases,
(iv) decision biases, (v) action-orientated biases,
(vi) stability biases, (vii) social biases and (viii)
interest biases. The debiasing strategy categories
described in the included studies were derived
based on distributed cognition theory principles
and classified into the three aforementioned
categories: (i) information design, (ii) procedural
debiasing and (iii) group composition and
structure. Descriptive statistics were used to
summarise the data. Analysed data are presented
in tabular and graphical formats as appropriate to
aid synthesis.
Results
Keyword and hand searches resulted in a total of
2667 records. After the removal of 305 duplicates,
two independent reviewers screened 2362 ab-
stracts. Of the 184 full texts screened, 80 articles
met the inclusion criteria and were included for
data extraction. See Figure 1 for a flow chart of
screening methods. Identified studies reported
100 debiasing strategies, addressing 91 cognitive
biases. Of these, 71 studies (88.8%) investigated
one bias, eight studies (10%) investigated two
biases, and one study addressed three biases within
one debiasing intervention. Fifty-eight studies
Figure 1. Methods flow chart based on Preferred Reporting Items for Systematic reviews and Meta-Analyses
extension for Scoping Reviews (PRISMA-ScR; Tricco et al., 2018).
(72.5%) used a single debiasing strategy and 20
(25%) employed more than one strategy to debias
one or more biases.
Domains of Included Studies
The most represented domains were healthcare
(n= 26, 32.5%), management (n= 13, 16.3%) and
legal or forensic decision-making (n= 9, 11.3%).
Three (3.8%) studies examined transport-related judgements. Other domains studied included finance, construction, software development, intelligence analysis, political and sport-related judgements. Seventeen studies (21.3%)
did not study debiasing within a specialised in-
dustry or domain, instead recruiting lay partici-
pants to lab-based decision-making tasks and
experimental paradigms with no specific domain-
focus.
Study Sample Characteristics
Sample sizes varied widely, ranging from 27 to
4101 participants. Thirteen studies recruited par-
ticipants from the relevant judgement context, that
is, samples representative of those making the
decisions in real-world settings, such as navy in-
telligence analysts (Cook & Smallman, 2008),
physicians (Arkes et al., 2022) and crime scene
investigators (de Gruijter et al., 2017). Thirty-three
studies included student samples. Of these,
28 studied student-only samples. Three involved
student samples in educational training relevant to
the industry or context of application, such as law students (Lidén et al., 2019; Schmittata et al., 2022) and medical students (Almashat et al., 2008). Four studies included a mixture of student and professional samples. These included judges and law students (Lidén et al., 2019) and managers and management students (Hodgkinson et al., 1999; Ni et al., 2019; Pitaloka et al., 2019).
Twenty studies involved unspecified samples
drawn from the general population.
Biases Addressed
Of the 91 cognitive biases addressed, pattern-
recognition biases were the most studied cate-
gory (n= 27, 29.8%), followed by decision
biases (n= 17, 18.7%) and perception biases
(n= 17, 18.7%). Of the specific biases addressed, confirmation bias (n= 12, 13.2%) was the most common, followed by the framing effect (n= 7, 7.7%), then the anchoring effect (n= 6, 6.6%). There were no studies on interest biases,
defined as “biases that lead to suboptimal
evaluations and/or decisions owing to an indi-
vidual’s preferences, ideas, or sympathy for
other people or arguments” (Fleischmann et al., 2014, p. 5). Table 2 describes the cognitive
biases addressed in the identified studies.
Debiasing Strategies Investigated
Of the 100 debiasing strategies, information design
(i.e., presentation, restructuring or selective pre-
sentation of information) accounted for 59%, followed by procedural debiasing (34%) and group composition and structure (7%). Several studies used a single debiasing
strategy to address multiple biases, and some used
multiple strategies to address a single bias. There
were 114 individual debiasing-bias investigation
pairs (Figure 2). See Appendix 1 for an overview of
included papers classified by debiasing strategy
and cognitive bias type.
Reported Effectiveness of Technological
Debiasing Strategies
Of the 100 debiasing strategies, study authors
reported 80 to be effective in minimising bias.
Strategies were considered effective using a
single criterion: the identification of a statisti-
cally significant difference between experi-
mental groups, wherein one group’s responses
were significantly less prone to cognitive bias
than the comparator (Figure 3). Fourteen strat-
egies were reported as ineffective. Ineffective
strategies were those that either found no sig-
nificant difference between groups (n= 12/14),
or a statistically significant increase in biased
reasoning (n= 2/14). Six strategies were par-
tially effective, that is, either effective under specific conditions (Abhyankar et al., 2014; Ni et al., 2019; Rieger, 2012; Sezer et al., 2016; Shaikh, 2022; van Dongen et al., 2005) or effective in mitigating one of the biases studied but not another (Bhasker & Kumaraswamy, 1991).
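As a worked illustration of this single criterion, the sketch below classifies a hypothetical strategy from two groups’ bias scores, assuming a two-sided test at alpha = .05. The “partially effective” label, which in the reviewed studies arose from condition- or bias-specific effects, is deliberately not captured by this simple rule.

```python
# Illustrative application of the review's effectiveness criterion, assuming
# each study compared bias scores between a debiased and a control group.
from scipy import stats

def classify_strategy(debiased_scores, control_scores, alpha=0.05):
    """Label a strategy by the significance and direction of the group difference."""
    t, p = stats.ttest_ind(debiased_scores, control_scores)
    if p >= alpha:
        return "ineffective (no significant difference)"
    # Lower bias scores in the debiased group indicate reduced bias
    return "effective" if t < 0 else "ineffective (bias increased)"

print(classify_strategy([2, 3, 2, 1, 2, 3], [5, 6, 4, 5, 6, 5]))  # effective
```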
Table 2. Overview of Biases Addressed by Included Studies, Categorised by Bias Type (n= 91).
Bias category Definitions Biases addressed Studies
Pattern-
recognition
biases
(n= 27)
“Pattern recognition biases
occur when, in the evaluation
of alternative patterns of
thinking, barely known
information or unknown
information is discarded in
favour of familiar patterns of
thinking or information that
currently happen to be
present in the mind.”
Confirmation bias (n=12) Chuanromanee and Metoyer (2022), Cook and Smallman (2008), de Gruijter et al. (2017), Hayes et al. (2016), Hernandez and Preston (2013), Huang et al. (2012), Kayhan (2013), Kostopoulou et al. (2021), Lidén et al. (2019), Mojzisch et al. (2008), van Dongen et al. (2005), van Swol et al. (2023)
Availability bias (n=4) Shaikh (2022), Benbasat and Lim (2000), Liang et al. (2022), Küper et al. (2022)
Contextual bias (n=4) MacLean and Read (2019), Quigley-McBride (2020), Quigley-McBride and Wells (2018), Schmittata et al. (2022)
Familiarity bias (n=1) Marett and Adams (2006)
Early-stage neglect of alternative (n=1) Hayes et al. (2016)
Causality bias (n=1) Díaz-Lago and Matute (2019)
Order effects (n=5) Bansback et al. (2014), Lau and Coiera (2009), Ubel et al. (2010), van Dongen et al. (2005), Zikmund-Fisher et al. (2008)
Perception
biases
(n= 17)
“Perception biases affect the
processing of new information
that is received by an
individual. A potential
subsequent decision and the
resulting behaviour are flawed,
when based on this biased
information.”
Framing effect (n=7) Abhyankar et al. (2014), Almashat et al. (2008), Bernstein et al. (1999), Bhandari et al. (2008), Garcia-Retamero and Galesic (2010a), Garcia-Retamero and Dhami (2013), Hodgkinson et al. (1999)
Decoy (or attraction) effect (n=3) Jeong et al. (2021), Teppan and Felfernig (2009), Dimara et al. (2018)
Within-the-bar bias (n=1) Okan et al. (2018)
Temporal inconsistency bias (n=1) Zikmund-Fisher et al. (2007)
Equalising bias (n=1) Rezaei et al. (2022)
Representativeness bias (n=3) Bhandari et al. (2008), Lim and Benbasat (1997), Küper et al. (2022)
Ratio bias (n=1) Zikmund-Fisher et al. (2008)
Decision
biases (n=
17)
“Decision biases occur directly
during the actual process of
decision-making and diminish
the quality of actual as well as
future decision outcomes.”
Conjunction fallacy (n=4) Bhasker and Kumaraswamy (1991), O’Leary (2011), Morier and Borgida (1984), Rieger (2012), Arkes et al. (2022)
Disposition effect (n=3) Quigley-McBride and Wells (2018), Frydman and Rangel (2014), Król and Król (2019)
Base-rate fallacy (n=5) Bhasker and Kumaraswamy (1991), Roy and Lerch (1996), Tsai et al. (2011), Greening et al. (2005), Chandler et al. (1999)
Diversification heuristic (n=1) Bhandari et al. (2008)
Denominator neglect (n=2) Garcia-Retamero and Galesic (2009), Garcia-Retamero et al. (2010)
Anecdotal reasoning (n=1) Fagerlin et al. (2005)
Illusion of control (n=1) Meissner and Wulf (2017)
Action-
orientated
biases
(n= 11)
“Action-oriented biases lead to
premature decisions made
without considering actually
relevant information or
alternative courses of action.”
Overconfidence bias (n=2) Meissner et al. (2018), Ferretti et al. (2023)
Time saving bias (n=4) Eriksson et al. (2015), Fink and Pinchovski (2020), Gamliel and Pe’er (2021), Svenson et al. (2014)
Outcome bias (n=1) Sezer et al. (2016)
Optimistic bias (n=3) Lipkus and Klein (2006), Weinstein (1983), Brown and Imber (2003)
Comparative optimism (n=1) Rose (2012)
Stability biases
(n= 12)
“Stability biases make individuals
stick with established or
familiar decisions, even though
alternative information,
arguments, or conditions exist
that are objectively superior.”
Anchoring effect (n=6) Delgado et al. (2018), Lau and Coiera (2009), Pitaloka et al. (2019), Ni et al. (2019), Rastogi et al. (2022), Tang et al. (2023)
Sunk-cost fallacy (n=1) Hamzagic et al. (2021)
Ambiguity aversion (n=1) Bhandari et al. (2008)
Loss aversion (n=2) Delgado et al. (2018), Zhang et al. (2015)
Conservatism bias (n=1) Zhang et al. (2015)
End-anchoring (n=1) Duclos (2015)
Memory
biases (n=
3)
“Memory biases affect the
process of recalling
information that refers to the
past and thereby substantially
diminish the quality of this
information, which is later
used for decision-making.”
Relative encoding biases (n=1) Sharif and Oppenheimer (2021)
Hindsight bias (n=2) Smith and Greene (2005), Wu et al. (2012)
Social biases
(n=4)
Social biases “arise from attitudes
shaped by the individual’s
relationship to other people.”
Desirability effect (n=1) van Swol et al. (2023)
Ideological bias (n=1) Solomon and Hall (2023)
Fundamental attribution
error (n=1)
Holder and Xiong (2023)
Information pooling bias
(n=1)
Rajivan and Cooke (2018)
Levels of Analysis: Individual Versus Group
Nine studies (11.3%) examined cognitive biases in group judgements, though none of these were biases that occur uniquely in groups, such as groupthink—groups’ tendency to maintain unanimity in their beliefs or ideas even when faulty (Janis, 1971)—or the bandwagon effect—individuals’ tendency to think and behave as others do and “join the crowd” (Leibenstein, 1950, p. 184). Biases addressed at the group level were ideological bias, illusion of control bias, overconfidence bias, conjunction fallacy, availability bias, confirmation bias, representativeness bias, hindsight bias and information pooling bias. One study (O’Leary, 2011) compared individuals’ and groups’ proneness to the
Figure 2. Frequency of debiasing strategies addressing each bias type (n= 114).
Figure 3. Frequency of debiasing investigations and their effectiveness as reported by the studies reviewed (n= 100).
conjunction fallacy, concluding that groups are less
prone to the bias than individuals.
Outcome Measures
A range of outcome measures were utilised in the
identified studies to operationalise biased and
unbiased judgement and decision-making. The
majority studied probability or likelihood estimates (e.g., Chandler et al., 1999; Küper et al., 2022; Okan et al., 2018), judgement or decision accuracy (e.g., Fink & Pinchovski, 2020; MacLean & Read, 2019; Tang et al., 2023), absolute and relative risk estimates (e.g., Garcia-Retamero & Galesic, 2010b; Lipkus & Klein, 2006; Ubel et al., 2010), choice or preference judgements (e.g., Abhyankar et al., 2014; Bernstein et al., 1999; Jeong et al., 2021) and information behaviour (e.g., Kostopoulou et al., 2021; Mojzisch et al., 2008). Others reported confidence in judgement or decision, and domain-specific measures such as guilt assessments (Lidén et al., 2019; Schmittata et al., 2022) and jury verdicts (Smith & Greene, 2005).
Discussion
Through systematically scoping the literature, we
identified 100 technological debiasing strategies within 80 studies tested across domains and organised them into three novel, theoretically derived categories based on the distributed cognition framework. Building on the
well-established idea of extending the boundaries
of debiasing techniques to the decision environ-
ment, we found considerable scope to advance
empirical work based on the contributions of the
present review, the avenues for which are dis-
cussed below alongside considerations for prac-
tical applications.
Information design strategies were the most
commonly deployed technological strategies, which
is unsurprising given that errors due to misrepre-
sentation of information, both internally and exter-
nally, are widely documented. Most of these
strategies were reported as being effective in (i)
substituting or complementing text with graphical
representations and visualisations, (ii) enhancing
salience of information presented (e.g., Frydman &
Rangel, 2014; Sharif & Oppenheimer, 2021), (iii)
modifying information framing (e.g., Garcia-
Retamero & Galesic, 2010a) or (iv) selective
provision of information (e.g., Schmittata et al.,
2022), including provision of information about others’ judgements and decision processes (e.g., Weinstein, 1983). Widely studied, these strate-
gies can aid the design of user interfaces, for
instance, through the manipulation of informa-
tion order and salience, or presentation of in-
formation in appropriate graphical formats.
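As a hypothetical instance of such a manipulation, in the spirit of (but not taken from) the framing and visual-format studies cited above, a risk message can be rendered in both frames with natural frequencies instead of a single negatively framed percentage:

```python
# Illustrative information-design manipulation: present both frames and a
# natural-frequency version of a risk message. Numbers are hypothetical.

def dual_frame(risk: float, denominator: int = 100) -> str:
    """Render a mortality risk with both frames and natural frequencies."""
    survive = round((1 - risk) * denominator)
    die = denominator - survive
    return (f"Of {denominator} patients, {survive} survive and {die} die "
            f"({(1 - risk):.0%} survival; {risk:.0%} mortality).")

print(dual_frame(0.10))
# Of 100 patients, 90 survive and 10 die (90% survival; 10% mortality).
```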
Procedural debiasing strategies identified
involved techniques that (i) modify task sequence
(Svenson et al., 2014), (ii) segment decisions tasks
(e.g., Arkes et al., 2022), a type of “planned in-
terruption”(Soll et al., 2015, p. 937), (iii) modify
time allocated to task (Rastogi et al., 2022), (iv)
modify the mode in which decision tasks are
performed (e.g., communication medium and
computationally-assisted; Benbasat & Lim, 2000)
and (v) involve simultaneous processing of dif-
ferent types of information (e.g., gaze overlay
support; e.g., Wu et al., 2012). Utilising procedural debiasing strategies, tools and systems that structure decision-making processes may be employed to minimise different biases, particularly in
complex multi-step decision tasks. Alongside recent advances in artificial augmented intelligence (Zhou et al., 2021) is the emergence of procedural debiasing strategies for situations where the boundaries of the decision problem and potential solutions are well-defined (Marlin, 2018). How-
ever, these strategies must be employed with
caution. Using algorithmic support in decision-
making may introduce a varied set of biases that
are either built into the algorithm or occur as a
result of humans’ interaction with AI and its model predictions (e.g., van Berkel et al., 2023; Kliegr et al., 2021).
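A minimal sketch of decision segmentation, loosely modelled on the sequential probability estimates examined by Arkes et al. (2022), is given below: rather than judging a whole sequence at once, each conditional step is elicited separately and the estimates combined, countering the tendency to overestimate conjunctive outcomes. The function and numbers are illustrative assumptions.

```python
# Sketch of segmenting a compound judgement into per-step estimates.
from math import prod

def sequence_probability(step_estimates):
    """Combine per-step conditional estimates P(step | earlier steps succeeded)."""
    return prod(step_estimates)

# Three treatment steps, each judged 80% likely to succeed given the last:
print(f"{sequence_probability([0.8, 0.8, 0.8]):.3f}")  # 0.512, not ~0.8
```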
Group composition and structure was the
least explored type of debiasing strategy in the
reviewed studies. That only seven studies inves-
tigated this approach signals a significant gap and
opportunity to empirically advance team design as
a debiasing tool. However, this also raises the
question of whether this type of strategy is as
effective in practice as it may appear in theory. The
paucity of studies identified may be due to the
unique challenges associated with studying group
cognition over individual cognition or a lack of
significant results, that is, there may be unpub-
lished articles that have attempted it but did not
find any debiasing effects. In this review, we
identified the following types of strategies under
this class, namely, (i) replacing individuals with
teams (O’Leary, 2011), (ii) modifying team di-
versity or heterogeneity (Meissner et al., 2018;
Meissner & Wulf, 2017; Mojzisch et al., 2008;
Solomon & Hall, 2023) and (iii) modifying team
cognition through group tenure (Meissner et al.,
2018). Thus far, evidence on reported effectiveness
of these strategies points towards the need for
teams to achieve a conscious balance of hetero-
geneity and homogeneity across variables for
optimal decision outcomes.
Hinsz (2015) proposed that teams are a type of
technology. Defining technology as “the specific
methods, materials and devices used to solve
practical problems,”he suggested that individual
decision-makers utilise team members as “cogni-
tive technology”(p. 219) to solve problems as they
extend the capacity of an individual’s cognition.
By viewing teams as technology, the strengths of
team cognition can be systematically leveraged,
and weaknesses minimised to enhance task per-
formance. This “teams-as-technology” perspective
aligns with a fundamental principle of the dis-
tributed cognition framework, that cognition oc-
curs in a system of interdependent individuals. By
integrating evidence on individual and team
cognition, team composition can be systematically
modified to fit the task. In practice, this may in-
volve assessing individual and group attributes to
form teams that can minimise bias-driven judge-
ment errors.
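As a hypothetical sketch of how composition might be systematically modified to fit the task, candidate teams could be screened on attribute diversity, for example with Blau’s index over categorical attributes. The attributes and any target balance are assumptions, not prescriptions from the reviewed studies.

```python
# Illustrative team-composition screen using Blau's diversity index.
from collections import Counter

def blau_index(values):
    """Blau's index: 1 minus the sum of squared category proportions."""
    n = len(values)
    return 1 - sum((c / n) ** 2 for c in Counter(values).values())

team = [{"specialty": "surgery", "seniority": "senior"},
        {"specialty": "anaesthesia", "seniority": "junior"},
        {"specialty": "surgery", "seniority": "junior"}]

for attr in ("specialty", "seniority"):
    print(attr, round(blau_index([m[attr] for m in team]), 2))
# specialty 0.44
# seniority 0.44
```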
Gaps and Future Research
The present review covered debiasing investiga-
tions across multiple domains including healthcare-
related, legal, organisational, and financial decision-
making. All but two studies (Kostopoulou et al., 2021; Solomon & Hall, 2023) were conducted in laboratory-based or otherwise controlled environments, with
a majority involving student samples, which limits
conclusions that can be drawn with respect to ap-
plicability in naturalistic performance settings. More
work establishing ecological validity through rigor-
ous quasi-experimental investigations or utilising
synthetic task environments to approximate the re-
alism of real-world conditions of performance is
required to establish efficacy of debiasing strategies
in human factors practice. Specific needs for the field involve (i) including more representative samples in future studies, (ii) inquiry into experts’ openness to technological debiasing strategies, (iii) measuring the efficacy of the strategies and (iv) evaluating the impact of strategies on cognitive load or time to complete tasks.
Further, a majority of the studies identified in
the present review addressed biases at the indi-
vidual level. Seven investigations aimed to debias
decision-making at the group level. There were no
studies examining biases that occur only at the
group-level. Given that most real-world high-
stakes decisions are made by teams, testing de-
biasing strategies at the group-level is critical for
advancing application of debiasing in human
factors.
The digital and computational nature of many
contemporary real-world tasks means that de-
biasing strategies must evolve rapidly to keep
pace. At the intersection of information design and
procedural debiasing strategies are avenues for
novel research advancing real-time visual feed-
back (Shaikh, 2022), including customizable and
interactive visualisations (Tsai et al., 2011). The
feasibility of computationally aided detection of biases, based on objective behavioural interaction metrics from visual analytic platforms and tools, has already been demonstrated (Crowley et al., 2013; Wall et al., 2017). In addition to assessing
bias probability in real-time, utilising these sys-
tems can enable feedback and customisability of
digital interfaces to support human cognition.
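A minimal sketch of such interaction-based detection, in the spirit of (but not reproducing) the bias metrics of Wall et al. (2017), is shown below: possible confirmation bias is flagged when interactions concentrate on hypothesis-consistent evidence well beyond its share of the available items. The log format and threshold are assumptions.

```python
# Hypothetical confirmation-bias signal from an interaction log.

def confirmation_signal(interactions, consistent_ids, available_ids,
                        threshold=0.2):
    """Return (observed share, expected share, flag) for consistent items."""
    observed = sum(i in consistent_ids for i in interactions) / len(interactions)
    expected = len(consistent_ids) / len(available_ids)
    return observed, expected, (observed - expected) > threshold

obs, exp, flagged = confirmation_signal(
    interactions=["e1", "e1", "e2", "e1", "e1"],  # evidence items inspected
    consistent_ids={"e1"},                        # supports current hypothesis
    available_ids={"e1", "e2", "e3", "e4"},
)
print(obs, exp, flagged)  # 0.8 0.25 True
```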
An important distinction to note is between
mitigating cognitive bias and mitigating the neg-
ative effects of cognitive bias. Not all biased
decision-making is undesirable or inaccurate.
Specific biases in certain types of decision-making
can also lead to ideal outcomes, while saving time
and cognitive resources. This means that some-
times, debiasing can look like “rebiasing,”that is,
inducing one bias to offset the effects of another
(Larrick, 2004), or leveraging an existing adaptive
heuristic to guide the decision-making towards the
optimal outcome. For example, manipulating the
order of information presented to allow decision-
makers to assign greater weight to the initial chunk
of information (leveraging the anchoring bias), may
be helpful to offset the confirmation bias when
either the initial information contains evidence that
disconfirms prior beliefs or is objectively the most
12 Human Factors 0(0)
important factor in each decision situation. Re-
biasing involves making trade-offs between the
risks associated with either bias, and prioritising the
one with most upside or least downside. As such,
this requires robust experimental testing before
implementing in human factors practice for real-
world consequential decisions.
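To make the ordering example concrete, the sketch below reorders an evidence list so that items disconfirming the prior belief lead the sequence, leveraging primacy and anchoring against confirmation bias. The evidence encoding is hypothetical and, as noted above, any such trade-off requires robust testing before use.

```python
# Illustrative rebiasing via information order: disconfirming items first.

def disconfirming_first(evidence):
    """Sort evidence so items contradicting the prior lead the sequence."""
    return sorted(evidence, key=lambda e: e["supports_prior"])  # False sorts first

evidence = [
    {"id": "A", "supports_prior": True},
    {"id": "B", "supports_prior": False},
    {"id": "C", "supports_prior": True},
]
print([e["id"] for e in disconfirming_first(evidence)])  # ['B', 'A', 'C']
```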
Practical Implications
To improve judgement and decision-making
(JDM) performance, one may (i) leverage biased
tendencies, (ii) minimise impact of bias on out-
comes or (iii) eradicate impact of bias on out-
comes. With respect to their utilisation outside of
experimental settings and in practice, we draw on
the extant literature and Soll et al.’s (2015) recommendations for choosing debiasing strategies to
propose a set of criteria to be considered when
assessing applicability of a technological debias-
ing strategy for a given decision situation. These
include (i) risks associated with bias, (ii) nature
and sources of bias, (iii) judgement or decision task
properties, (iv) properties of the decision envi-
ronment, (v) team properties (for team-based
tasks) and (vi) individual psychological and
physiological factors, all described below.
Risks Associated. Considerations for the risks associated with bias involve establishing the (i) potential for harm
(What is the potential for harm if a cognitive bias is
left unchecked?) and (ii) margin of error (What is
the margin of error that defines the threshold for
unacceptable risk?).
Nature and Sources of Bias. To specify the target of
debiasing strategies and assess the source or nature
of cognitive bias, several classification systems
including Fleischmann et al. (2014) have been
suggested in literature. Another notable classifi-
cation is Arkes’(1991) which categorises biases
into three types: association-based (due to reliance
on accessibility of information in memory),
strategy-based (due to use of suboptimal reasoning
strategies or misapplication of decision rules) and
psychophysically-based (due to improper trans-
lation or accessing of information stimuli in the
environment). The source of bias can determine
which features of the decision environment can be
modified to minimise its undesired effects.
Judgement and Decision-Making (JDM) Task
Properties. Task properties relevant to debiasing
include (i) task type (What type of decision task is
being performed? E.g., probability or likelihood
assessment, hypothesis evaluation, choice and risk
estimation), (ii) task frequency (How frequently is
this task performed? Is the task performed non-
routinely or regularly and repetitively?), (iii) task
complexity (How complex is the task, in terms of the number of distinct information items and acts involved, the degree of item integration required to perform it and how variable the task components and their relationships are?) (Wood, 1986), (iv) value
criteria (What is considered an optimal outcome?
How is the success or failure of a decision evalu-
ated?) and (v) time (Under what time constraints is
the task being performed? Is the time allocated to a
task fixed or flexible?).
Properties of the Decision Environment. The prop-
erties of the decision environment include (i) role
(Who is involved in performing the JDM task?), (ii)
information needs (What information is required
and utilised to perform the task?), (iii) cognitive
artefacts (What artefacts are currently utilised to
perform the task?) and (iv) information flow (How
does information currently flow, and how should
information flow in the system from one element to
another? E.g., information flow direction, trans-
formation, facilitators and barriers).
Team Properties (For Team-Based Tasks). For team-
based tasks, team properties to be considered in-
clude (i) role distribution (What are the established
roles and responsibilities in the team? How rigid or
fluid are they?), (ii) cognitive diversity and
compatibility (Across what characteristics should
teams be homogeneous and heterogeneous to
ensure adequate information pool and compre-
hensive information elaboration and synthesis?)
(Mello & Rentsch, 2015;Wang et al., 2024), (iii)
team dynamics (What team dynamics are facili-
tating or hindering effective problem-solving?
E.g., cohesiveness, hierarchical rigidity and team
familiarity) and (iv) integration of any nonhuman
teammates (How can autonomous or nonautono-
mous agents such as artificial intelligence over-
come or aid experts in overcoming the limitations
of human cognition to enhance performance?)
(Vold, 2024).
Psychological and Physiological Factors. Psychological
and physiological considerations for debiasing in-
clude (i) cognitive load (How much intrinsic and
extrinsic cognitive load are individuals under? Would
the debiasing strategy reduce or add to the cognitive
load required to perform tasks?), (ii) fatigue (What
level of fatigue are individuals experiencing? For
instance, one may consider prioritising minimising
errors due to bias under conditions where fatigue is
likely such as decisions made at the end of a shift
during a handover in a hospital environment.) and
(iii) individual differences (How do individuals responsible for the task fare on individual difference
attributes such as personality, need for cognition and
working memory capacity?).
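The six criteria can also be operationalised as a simple checklist; the sketch below is one hypothetical encoding, with prompts abridged from the text above.

```python
# Hypothetical checklist encoding of the six applicability criteria.
CRITERIA = {
    "risk": ["potential for harm if the bias is left unchecked",
             "margin of error defining unacceptable risk"],
    "bias": ["nature and source of the bias (e.g., Arkes, 1991 categories)"],
    "task": ["task type", "frequency", "complexity", "value criteria", "time"],
    "environment": ["roles", "information needs", "cognitive artefacts",
                    "information flow"],
    "team": ["role distribution", "cognitive diversity and compatibility",
             "team dynamics", "nonhuman teammates"],
    "individual": ["cognitive load", "fatigue", "individual differences"],
}

def unassessed(responses):
    """List criterion prompts not yet answered for a candidate strategy."""
    return [q for qs in CRITERIA.values() for q in qs if q not in responses]

print(len(unassessed({})))  # 19 prompts to work through
```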
Limitations of the Review
Whilst the results of this scoping review are en-
couraging and point to a growing body of literature
on debiasing using distributed cognition, our
findings are to be interpreted in light of several
limitations. First, given the fragmented nature of
research in the area and the use of a wide range of
terms to describe the same phenomena, it is pos-
sible that our review may have excluded some
relevant studies. The lack of consistency in ter-
minology, definitions and conceptualisations of
bias or debiasing also limits comparability.
Certain ambiguities are also apparent; some de-
biasing strategies can fall into more than one
category since distributed cognition is a complex
process comprising interwoven subprocesses. For
instance, debiasing techniques involving “choice
architecture”can be considered procedural de-
biasing or information design. While the debiasing
categorisation presented here is not absolute, this
scoping review lays the groundwork for advancing
debiasing processes and interventions upon which
future studies may build. Further, due to differing
perspectives on the nature and sources of biases,
the neat assignment of cognitive biases to cate-
gories has been an ongoing challenge in the literature.
Fleischmann et al.’s (2014) taxonomy offers a
categorisation relevant to human factors. However,
these bias categories are not definitive and should be
regarded as a starting point towards the develop-
ment of debiasing hypotheses in context.
Second, most of the studies addressed in this
review were reported to be effective in either
completely eliminating the impact of bias on the
decision outcome, reducing the impact of bias or
reducing the undesirable effects of bias on the
judgement or decision outcome. The strategies
were reported as effective or ineffective relative to
the conditions specified in the heterogeneous set of
experimental studies, limiting the comparison of
the effectiveness of each of the three technological
debiasing strategy types. Given this heterogeneity
in study domains, experimental manipulations,
and outcomes studied, a meta-analysis was outside
the scope of this scoping review (Lipsey & Wilson,
2001).
Third, to make the task of reviewing the liter-
ature manageable, our inclusion criteria specified
only studies that explicitly aimed at debiasing
decision-making and judgement. However, ex-
perimental studies on any cognitive bias where one
condition displayed effects of bias and the other
did not, may also be evidence of debiasing. De-
spite this limitation, our work has exemplified the
application of a framework of distributed cognition
principles along which practitioners and re-
searchers can assess current evidence on debiasing
(whether explicit or not) for future work.
Finally, ironically, even in the research litera-
ture on bias, there is potential for publication bias.
This is well-documented in academia and psy-
chological literature (e.g., Gaillard & Devine,
2022; Ioannidis et al., 2014; Maier et al., 2022;
Siegel et al., 2022) wherein evidence supporting
the effectiveness of a strategy may be more likely
to be published than evidence to the contrary (i.e., the file drawer effect). Given the expansive
nature of work in this area, it was not possible to
access unpublished data from all research groups
studying this phenomenon.
Conclusion
Cognitive processes are not confined to the minds
of individuals. In scoping the literature on tech-
nological debiasing, this review highlights the
potential of adopting a distributed cognition ap-
proach to debiasing judgements through the uti-
lisation of technological strategies. Our review
established a link between distributed cognition
approaches to human factors and cognitive bias
mitigation, applying these principles to real-world
practice. The distributed cognition-based
classification can aid human factors practitioners
and researchers in identifying debiasing strategies
relevant to their domains of interest and provides a
structured basis to guide practice and future
research. Any domain that requires simultaneous,
constant, and collaborative processing of infor-
mation, particularly under conditions of uncer-
tainty, information overload or time pressure can
benefit from integrating debiasing considerations
into the design of cognitive systems. Technolog-
ical debiasing may be particularly impactful for
teams who must collaborate to make consequential
decisions in uncertain contexts ranging from
spaceflight to surgery.
Key Points
·Negatively consequential cognitive biases in
high-stakes decision-making can be consid-
ered a product of inappropriate flow and
representation of information in the decision
environment.
·We propose that distributed cognition theory provides a new perspective on understanding and implementing bias mitigation strategies, termed technological debiasing strategies, in context.
·The 80 papers analysed here provide evidence
supporting the applicability of technological
debiasing strategies across domains, with sig-
nificant gaps and opportunities.
·Real-world application of these strategies re-
quires a holistic human factors consideration of
decision environments and further conceptual
and empirical work.
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts
of interest with respect to the research, authorship, and/
or publication of this article: Dr Yule reports research
grants from the National Institutes of Health, Canadian
Department of National Defense, National Aeronautics
and Space Administration, Melville Trust for Care and
Cure of Cancer and Royal College of Surgeons of
Edinburgh, outside the submitted work.
Funding
The author(s) disclosed receipt of the following financial
support for the research, authorship, and/or publication
of this article: This work was supported by The Melville
Trust for Care and Cure of Cancer (R47363).
ORCID iDs
Harini Dharanikota https://orcid.org/0000-0002-
5201-9753
Steven Yule https://orcid.org/0000-0001-9889-9090
Supplemental Material
Supplemental material for this article is available online.
References
Abhyankar, P., Summers, B. A., Velikova, G., & Bekker, H. L. (2014). Framing options as choice or opportunity: Does the frame influence decisions? Medical Decision Making: An International Journal of the Society for Medical Decision Making, 34(5), 567–582. https://doi.org/10.1177/0272989X14529624
Almashat, S., Ayotte, B., Edelstein, B., & Margrett, J. (2008). Framing effect debiasing in medical decision making. Patient Education and Counseling, 71(1), 102–107. https://doi.org/10.1016/j.pec.2007.11.004
Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110(3), 486–498. https://doi.org/10.1037/0033-2909.110.3.486
Arkes, H. R., Aberegg, S. K., & Arpin, K. A. (2022). Analysis of physicians’ probability estimates of a medical outcome based on a sequence of events. JAMA Network Open, 5(6), Article e2218804. https://doi.org/10.1001/jamanetworkopen.2022.18804
Arksey, H. & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616
Armstrong, B. A., Dutescu, I. A., Tung, A., Carter, D. N., Trbovich, P. L., Wong, S., Saposnik, G., & Grantcharov, T. (2023). Cognitive biases in surgery: Systematic review. British Journal of Surgery, 110(6), 645–654. https://doi.org/10.1093/bjs/znad004
Bansback, N., Li, L. C., Lynd, L., & Bryan, S. (2014). Exploiting order effects to improve the quality of decisions. Patient Education and Counseling, 96(2), 197–203. https://doi.org/10.1016/j.pec.2014.05.021
Benbasat, I. I. & Lim, J. (2000). Information technology support for debiasing group judgments: An empirical evaluation. Organizational Behavior and Human Decision Processes, 83(1), 167–183. https://doi.org/10.1006/obhd.2000.2905
Bernstein, L. M., Chapman, G. B., & Elstein, A. S. (1999). Framing effects in choices between multi-outcome life-expectancy lotteries. Medical Decision Making: An International Journal of the Society for Medical Decision Making, 19(3), 324–338. https://doi.org/10.1177/0272989X9901900311
Bhandari, G., Hassanein, K., & Deaves, R. (2008). Debiasing investors with decision support systems: An experimental investigation. Decision Support Systems, 46(1), 399–410. https://doi.org/10.1016/j.dss.2008.07.010
Bhasker, S. & Kumaraswamy, A. (1991). Graphical techniques in debiasing: An exploratory study. In Proceedings of the twenty-fourth annual Hawaii international conference on system sciences (Vol. 3, pp. 117–125). IEEE.
Brown, S. L. & Imber, A. (2003). The effect of reducing opportunities for downward comparison on comparative optimism. Journal of Applied Social Psychology, 33(5), 1058–1068. https://doi.org/10.1111/j.1559-1816.2003.tb01938.x
Chandler, C. C., Greening, L., Robison, L. J., & Stoppelbein, L. (1999). It can’t happen to me…or can it? Conditional base rates affect subjective probability judgments. Journal of Experimental Psychology: Applied, 5(4), 361–378. https://doi.org/10.1037/1076-898x.5.4.361
Chuanromanee, T. & Metoyer, R. (2022). A crowdsourced study of visual strategies for mitigating confirmation bias. In 2022 IEEE Symposium on visual languages and human-centric computing (VL/HCC) (pp. 1–6). IEEE.
Comes, T. (2016). Cognitive biases in humanitarian sensemaking and decision-making lessons from field research. In 2016 IEEE international multidisciplinary conference on cognitive methods in situation awareness and decision support (CogSIMA) (pp. 56–62). IEEE.
Cook, M. B. & Smallman, H. S. (2008). Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors, 50(5), 745–754. https://doi.org/10.1518/001872008X354183
Crowley, R. S., Legowski, E., Medvedeva, O., Reitmeyer, K., Tseytlin, E., Castine, M., Jukic, D., & Mello-Thoms, C. (2013). Automated detection of heuristics and biases among pathologists in a computer-based system. Advances in Health Sciences Education: Theory and Practice, 18(3), 343–363. https://doi.org/10.1007/s10459-012-9374-z
de Gruijter, M., Nee, C., & de Poot, C. J. (2017). Identification at the crime scene: The sooner, the better? The interpretation of rapid identification information by CSIs at the crime scene. Science & Justice, 57(4), 296–306. https://doi.org/10.1016/j.scijus.2017.03.006
Delgado, L., Tripp, S., Michael, G., & Pearce, A. (2018). Framing energy efficiency with payback period: Empirical study to increase energy consideration during facility procurement processes. Journal of Construction Engineering and Management, 144(5), Article 04018027.
Díaz-Lago, M. & Matute, H. (2019). A hard to read font reduces the causality bias. Judgment and Decision Making, 14(5), 547–554. https://doi.org/10.1017/s1930297500004848
Dimara, E., Bailly, G., Bezerianos, A., & Franconeri, S. (2018). Mitigating the attraction effect with visualizations. IEEE Transactions on Visualization and Computer Graphics. https://doi.org/10.1109/TVCG.2018.2865233
Duclos, R. (2015). The psychology of investment behavior: (De)biasing financial decision-making one graph at a time. Journal of Consumer Psychology, 25(2), 317–325. https://doi.org/10.1016/j.jcps.2014.11.005
Dunbar, N. E., Miller, C. H., Adame, B. J., Elizondo, J., Wilson, S. N., Lane, B. L., Kauffman, A. A., Bessarabova, E., Jensen, M. L., Straub, S. K., Lee, Y.-H., Burgoon, J. K., Valacich, J. J., Jenkins, J., & Zhang, J. (2014). Implicit and explicit training in the mitigation of cognitive bias through the use of a serious game. Computers in Human Behavior, 37(3), 307–318. https://doi.org/10.1016/j.chb.2014.04.053
Eriksson, G., Patten, C. J. D., Svenson, O., & Eriksson, L. (2015). Estimated time of arrival and debiasing the time saving bias. Ergonomics, 58(12), 1939–1946. https://doi.org/10.1080/00140139.2015.1051592
Fagerlin, A., Wang, C., & Ubel, P. A. (2005). Reducing the influence of anecdotal reasoning on people’s health care decisions: Is a picture worth a thousand statistics? Medical Decision Making: An International Journal of the Society for Medical Decision Making, 25(4), 398–405. https://doi.org/10.1177/0272989X05278931
Fahsing, I., Rachlew, A., & May, L. (2023). Have you
considered the opposite? A debiasing strategy for
judgment in criminal investigation. The Police
Journal: Theory, Practice and Principles,96(1),
45–60. https://doi.org/10.1177/0032258x211038888
Ferretti, V., Montibeller, G., & von Winterfeldt, D.
(2023). Testing the effectiveness of debiasing tech-
niques to reduce overprecision in the elicitation of
subjective continuous probability distributions. Eu-
ropean Journal of Operational Research,304(2),
661–675. https://doi.org/10.1016/j.ejor.2022.04.008
Fink, L. & Pinchovski, B. (2020). It is about time: Bias
and its mitigation in time-saving decisions in soft-
ware development projects. International Journal of
Project Management,38(2), 99–111. https://doi.org/
10.1016/j.ijproman.2020.01.001
Finkelstein, E. A., Cheung, Y. B., Schweitzer, M. E., Lee, L. H., Kanesvaran, R., & Baid, D. (2022). Accuracy incentives and framing effects to minimize the influence of cognitive bias among advanced cancer patients. Journal of Health Psychology, 27(9), 2227–2235. https://doi.org/10.1177/13591053211025601
Fischhoff, B. (1982). Debiasing. In D. Kahneman, P.
Slovic, & A. Tversky (Eds.), Judgment under un-
certainty: Heuristics and biases (pp. 422–444).
Cambridge University Press.
Fleischmann, M., Amirpur, M., Benlian, A., & Hess, T.
(2014). Cognitive biases in information systems
research: A scientometric analysis. In Proceedings of
the European conference on information systems
(ECIS) 2014, Tel Aviv, Israel, 9–11 June, 2014.
Frydman, C. & Rangel, A. (2014). Debiasing the dis-
position effect by reducing the saliency of infor-
mation about a stock’s purchase price. Journal of
Economic Behavior & Organization,107(Pt B),
541–552. https://doi.org/10.1016/j.jebo.2014.01.017
Gaillard, S. & Devine, S. (2022). Systemic problems in
academia: The positive publication bias and solutions
from a human factors perspective. Journal of Trial and
Error,2(1), 1–5. https://doi.org/10.36850/ed3
Gamliel, E. & Pe’er, E. (2021). When two wrongs make
a right: The efficiency-consumption gap under sep-
arate vs. joint evaluations. Judgment and Decision
Making,16(1), 94–113. https://doi.org/10.1017/
s1930297500008317
Garcia-Retamero, R. & Dhami, M. K. (2013). On avoiding
framing effects in experienced decision makers. Quar-
terly Journal of Experimental Psychology,66(4),
829–842. https://doi.org/10.1080/17470218.2012.
727836
Garcia-Retamero, R. & Galesic, M. (2009). Com-
municating treatment risk reduction to people with
low numeracy skills: A cross-cultural comparison.
American Journal of Public Health,99(12),
2196–2202. https://doi.org/10.2105/AJPH.2009.
160234
Garcia-Retamero, R. & Galesic, M. (2010). How to reduce the effect of framing on messages about health. Journal of General Internal Medicine, 25(12), 1323–1329. https://doi.org/10.1007/s11606-010-1484-9
Garcia-Retamero, R., Galesic, M., & Gigerenzer, G.
(2010). Do icon arrays help reduce denominator
neglect? Medical Decision Making: An International
Journal of the Society for Medical Decision Making,
30(6), 672–684. https://doi.org/10.1177/
0272989X10369000
Greening, L., Chandler, C. C., Stoppelbein, L., & Robison, L. J. (2005). Risk perception: Using conditional versus general base rates for risk communication. Journal of Applied Social Psychology, 35(10), 2094–2122. https://doi.org/10.1111/j.1559-1816.2005.tb02211.x
Hamzagic, Z. I., Derksen, D. G., Matsuba, M. K., Aßfalg, A., & Bernstein, D. M. (2021). Harm to others reduces the sunk-cost effect. Memory & Cognition, 49(3), 544–556. https://doi.org/10.3758/s13421-020-01112-7
Haselton, M. G., Nettle, D., & Andrews, P. W. (2015).
The evolution of cognitive bias. In The handbook of
evolutionary psychology (pp. 724–746). John Wiley
& Sons, Inc.
Hayes, B. K., Hawkins, G. E., & Newell, B. R. (2016).
Consider the alternative: The effects of causal
knowledge on representing and using alternative
hypotheses in judgments under uncertainty. Journal
of Experimental Psychology. Learning, Memory, and
Cognition,42(5), 723–739. https://doi.org/10.1037/
xlm0000205
Hernandez, I. & Preston, J. L. (2013). Disfluency dis-
rupts the confirmation bias. Journal of Experimental
Social Psychology,49(1), 178–182. https://doi.org/
10.1016/j.jesp.2012.08.010
Hinsz, V. (2015). Teams as technology: Strengths, weaknesses, and trade-offs in cognitive task performance. Team Performance Management, 21(5/6), 218–230. https://doi.org/10.1108/tpm-02-2015-0006
Hodgkinson, G. P., Bown, N. J., Maule, A. J., Glaister,
K. W., & Pearman, A. D. (1999). Breaking the frame:
An analysis of strategic cognition and decision
making under uncertainty. Strategic Management
Journal,20(10), 977–985. https://doi.org/10.1002/
(sici)1097-0266(199910)20:10<977::aid-smj58>3.
0.co;2-x
Holder, E. & Xiong, C. (2023). Dispersion vs dis-
parity: Hiding variability can encourage stereo-
typing when visualizing social outcomes. IEEE
Transactions on Visualization and Computer
Graphics,29(1), 624–634. https://doi.org/10.
1109/TVCG.2022.3209377
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed
cognition: Toward a new foundation for human-
computer interaction research. ACM Transactions
on Computer-Human Interaction,7(2), 174–196.
https://doi.org/10.1145/353485.353487
Huang, H.-H., Hsu, J. S.-C., & Ku, C.-Y. (2012). Un-
derstanding the role of computer-mediated counter-
argument in countering confirmation bias. Decision
Support Systems,53(3), 438–447. https://doi.org/10.
1016/j.dss.2012.03.009
Hutchins, E. (1995). Cognition in the wild. MIT Press.
Ioannidis, J. P. A., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: Detection, prevalence, and prevention. Trends in Cognitive Sciences, 18(5), 235–241. https://doi.org/10.1016/j.tics.2014.02.010
Janis, I. L. (1971). Groupthink. Psychology Today (pp. 443–447). American Psychological Association.
Jeong, Y., Oh, S., Kang, Y., & Kim, S.-H. (2021).
Impacts of visualizations on decoy effects. Inter-
national Journal of Environmental Research and
Public Health,18(23), Article 12674. https://doi.org/
10.3390/ijerph182312674
Kahneman, D. & Klein, G. (2009). Conditions for intuitive
expertise: A failure to disagree. American Psychologist,
64(6), 515–526. https://doi.org/10.1037/a0016755
Kayhan, V. O. (2013). Seeking health information on the
web: Positive hypothesis testing. International
Journal of Medical Informatics,82(4), 268–275.
https://doi.org/10.1016/j.ijmedinf.2012.12.004
Kliegr, T., Bahník, Š., & Fürnkranz, J. (2021). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artificial Intelligence, 295(2), Article 103458. https://doi.org/10.1016/j.artint.2021.103458
Kostopoulou, O., Tracey, C., & Delaney, B. C. (2021).
Can decision support combat incompleteness and
bias in routine primary care data? Journal of the
American Medical Informatics Association: JAMIA,
28(7), 1461–1467. https://doi.org/10.1093/jamia/
ocab025
Król, M. & Król, M. (2019). Learning from peers' eye movements in the absence of expert guidance: A proof of concept using laboratory stock trading, eye tracking, and machine learning. Cognitive Science, 43(2), Article e12716. https://doi.org/10.1111/cogs.12716
Küper, A., Lodde, G., Livingstone, E., Schadendorf, D., & Krämer, N. (2022). Mitigating cognitive bias with clinical decision support systems: An experimental study. Journal of Decision Systems, 1–20.
Larrick, R. P. (2004). Debiasing. In D. J. Koehler, & N.
Harvey (Eds.), Blackwell handbook of judgment and
decision making (pp. 316–337). Blackwell Publishing.
https://doi.org/10.1002/9780470752937.ch16
Lau, A. Y. S. & Coiera, E. W. (2009). Can cognitive biases
during consumer health information searches be reduced
to improve decision making? Journal of the American
Medical Informatics Association: JAMIA,16(1), 54–65.
https://doi.org/10.1197/jamia.M2557
Leibenstein, H. (1950). Bandwagon, snob, and Veblen effects in the theory of consumers' demand. Quarterly Journal of Economics, 64(2), 183–207. https://doi.org/10.2307/1882692
Levac, D., Colquhoun, H., & O’Brien, K. K. (2010).
Scoping studies: Advancing the methodology. Im-
plementation Science,5(1), 1–9. https://doi.org/10.
1186/1748-5908-5-69
Liang, Y., Anil, B., Mohsen, S., & Baabak, A. (2022).
Availability heuristic in construction workforce
decision-making amid COVID-19 pandemic: Empir-
ical evidence and mitigation strategy. Journal of
Management in Engineering,38(5), Article 04022046.
Lidén, M., Gräns, M., & Juslin, P. (2019). "Guilty, no doubt": Detention provoking confirmation bias in judges' guilt assessments and debiasing techniques. Psychology, Crime and Law, 25(3), 219–247. https://doi.org/10.1080/1068316x.2018.1511790
Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 4(4), 390–398. https://doi.org/10.1111/j.1745-6924.2009.01144.x
Lim, L.-H. & Benbasat, I. (1997). The debiasing role of
group support systems: An experimental investigation of
the representativeness bias. International Journal of
Human-Computer Studies,47(3), 453–471. https://doi.
org/10.1006/ijhc.1997.0137
Lipkus, I. M. & Klein, W. M. P. (2006). Effects of
communicating social comparison information on
risk perceptions for colorectal cancer. Journal of
Health Communication,11(4), 391–407. https://doi.
org/10.1080/10810730600671870
Lipsey, M. W. & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications, Inc.
Lopes, L. L. (1987). Procedural debiasing. Acta Psy-
chologica,64(2), 167–185. https://doi.org/10.1016/
0001-6918(87)90005-9
MacLean, C. L. & Read, J. D. (2019). An illusion of
objectivity in workplace investigation: The cause
analysis chart and consistency, accuracy, and bias in
judgments. Journal of Safety Research,68(3),
139–148. https://doi.org/10.1016/j.jsr.2018.12.008
Maier, M., Bartoš, F., Stanley, T. D., Shanks, D. R., Harris, A. J., & Wagenmakers, E. J. (2022). No evidence for nudging after adjusting for publication bias. Proceedings of the National Academy of Sciences, 119(31), Article e2200300119.
Marett, K. & Adams, G. (2006). Alleviating the familiarity bias. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06) (Vol. 2, p. 31b). IEEE.
Marlin, E. (2018). Using artificial intelligence to minimize information overload and cognitive biases in military intelligence. US Army School of Advanced Military Studies, Fort Leavenworth, United States. https://apps.dtic.mil/sti/pdfs/AD1071764.pdf
Meissner, P., Schubert, M., & Wulf, T. (2018). Deter-
minants of group-level overconfidence in teams: A
quasi-experimental investigation of diversity and
tenure. Long Range Planning,51(6), 927–936.
https://doi.org/10.1016/j.lrp.2017.11.002
Meissner, P. & Wulf, T. (2017). The effect of cognitive
diversity on the illusion of control bias in strategic
decisions: An experimental investigation. European
Management Journal,35(4), 430–439. https://doi.
org/10.1016/j.emj.2016.12.004
Mello, A. L. & Rentsch, J. R. (2015). Cognitive diversity
in teams: A multidisciplinary review. Small Group
Research,46(6), 623–658. https://doi.org/10.1177/
1046496415602558
Mojzisch, A., Schulz-Hardt, S., Kerschreiter, R., & Frey, D. (2008). Combined effects of knowledge about others' opinions and anticipation of group discussion on confirmatory information search. Small Group Research, 39(2), 203–223. https://doi.org/10.1177/1046496408315983
Morier, D. M. & Borgida, E. (1984). The conjunction
fallacy: A task specific phenomenon? Personality
and Social Psychology Bulletin,10(2), 243–252.
https://doi.org/10.1177/0146167284102010
Murata, A., Nakamura, T., & Karwowski, W. (2015). Influence of cognitive biases in distorting decision making and leading to critical unfavorable incidents. Safety, 1(1), 44–58. https://doi.org/10.3390/safety1010044
Ni, F., Arnott, D., & Gao, S. (2019). The anchoring
effect in business intelligence supported decision-
making. Journal of Decision Systems,28(2), 67–81.
https://doi.org/10.1080/12460125.2019.1620573
Okan, Y., Garcia-Retamero, R., Cokely, E. T., &
Maldonado, A. (2018). Biasing and debiasing health
decisions with bar graphs: Costs and benefits of
graph literacy. Quarterly Journal of Experimental
Psychology,71(12), 2506–2519. https://doi.org/10.
1177/1747021817744546
Pitaloka, E., Masruroh, N. A., & Lin, S.-W. (2019).
Decision bias in the newsvendor problem: On the
comparison of managers and students as news-
vendors with decision support system as debiasing
strategy. In 2019 IEEE International Conference on
Industrial Engineering and Engineering Management
(IEEM) (pp. 940–944). IEEE.
Pollock, D., Peters, M. D. J., Khalil, H., McInerney, P., Alexander, L., Tricco, A. C., Evans, C., de Moraes, É. B., Godfrey, C. M., Pieper, D., Saran, A., Stern, C., & Munn, Z. (2023). Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evidence Synthesis, 21(3), 520–532. https://doi.org/10.11124/JBIES-22-00123
Pronin, E. (2009). The introspection illusion. Advances
in Experimental Social Psychology,41(1), 1–67.
https://doi.org/10.1016/S0065-2601(08)00401-2
Quigley-McBride, A. (2020). Practical solutions to forensic
contextual bias. Zeitschrift Für Psychologie,228(3),
162–174. https://doi.org/10.1027/2151-2604/a000409
Quigley-McBride, A. & Wells, G. L. (2018). Fillers can help control for contextual bias in forensic comparison tasks. Law and Human Behavior, 42(4), 295–305. https://doi.org/10.1037/lhb0000295
Rajivan, P. & Cooke, N. J. (2018). Information-pooling
bias in collaborative security incident correlation
analysis. Human Factors,60(5), 626–639. https://
doi.org/10.1177/0018720818769249
Rastogi, C., Zhang, Y., Wei, D., Varshney, K. R., Dhurandhar, A., & Tomsett, R. (2022). Deciding fast and slow: The role of cognitive biases in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1–22. https://doi.org/10.1145/3512930
Rezaei, J., Arab, A., & Mehregan, M. (2022). Equalizing
bias in eliciting attribute weights in multiattribute
decision-making: Experimental research. Journal of
Behavioral Decision Making,35(2), Article e2262.
https://doi.org/10.1002/bdm.2262
Rieger, M. O. (2012). Why do investors buy bad fi-
nancial products? Probability misestimation and
preferences in financial investment decision. The
Journal of Behavioral Finance,13(2), 108–118.
https://doi.org/10.1080/15427560.2012.680991
Rose, J. P. (2012). Debiasing comparative optimism and
increasing worry for health outcomes. Journal of
Health Psychology,17(8), 1121–1131. https://doi.
org/10.1177/1359105311434051
Roy, M. C. & Lerch, F. J. (1996). Overcoming inef-
fective mental representations in base-rate problems.
Information Systems Research,7(2), 233–247.
https://doi.org/10.1287/isre.7.2.233
Schmittat, S. M., Englich, B., Sautner, L., & Velten, P. (2022). Alternative stories and the decision to prosecute: An applied approach against confirmation bias in criminal prosecution. Psychology, Crime and Law, 28(6), 608–635. https://doi.org/10.1080/1068316x.2021.1941013
Sezer, O., Zhang, T., Gino, F., & Bazerman, M. H.
(2016). Overcoming the outcome bias: Making in-
tentions matter. Organizational Behavior and Hu-
man Decision Processes,137(4), 13–26. https://doi.
org/10.1016/j.obhdp.2016.07.001
Shaikh, S. E. (2022). Interactive and revisable decision-
support: Doing more harm than good? Behaviour &
Information Technology,41(4), 845–863. https://doi.
org/10.1080/0144929x.2020.1837242
Sharif, M. A. & Oppenheimer, D. M. (2021). The effect
of categories on relative encoding biases in memory-
based judgments. Organizational Behavior and
Human Decision Processes,162(4), 1–8. https://doi.
org/10.1016/j.obhdp.2020.10.005
Siegel, M., Eder, J. S. N., Wicherts, J. M., & Pietschnig, J.
(2022). Times are changing, bias isn’t: A meta-meta-
analysis on publication bias detection practices, prev-
alence rates, and predictors in industrial/organizational
psychology. Journal of Applied Psychology,107(11),
2013–2039. https://doi.org/10.1037/apl0000991
Smith, A. C. & Greene, E. (2005). Conduct and its
consequences: Attempts at debiasing jury judgments.
Law and Human Behavior,29(5), 505–526. https://
doi.org/10.1007/s10979-005-5692-5
Solomon, B. C. & Hall, M. E. K. (2023). When (non)
differences make a difference: The roles of demo-
graphic diversity and ideological homogeneity in
overcoming ideologically biased decision making.
Organization Science,34(5), 1820–1838. https://doi.
org/10.1287/orsc.2022.1647
Svenson, O., Gonzalez, N., & Eriksson, G. (2014).
Modeling and debiasing resource saving judgments.
Judgment and Decision Making,9(5), 465–478.
https://doi.org/10.1017/s1930297500006823
Tang, X., Wang, L., Peng, N., Xue, C., & Wang, H. (2023). Anchoring effects in view transitions of data visualization: Evidence from an ERP study. International Journal of Industrial Ergonomics, 97(S9), Article 103460. https://doi.org/10.1016/j.ergon.2023.103460
Teppan, E. C. & Felfernig, A. (2009). Minimization of
product utility estimation errors in recommender result
set evaluations. In 2009 IEEE/WIC/ACM International
Joint Conference on Web Intelligence and Intelligent
Agent Technology (Vol. 1, pp. 20–27). IEEE.
Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K.,
Colquhoun, H., Levac, D., Moher, D., Peters,
M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl,
E. A., Chang, C., McGowan, J., Stewart, L., Hartling,
L., Aldcroft, A., Wilson, M. G., Garritty, C., &
Straus, S. E. (2018). PRISMA extension for scoping
reviews (PRISMA-ScR): Checklist and explanation.
Annals of Internal Medicine,169(7), 467–473.
https://doi.org/10.7326/M18-0850
Tsai, J., Miller, S., & Kirlik, A. (2011). Interactive visualizations to improve Bayesian reasoning. Proceedings of the Human Factors and Ergonomics Society - Annual Meeting, 55(1), 385–389. https://doi.org/10.1177/1071181311551079
Tversky, A. & Kahneman, D. (1974). Judgment under un-
certainty: Heuristics and biases. Science,185(4157),
1124–1131. https://doi.org/10.1126/science.185.4157.
1124
Ubel, P. A., Smith, D. M., Zikmund-Fisher, B. J., Derry, H. A., McClure, J., Stark, A., Wiese, C., Greene, S., Jankovic, A., & Fagerlin, A. (2010). Testing whether decision aids introduce cognitive biases: Results of a randomized trial. Patient Education and Counseling, 80(2), 158–163. https://doi.org/10.1016/j.pec.2009.10.021
van Berkel, N., Bellio, M., Skov, M. B., & Blandford, A.
(2023). Measurements, algorithms, and presentations of
reality: Framing interactions with AI-enabled decision
support. ACM Transactions on Computer-Human In-
teraction,30(2), 1–33. https://doi.org/10.1145/3571815
van Dongen, K., Schraagen, J. M., Eikelboom, A., & te
Brake, G. (2005). Supporting decision making by a
critical thinking tool. Proceedings of the Human
Factors and Ergonomics Society - Annual Meeting,
49(3), 517–521. https://doi.org/10.1177/
154193120504900364
van Swol, L. M., Chang, C.-T., & Gong, Z. (2023). The
benefits of advice from outgroup members on de-
cision accuracy and bias reduction. Decision,10(1),
81–91. https://doi.org/10.1037/dec0000173
Vold, K. (2024). Human-AI cognitive teaming: Using AI to
support state-level decision making on the resort to force.
Australian Journal of International Affairs,78(2),
229–236. https://doi.org/10.1080/10357718.2024.
2327383
Wall, E., Blaha, L. M., Franklin, L., & Endert, A. (2017). Warning, bias may occur: A proposed approach to detecting cognitive bias in interactive visual analytics. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST) (pp. 104–115). IEEE. https://doi.org/10.1109/VAST.2017.8585669
Wang, S. C., Cronin, M. A., & Mannix, E. A. (2024).
Beyond diversity and homogeneity: Conceptualizing
compatibility in cognition. Group & Organization
Management, Article 10596011241241093. https://
doi.org/10.1177/10596011241241093
Weinstein, N. D. (1983). Reducing unrealistic optimism
about illness susceptibility. Health Psychology,2(1),
11–20. https://doi.org/10.1037/0278-6133.2.1.11
Wood, R. E. (1986). Task complexity: Definition of the
construct. Organizational Behavior and Human
Decision Processes,37(1), 60–82. https://doi.org/10.
1016/0749-5978(86)90044-0
Wu, D.-A., Shimojo, S., Wang, S. W., & Camerer, C. F.
(2012). Shared visual attention reduces hindsight
bias. Psychological Science,23(12), 1524–1533.
https://doi.org/10.1177/0956797612447817
Zhang, Y., Bellamy, R. K. E., & Kellogg, W. A. (2015).
Designing information for remediating cognitive
biases in decision-making. In Proceedings of the
33rd annual ACM conference on human factors in
computing systems (pp. 2211–2220). ACM.
Zhou, L., Paul, S., Demirkan, H., Yuan, L., Spohrer, J., Zhou,
M., & Basu, J. (2021). Intelligence augmentation: To-
wards building human-machine symbiotic relationship.
AIS Transactions on Human-Computer Interaction,
13(2), 243–264. https://doi.org/10.17705/1thci.00149
Zikmund-Fisher, B. J., Fagerlin, A., & Ubel, P. A. (2007).
Mortality versus survival graphs: Improving temporal
consistency in perceptions of treatment effectiveness.
Patient Education and Counseling,66(1), 100–107.
https://doi.org/10.1016/j.pec.2006.10.013
Zikmund-Fisher, B. J., Ubel, P. A., Smith, D. M., Derry,
H. A., McClure, J. B., Stark, A., Pitsch, R. K., &
Fagerlin, A. (2008). Communicating side effect risks
in a tamoxifen prophylaxis decision aid: The de-
biasing influence of pictographs. Patient Education
and Counseling,73(2), 209–214. https://doi.org/10.
1016/j.pec.2008.05.010
Author Biographies
Harini Dharanikota is a PhD candidate at the Usher
Institute, University of Edinburgh. She received
her MSc in psychological research from the
University of Edinburgh in 2021.
Emma Howie is a surgical registrar and clinical
research fellow at the University of Edinburgh.
She received her MBChB from the University of
Glasgow in 2014 and BSc (Hons) in medicine
from the University of St. Andrews in 2011.
Lorraine Hope is a professor of applied cognitive
psychology at the University of Portsmouth. She
received her PhD in psychology from the Uni-
versity of Aberdeen in 2004.
Stephen J Wigmore is the Regius Chair of Surgery
and Head of the Department of Clinical Surgery at
the University of Edinburgh, Royal Infirmary of
Edinburgh. He received his MD from the Uni-
versity of Edinburgh in 1998.
Richard Skipworth is a consultant surgeon, Hon-
orary Reader and NHS Research Scotland Clinician
at the Royal Infirmary of Edinburgh. He received his
MD from the University of Edinburgh in 2011.
Steven Yule is a professor and chair of behavioural
sciences at the Usher Institute, University of Ed-
inburgh. He received his PhD in psychology from
the University of Aberdeen in 2003.