Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches

Authors: Devorah E. Klein, David D. Woods, Gary Klein, and Shawna J. Perry

Abstract

There is a growing popularity of data-driven best practices in a variety of fields. Although we applaud the impulse to replace anecdotes with evidence, it is important to appreciate some of the cognitive constraints on promulgating best practices to be used by practitioners. We use the evidence-based medicine (EBM) framework that has become popular in health care to raise questions about whether the approach is consistent with how people actually make decisions to manage patient safety. We examine six potential disconnects and suggest ways to strengthen best practices strategies.
Keywords: quality, safety, evidence-based medicine, decision making, expertise, training

INTRODUCTION
The concept of “best practices,” informed by empirical evaluation, has enormous potential for guiding practitioners in many different fields toward treatments that have been shown to work and away from folk remedies that have little to offer besides tradition. Data are more important than intuitions, or at least they should be. However, it is simplistic to conclude that data and research are sufficient to improve practice. Practitioners need to make a variety of decisions about how to interpret and apply the evidence. This article explores some of the cognitive challenges of establishing and applying best practices—cognitive challenges of relying on data for treatment recommendations. Our goal is to explore the limitations of data-driven decision making from a cognitive engineering perspective and make suggestions to overcome these limitations.

To illustrate some of the difficulties with a best-practices regimen, we use the example of evidence-based medicine (EBM) in health care. EBM seeks to establish a set of best practices for clinicians to use based primarily on the results of scientifically rigorous research (Gray, 1996; Roberts & Yeager, 2004). The EBM approach to clinical practice is to identify a treatment of interest; conduct carefully controlled studies, ideally using a double-blind paradigm; determine the effectiveness of the treatment; and disseminate the results as best practices in the form of rules, such as “If X, then do Y.” These rules can be readily applied and also used to evaluate compliance. EBM offers a way to get rid of expensive treatments that do not make a difference to patients. This EBM script is an antidote to anecdotal medicine and a counter to the limits of the experience of physicians.

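The “If X, then do Y” structure of disseminated rules can be caricatured as a simple lookup table. The sketch below is ours, with invented placeholder conditions and treatments rather than real clinical guidance; it shows how mechanically such rules fire, and how silent they are outside their own categories:

```python
# A deliberately naive sketch of the "If X, then do Y" structure of
# disseminated best practices. Conditions and treatments are invented
# placeholders, not real clinical guidance.
BEST_PRACTICES = {
    "condition_x": "treatment_y",
    "condition_z": "treatment_w",
}

def recommend(condition: str) -> str:
    """Return the sanctioned treatment for a recognized condition."""
    try:
        return BEST_PRACTICES[condition]
    except KeyError:
        # The rule set is silent outside its categories -- the gap
        # the six challenges below explore.
        raise ValueError(f"no best practice covers {condition!r}")

print(recommend("condition_x"))  # the rule fires mechanically
```

Everything the rest of this article discusses happens outside this table: deciding that "condition_x" is in fact present, judging the evidence behind the entry, and coping when the patient does not fit any single key.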
The concept of EBM has some ambiguity, as proponents have been progressively backing away from their original pronouncements about how it was going to revolutionize medicine. Further, EBM has been adapted for financial purposes, to deny payments for noncompliance. We recognize these kinds of issues, but they are peripheral to the goals of this article. We are interested in the basic strategy of using data to improve the quality of treatment. EBM, including guidelines and best practices, is a specific manifestation of a more general effort to apply modernist, rationalist approaches to improve treatments in fields such as health care (Wears & Hunte, 2014).

Special Issue. Devorah E. Klein, Marimo Consulting, LLC, Medford, Massachusetts; David D. Woods, The Ohio State University, Columbus; Gary Klein, MacroCognition, LLC, Washington, D.C.; and Shawna J. Perry, University of Florida, Gainesville.

Journal of Cognitive Engineering and Decision Making, 201X, Volume XX, Number X, Month 2016, pp. 1–11. DOI: 10.1177/1555343416637520. Copyright © 2016, Human Factors and Ergonomics Society.

Address correspondence to Devorah E. Klein, PhD, Senior Scientist at Marimo Consulting, LLC, 44 Capen St., Medford, MA 02155, Devorah@marimoconsulting.com.

Downloaded from edm.sagepub.com by guest on April 18, 2016.

Cognitive engineering insights and methods can be useful for all of these efforts. We are using EBM in medicine as an exemplar because that is where the best-practices program has been pursued most extensively.

This article describes some of the challenging decisions practitioners face when they try to use EBM for medical decision making in clinical work. Timmermans and Berg (2003) have discussed the sociocultural context of EBM. Our interest is in the cognitive challenges that confront EBM in the context of actual practice and the complexity of patients and diseases. An additional goal is to consider ways to complement or extend EBM to help health care professionals make better care decisions for and with patients.

SIX COGNITIVE CHALLENGES OF EBM
For physicians who work in complex, dynamic environments with high time pressure, uncertainty, and risk, EBM may appear to be a salvation. However, these attributes of complex settings may make it more difficult to employ the principles of EBM. Based on the results of several decades of study on how people make decisions under time pressure, uncertainty, and complexity, as well as extensive work on patient safety (e.g., Cook, Render, & Woods, 2000; Perry & Wears, 2011), we have identified six cognitive challenges facing clinicians who want to apply EBM in their practice.

1. Characterizing problems
2. Gauging confidence in the evidence
3. Deciding what to do when the generally accepted best practices conflict with professional expertise
4. Applying simple rules to complex situations
5. Revising treatment plans that do not seem to be working
6. Considering remedies that are not best practices

In all six challenges, a rigid adherence to EBM can run counter to clinical practitioners’ cognitive and conceptual strengths as they face uncertain medical situations. In its extreme forms, EBM suggests a conflict between the use of evidence and the use of experience. We believe that little is gained by postulating such a conflict, and we advocate a blend of evidence with hard-won expertise and informed intuitions.

Challenge 1: Characterizing Problems
The notion that we can apply a rule—“If this condition arises, apply that treatment”—ignores the ability to judge whether the condition has arisen. Much, if not most, of the challenge is in understanding what is anomalous and the nature of the underlying problems (Klein, Pliske, Crandall, & Woods, 2005). Once a problem is accurately identified, selection of a treatment can be straightforward. Best-practices approaches, such as EBM, tackle the easier part of the equation—how to address each problem—and do not give adequate coverage to the detection and identification of the problem, the variability within a problem category, and the nature of other interacting problems, conditions, and treatments. As a result, they undervalue the diagnostic expertise that is essential for effective health care. For example, the following vital signs—blood pressure = 90/60, heart rate = 130, respiratory rate = 30, oxygen saturation = 88%—immediately open a list of situations of concern (sepsis, congestive heart failure, emphysema flare). Sorting among them takes expertise, as the physician weighs the context and the patient: is this a newborn baby or an 85-year-old woman with diabetes and a cough? Clinicians use heuristics (useful tactics that gain responsiveness and robustness but fall short of optimality) and pattern recognition (requiring diverse experiences) to cope with complexity.

Best practices rely on categorizations as a form of organizing knowledge. These generalizations are simultaneously valuable and limited when a practitioner is confronted with variety, time course, and interactions across disease conditions. Heuristics, pattern recognition, and other forms of expertise lose their vitality, adaptiveness, and context sensitivity when reduced to rules.

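The vital-signs example can be put in code. The thresholds and candidate lists below are ours, invented for illustration only; the point is that one abnormal data pattern returns several live hypotheses, and nothing in the rule itself says which one applies to this patient:

```python
# Hypothetical sketch: one abnormal vital-sign pattern opens a list of
# candidate conditions, and context (age, history) -- not the rule
# table -- does the sorting. Thresholds and candidates are illustrative.
vitals = {"bp_systolic": 90, "heart_rate": 130, "resp_rate": 30, "spo2": 88}

def candidates(v: dict) -> list:
    """Shock-like pattern: hypotension + tachycardia + low oxygen."""
    if v["bp_systolic"] <= 90 and v["heart_rate"] >= 120 and v["spo2"] < 92:
        # The pattern alone cannot choose among these; expertise must.
        return ["sepsis", "congestive heart failure", "emphysema flare"]
    return []

print(candidates(vitals))  # three live hypotheses from one data pattern
```

A rule table stops exactly where the diagnostic work begins: deciding which of the returned candidates fits the newborn versus the 85-year-old woman with diabetes and a cough.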
Challenge 2: Gauging Confidence in the Evidence
Many physicians would like to access a database of best practices to identify a preferred treatment regimen for a patient, but too often the choice of treatment depends on judging the quality and relevance of the evidence, judgments that often depend on experience and are made under uncertainty and time pressure.

Evidence by itself can mislead us. Witness the frequent revision of claims that had empirical support but were modified or overturned by later evidence in fields such as cancer treatment (Kaiser, 2015). In “garden path” problems, early-arriving pieces of evidence point to one hypothesis, but later, more subtle patterns of evidence reveal that a different problem is actually present (Woods & Hollnagel, 2006).

Even if we ignore the problem of studies that cannot be replicated, we also have to worry about negative findings that are used to dismiss treatments that do have value. The data collection may have been flawed because of variables that were not well understood at the time. A well-known example is the set of best practices for treating peptic ulcer disease, long thought to arise from excess stomach acids, likely caused by stress. When Barry Marshall explored the hypothesis that ulcers were actually caused by Helicobacter pylori, he conducted a critical experiment searching for H. pylori in the gut of ulcer victims. But the search failed: none of the patients showed any sign of H. pylori. That should have scuttled the hypothesis, but the problem was that the lab technicians were discarding the cultures after only a few days, following the standard procedure they used for strep infections. Once the lab technicians accidentally gave the cultures more time to grow, the link to H. pylori was demonstrated definitively (Marshall, 2005), and the best practice shifted from surgery to antibiotics.

Some of the gold-standard sources of EBM have proved to be misleading—the Framingham Heart Study was performed focusing only on White males. However, many of the key symptoms for differentiating a heart attack from other diseases (e.g., “feels like an elephant sitting on my chest”) are found in only 5% of women. After one emergency physician realized the consequences of these limitations, she explained her horrified reaction: “I think about all the women I sent home to their deaths because I was following the algorithm for best practices for chest pain” (personal communication, April 6, 2012).

Excessive faith in evidence by itself can also literally blind people (Chatterjee, 2015) and lead to fixation on one view, which impairs the search for and utilization of new, emerging, alternative indicators. The 1981 Nobel Prize in Medicine, given to Torsten Wiesel and David Hubel, recognized their finding of a critical period of neuroplasticity for vision. On the basis of this research, clinicians stopped performing corrective surgery for children born with cataracts if the intervention would occur after the age of 8. However, a researcher in India decided to challenge this dogma. When such children’s cataracts were removed and artificial lenses inserted, vision was at least partially restored. The original Nobel Prize research, performed on primates, had led the medical community astray.

Examples like these show that the implications of evidence are not clear-cut, and the search for evidence is ongoing. Researchers may be unaware of the variables that are obscuring the relationships they are studying. As a result, the medical community has to be continually prepared to revise its faith in evidence that may seem solid today. Increasingly, researchers are appreciating the value of being uncertain (Timmermans & Berg, 2003). This is a critical finding in cognitive engineering—performance depends on how well people, teams, and organizations revise assessments as new evidence comes in (Woods & Hollnagel, 2006). A clear-cut example occurred in the run-up to the Columbia space shuttle disaster, as managers discounted new evidence about new kinds of safety risks (Woods, 2005). EBM appears to offer a reassuring set of stable best practices, but in reality, we find that yesterday’s best practices become questioned and eventually revised or even discredited. Yet once established, these practices may take a long time to be modified or discarded, even as new evidence builds up.

Challenge 3: Deciding What to Do When the Generally Accepted Best Practices Conflict With Professional Expertise
This conflict is an issue because the professional judgment of physicians is credible, and their expertise and intuitions need to be taken into account. Physicians build up expertise and acquire pattern repertoires over years of experience. When they have ample opportunities for feedback about their judgments, their intuitions—their use of experience to make pattern-based judgments—are valuable (Klein, 1998). Kahneman and Klein (2009) assert that intuitions are useful under two conditions: a reasonably stable environment and an opportunity for people to learn from feedback. For example, the stock market does not constitute a reasonably stable environment, and we are highly skeptical about claims of expertise or intuition in selecting stocks for investment. Another example is organizational decision making. Most people who work on the administrative side of organizations fail to get frequent, consistent, or accurate feedback and thus fail to develop expertise and credible intuition.

In contrast, medicine satisfies both of the conditions for credible intuitions. It is a reasonably stable environment, and physicians do get some feedback. Intuition here is not a random or mystical process but simply the use of pattern matching that is based on experience in a reasonably stable environment.

Admittedly, physicians do not achieve the levels of expertise found in chess grandmasters. Chase and Simon (1973) described how chess grandmasters accumulate tens of thousands of patterns that enable them to rapidly size up situations. Chess is a highly stable environment—the positions of the pieces are unambiguous. And chess players receive clear feedback about the quality of their decisions. They know that they have won or lost a game and can go over the moves to determine where they made mistakes. Physicians do not receive the same level of feedback on their decisions. When they refer patients to specialists, they may not be informed about the results. Worse, feedback is abundant on common conditions but limited on rare conditions, such as early rabies. And worse yet, not all feedback is equal. Losses loom larger than successes. Ghaffarzadegan, Epstein, and Martin (2013) have shown that a false negative (e.g., failing to do a needed C-section) will have a greater impact than a false positive (doing an unnecessary C-section). That is why physicians are sometimes cautioned not to trust their intuition. Indeed, this warning is part of the rationale for EBM.

We are not advocating for physicians to uncritically trust their intuitions. Rather, we see the importance of taking judgments and intuitions into account when making decisions, which will occasionally generate a conflict between best practices and professional judgment. Even though medicine is not as stable and well structured a domain as chess, and does not permit the same quality of feedback, it is not a zero-reliability domain, like stock selection. Experience lets skilled physicians identify reasonable courses of action and reasonable treatments upon diagnosing a condition. These intuitions can sometimes be misleading, which is why they need to be checked by deliberately scrutinizing the conditions and likely consequences of different actions. Still, in complex situations, heuristics and pattern recognition are essential and cannot be replaced by sets of rules (DeAnda & Gaba, 1991; Wears & Schubert, 2015).

Challenge 4: Applying Simple Rules to Complex Situations
Problem solving is applied to both well-ordered and complex situations. Well-ordered situations are highly stable and are easily captured as a set of procedures. Insertion of a central line is a well-ordered situation in which a checklist approach has markedly reduced line infections (Pronovost et al., 2006). A central line is inserted into a large vein to deliver blood, medications, fluids, or nutrients for an extended period of time. In situations like this, there are fairly unambiguous tasks to be accomplished and clear criteria for success.

Complex situations, in contrast, contain many variables that must be taken into account. These variables all interact and vary over time as the patient’s status changes. Further, a patient may be suffering from several different problems, making the patient’s current status difficult to assess. A patient may have both asthma and diabetes. Severe asthma requires the use of steroids, but the steroids will drive up blood glucose. The physician cannot rely on either an asthma protocol or a diabetes protocol alone but will have to take both conditions into account, performing trade-offs that also reflect the characteristics and lifestyle of the individual patient. Much of health care involves wicked problems (Rittel & Webber, 1973) that do not have an unambiguously correct solution or single right treatment—but even so, some solutions are better than others.

Rules and evidence are about populations, but physicians have to treat individual patients. The evidence may focus on average patient response data, downplaying the distribution and ignoring considerable individual variations. A given drug may be ineffective when averaged over an entire sample but may work at one of the extremes of this sample.

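The averaging point can be shown with toy numbers. The response values below are invented for illustration, not drawn from any trial: an average treatment effect near zero can hide a subgroup with a clear benefit.

```python
# Toy numbers (invented) showing how an average treatment effect of 0
# can hide a subgroup that benefits. Each value is one hypothetical
# patient's response to the drug (positive = improvement).
subgroup_a = [-2.0, -1.5, -2.5, -2.0]   # harmed or unaffected
subgroup_b = [2.0, 1.5, 2.5, 2.0]       # clear benefit

all_patients = subgroup_a + subgroup_b

def mean(xs):
    return sum(xs) / len(xs)

print(mean(all_patients))  # 0.0: "ineffective" averaged over the sample
print(mean(subgroup_b))    # 2.0: effective at one extreme of the sample
```

A guideline built on the pooled average would discard the drug, even though half of this (fictional) sample responds strongly; the physician's problem is deciding which subgroup the patient in front of them belongs to.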
Physicians have to adapt the general or categorical guidance of best practices to fit the individual patient in front of them. That is difficult because the research base consists of studies that vary one thing at a time, so it is hard to adapt the findings to all of the factors present in an individual patient. The research base cannot vary all of the relevant factors without quickly running into combinatorial explosions and confounded variables.

Global rules cannot be applied blindly to specific patients. Patients, diseases, and comorbidities are more variable than the categories of best practices. As a result, no set of rules by itself can completely specify how to treat individual patients under complex conditions. This finding holds regardless of whether sets of rules are organized as best practices, written as procedures, or mechanized in computers; rules are resources for considered action (Suchman, 1987; Woods, Roth, & Bennett, 1990). Skilled physicians certainly use statistically based generalization, but they also use other processes and inputs for situated cognition and action (Suchman, 1987).

One of the primary attractions of best practices is simplicity—the potential to dial in a course of treatment once a diagnosis is made. However, simplicity is also a limitation when confronting the complexity of individual patients.

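The asthma-and-diabetes bind above can be sketched as two single-condition protocols colliding. The protocol contents are invented placeholders, not clinical advice; the point is that nothing in the rules themselves resolves the conflict, so the trade-off falls to the physician:

```python
# Hypothetical single-condition protocols colliding on a comorbid
# patient. Rule contents are invented stand-ins, not clinical advice.
PROTOCOLS = {
    "severe_asthma": {"give": ["steroids"], "avoid": []},
    "diabetes": {"give": [], "avoid": ["steroids"]},  # steroids raise glucose
}

def conflicts(conditions):
    """Treatments that one protocol prescribes and another forbids."""
    give = {t for c in conditions for t in PROTOCOLS[c]["give"]}
    avoid = {t for c in conditions for t in PROTOCOLS[c]["avoid"]}
    return give & avoid

print(conflicts(["severe_asthma", "diabetes"]))  # {'steroids'}
```

Each protocol is internally coherent; the contradiction only appears when they are composed over one patient, which is exactly where best-practice categories run out.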
Challenge 5: Revising Treatment Plans That Do Not Seem to Be Working
Health care providers often must rapidly determine when a treatment plan needs to be adapted. Plans have to be adapted for several reasons: They may not be working, complex situations are continually changing, and patient status may fluctuate. Plans are often created for these “wicked problems,” involving ill-defined goals that need to be revised. Medicine is practiced with patients who improve or deteriorate or develop additional symptoms and problems over time.

However, EBM is punctate: given these data, here are the recommendations. EBM is not well suited for plan adaptation. Practicing physicians who want to adhere to EBM have to wrestle with several challenges. They need to pick up early signs that a best practice is not producing the intended effects, early signs that expectancies have been violated. They have to determine when and how to gather evidence to test these concerns and when to revise or withdraw a best practice.

Plan revision places great demands on expertise in understanding the treatment regimen and the individual patient so that revisions can be made quickly and effectively. Physicians need skill in revising a treatment plan without being too impatient to give it a chance to work or waiting so long that the patient’s chances of recovery are diminished. Rudolph, Morrison, and Carroll (2009) have examined the dynamic of hanging onto a plan for a while but being prepared to drop it when required. Research on dynamic and adaptive decision making can inform medical practice and be extended to enhance patient safety (Amalberti, 2013; Brehmer, 1987, 1992; Kylesten, 2013).

Woods (1994) and Woods and Hollnagel (2006) model anomaly response and plan revision based on studies of abnormal events in aviation, nuclear power emergencies, and space shuttle missions. As anomalies occur, judgments about when and how to revise a plan are difficult. Gaba, Maxwell, and DeAnda (1987) documented this need for adaptation in their study of anesthesia crises. Klein (2007a, 2007b) has found that revising a plan can be more difficult than initiating it because once a plan is under way, it can be difficult to disentangle the condition from the treatment. The patient may be getting worse or might be having a bad reaction to the medication even as the underlying disease is being brought under control. Plan revision can range from changing a schedule, to the more serious modification of adding or deleting tasks, to the even more serious step of replacing one strategy with another in order to achieve the goals, and, in the most extreme cases, to revising or replacing the goals themselves. Applying EBM assumes the best practice has the desired effect and fails to speak to the cognitive and teamwork issues in plan revision.

Challenge 6: Considering Remedies That Are Not Best Practices
Best-practice guidelines may not be available to cover all of the situations physicians face, or may not address all aspects of a patient’s case; yet physicians still have to make treatment decisions. How should practitioners use seemingly relevant evidence that is not derived from controlled studies, or fill the gaps for the patients they encounter? We appreciate that not all EBM advocates insist on using evidence only when it is derived from double-blinded studies, and that in many circumstances it will be impossible to set up this kind of design. This raises the question of the role of less rigorously documented practices. Should physicians withhold remedies that are not part of prescribed or sanctioned “best practices,” even when those practices turn out to be incomplete or insufficient for the patient who is suffering now? How should physicians act now, when research trials to expand best-practices knowledge may not be completed for years? An all-or-none stance on rigor leaves a gap between the development of rigorous knowledge and the need to act now, when knowledge is incomplete. And what should be done about meta-analyses that throw out all information from studies that fail to meet the highest levels of rigor, even though those studies may provide some provisional information? Studies may fail to generate significant results because of variability in the data, but that variability may reflect subpopulations that have different reactions to treatments. Thus, meta-analyses can mask different reactions among various subpopulations.

These examples identify another judgment that EBM downplays but that has been studied in cognitive engineering—the sufficient-rigor judgment (Zelik, Patterson, & Woods, 2010). Although one would like to make decisions based only on the most rigorous evidence, in the real world there are always constraints from limited resources and time pressure. This means that in practice there is a judgment about what level of rigor is sufficient at that point in time, given the uncertainty and the risks of acting too late or too early. There are ways to assess and assist the sufficient-rigor judgment as decision makers struggle under pressure.

A related challenge is the role of practitioners in providing evidence. Does the evidence base always fall within the province of medical researchers, or will data from other sources be accepted? Unexpected twists in a case, unexpected reactions, and coincidences have all contributed new hypotheses and ideas. Spectacular progress in surgery over the past century—joint replacement, cardiac valve replacement, reconstructions, and so forth—has been achieved without initial randomized control trials (O’Sullivan, 2010).

Finally, it seems foolish to ignore “rock stars”—individual practitioners who achieve much higher success rates than others. Staszewski (2004) provided an example of a rock star in the domain of mine detection. Conventional mine detectors proved ineffective in the face of a new generation of mines that relied on plastic parts instead of metal, so the Army invested $38 million over 9 years to develop the next generation of handheld mine detector. Unfortunately, tests showed that the new version was not any better than the previous version; both achieved only 10% to 20% accuracy. However, there were a few specialists, one in particular, who were able to achieve success rates greater than 90% with the new equipment. By studying their strategies, it was possible to develop a training course that allowed soldiers to jump from 20% accuracy to 90% accuracy. When there are only a few experts, or just a single expert, statistical evaluation will be impossible. However, these standouts, sometimes referred to as positive deviants (Pascale & Sternin, 2010), offer important lessons. Of course, there is a risk of drawing the wrong lessons from standouts. For example, one risk of naive copying of successful cases involves undersampling failure (Denrell, 2003; Denrell & Fang, 2010). Organizations may try to learn from other, successful organizations and conclude that bold, risky decisions are essential. However, this conclusion misses the cases of organizations that failed because they took unfortunate risks; the failures are no longer in business and unavailable for study. The Staszewski example was not naive copying of standouts but a detailed study to understand what made the standouts successful. In contrast, “best practice” as a paradigm in health care easily slides into naive copying. The Staszewski example also illustrates another aspect of experience and expertise. Expertise is not expressible in the form of rule sets and so can be easily dismissed as mere intuition, and unreliable. But forms of expertise, such as the recognition of patterns of relationships, can be identified, modeled, and supported (Hoffman et al., 2014).

DISCUSSION
Best-practices approaches oversimplify the cognitive challenges of putting general guidance into action when confronting specific complex situations under pressure. The world presents complexities and uncertainties that cannot all be overcome with sets of best practices. The variability of diseases and patients and the interactions across patient conditions spill over the category boundaries of best-practice guidance. Things do not always go as intended or planned, as harried clinicians treat patients with chronic conditions and comorbidities. The knowledge of what is “best” changes, and the sources of discovery of new knowledge are highly diverse and opportunistic.

In this context, EBM seems to overemphasize the limitations of heuristics, intuition, and expertise without appreciating their strengths. As a result, EBM advocates offer programs that seek to substitute for experience rather than augment it. A strict reliance on sanctioned evidence runs the risk of diminishing the experience and skills of practitioners rather than strengthening and calibrating those experiences, skills, and expertise (Hoffman et al., 2014). We are concerned that practitioners in a variety of disciplines may have trouble gaining expertise if they just mechanically apply prescribed rules. This has already occurred in other areas, such as aviation, where pilots’ ability to handle non-normal and abnormal flight situations has eroded as overreliance on automation has grown (Abbott, McKenney, & Railsback, 2013).

Evidence-based best-practices programs have become a valuable antidote to anecdotal practice. Evidence, however, does not speak for itself. It needs to be interpreted, revised, and tailored to specific contexts and conditions, all of which takes expertise. In health care, the best-practices approach of EBM works best in well-ordered situations—for example, for tasks such as inserting a central line, where the focus is on the task, not on the patient, and context is largely irrelevant. Problems arise when these successes encourage the medical community to establish best practices for complex situations that depend on interpreting subtle cues about specific patients, for instance, whether or not a patient is in good enough condition to be transitioned safely from the recovery room or intensive care unit to other levels of care (Cook, 2006).

EBM works smoothly when the evidence is clear and directly applicable to a patient. What about the more challenging situations in which physicians have to make decisions without good evidence? What about a patient with a moderate condition—does the physician apply best practices from studies with patients who have extreme forms of the illness? What about conflicting needs, such as a patient with nonextreme kidney disease? The best practice is to minimize protein, ideally to no more than 4 ounces a day, but that is for more severely ill patients; this patient is also advised to counteract steroid treatment by building up muscle and—you guessed it—ingesting more protein. These kinds of decisions do not line up neatly with EBM and best practices. They require judgment, expertise, and processes well studied in cognitive engineering.

The cognitive engineering community can complement the best-practices movement. It seeks to identify subtle aspects of expertise needed to integrate knowledge and apply it to the variability and complexity of individual patient cases, for example, those involving tacit knowledge (e.g., G. Klein, 2009). And it addresses the limitations of rules, procedures, and best practices (Suchman, 1987). The six cognitive challenges we examine in this article illustrate the importance of expertise and the boundary conditions for evidence and best practices.

Therefore, the cognitive engineering com-
munity is in a position to develop methods for
improving decision making and patient care by
taking advantage of evidence without becom-
ing trapped by accepted best practices that can-
not cope with complexity (Cook et al., 2000).
The cognitive engineering research suggests
eight directions for strengthening best practices
strategies.
8 Month XXXX - Journal of Cognitive Engineering and Decision Making
(a) Develop and sustain expertise. In order
to maintain a balance of evidence and expertise,
the cognitive engineering/naturalistic decision
making (NDM) community has advocated for
ways to build expertise (Ericsson, Charness,
Feltovich, & Hoffman, 2006; Klein, 2005;
Hoffman et al., 2014). Ericsson (2004) found
that the longer it had been since general physi-
cians had completed their formal medical train-
ing, the greater the reductions in accuracy and
consistency of cardiac diagnoses. That is why it
is so important to apply methods for training
and sustaining skilled performance in areas
such as health care, aviation, intelligence analy-
sis, and so forth. Expertise often takes the form
of tacit knowledge, so training in perceptual
skills, pattern recognition, anomaly detection,
and mental models becomes critical. Decision
makers cannot rely on explicit knowledge
(facts, rules, procedures) alone. One example of
support for developing tacit knowledge is ShadowBox
training (Klein & Borders, in press), which focuses
on speeding the development of expertise through
comparisons and introspection.
(b) Support adaptation. Expertise depends
on tacit knowledge and, in particular, the use of
tacit knowledge to judge when to change treat-
ments and when to depart from best practices for
individual cases. For example, in health care, we
suggest replacing global prescriptions about
populations with expectations that physicians
will adapt treatment plans to the needs of spe-
cific patients. Physicians can shift to customized
and annotated plans, not only about medication
management but generally about patient treat-
ment strategies.
(c) Combine evidence with experience. We
see little benefit in contrasting evidence and
experience. Both provide a basis for practitio-
ners to make decisions. Sometimes they will
conflict, and the decision maker needs to navi-
gate through the conflict. Balancing the compet-
ing claims of evidence and experience, and the
strengths and limitations of each approach, is
not unique to medicine. This duality appears in
many different disciplines. For example, the
new Pearson Q-interactive assessment tool
allows clinicians to select tests to be given
(Delis, 2014) but makes suggestions based on
other experts (Eric Saperstein, personal commu-
nication, January 14, 2015). Experienced behav-
ioral researchers may review the results of a
study in terms of means and standard deviations
but then look at individual participants for signs
of anomalies and unusual patterns.
(d) Balance generic evidence with experien-
tial evidence. There is a further conflict within
evidence-based approaches: the confidence
placed in general evidence drawn from popula-
tions versus experiential evidence drawn from
the individual cases. In health care, EBM
encourages the medical community to rely on
generic evidence rather than the evidence of
their own patients. However, both types of evi-
dence seem important in making treatment deci-
sions. There are several ways to address this.
Gigerenzer (2002) has used frequency data to
present generic evidence in a form that contextualizes
it for the individual case. Another
approach is to make better use of displays that
illustrate the various parameters and context of a
situation to allow for greater resilience (Hollna-
gel, Woods, & Leveson, 2006; Nemeth,
O’Connor, Klock, & Cook, 2006).
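To make the frequency-format idea concrete, the following sketch re-expresses a screening test's probabilities as natural frequencies, the representation Gigerenzer advocates. The test characteristics (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions, not figures from any cited study:

```python
def natural_frequencies(population, prevalence, sensitivity, false_positive_rate):
    """Re-express probabilistic evidence as counts in a reference population,
    the frequency format Gigerenzer advocates for communicating risk."""
    sick = round(population * prevalence)
    healthy = population - sick
    true_positives = round(sick * sensitivity)
    false_positives = round(healthy * false_positive_rate)
    # Probability that a positive result reflects actual disease (PPV)
    ppv = true_positives / (true_positives + false_positives)
    return {
        "sick": sick,
        "true_positives": true_positives,
        "false_positives": false_positives,
        "ppv": ppv,
    }

# Illustrative test: 1% prevalence, 90% sensitivity, 9% false-positive rate
result = natural_frequencies(1000, 0.01, 0.90, 0.09)
# Out of 1,000 people, 10 are sick and 9 of them test positive, but 89
# healthy people also test positive, so a positive result indicates
# disease only about 9% of the time.
```

Stated as counts rather than conditional probabilities, the same evidence becomes far easier for a decision maker to contextualize.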
(e) Represent evidence. Publication of scien-
tific papers is not sufficient. We need to provide
data in more easily digested forms to help deci-
sion makers see how to personalize the findings
to specific cases. Additionally, we advocate
ways to offer clearer presentation of effect sizes
and clearer presentation of variability, even
speculating about the clusters of study partici-
pants who gained the most and the least. Using
the language of statisticians, we are advocating
for ways to highlight Subject × Treatment
interactions.
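As a sketch of why Subject × Treatment interactions matter, the hypothetical outcome scores below produce a respectable pooled effect size that conceals a strong response in one subgroup and essentially none in another (the data are invented for illustration):

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = ((stdev(treated) ** 2 + stdev(control) ** 2) / 2) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical outcome scores for two patient subgroups
subgroup_a = {"treated": [8, 9, 10, 9], "control": [5, 6, 5, 6]}  # strong responders
subgroup_b = {"treated": [5, 5, 6, 5], "control": [5, 6, 5, 5]}   # little response

overall_treated = subgroup_a["treated"] + subgroup_b["treated"]
overall_control = subgroup_a["control"] + subgroup_b["control"]

d_overall = cohens_d(overall_treated, overall_control)
d_a = cohens_d(subgroup_a["treated"], subgroup_a["control"])
d_b = cohens_d(subgroup_b["treated"], subgroup_b["control"])
# The pooled effect size hides the fact that nearly all of the
# benefit comes from subgroup A.
```

Reporting only `d_overall` would invite applying the treatment to patients resembling subgroup B, for whom the evidence shows almost no benefit.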
(f) Appraise evidence. Much of the evidence
we believe in today will be discarded in 5 to 10
years’ time. Decision makers cannot blindly
accept the latest studies. They have to gain skills
in judging how much confidence to place in evidence.
At a minimum, they have to be able to recognize
that a small difference, although statistically
significant, may not warrant much confidence.
The cognitive engineering/NDM community
has studied populations that routinely face this
appraisal challenge. Intelligence analysts, for
example, are always alert to the possibility of
accidental, erroneous, or even deceptive data
points. Panel operators of petrochemical plants
must likewise remain watchful for erroneous
sensor data. They are trained
on garden-path scenarios involving flawed evi-
dence and mistaken initial assessments.
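The point that a statistically significant difference may deserve little confidence can be made concrete. Under a normal approximation to the two-sample t-test, the same negligible standardized difference becomes "significant" simply by enlarging the sample; the effect size and sample sizes below are illustrative assumptions:

```python
import math

def p_value_for_effect(d, n_per_group):
    """Two-sided p-value (normal approximation to the two-sample t-test)
    for a standardized mean difference d with n participants per group."""
    z = d * math.sqrt(n_per_group / 2)
    return math.erfc(abs(z) / math.sqrt(2))

tiny_effect = 0.05  # a clinically negligible standardized difference
p_small_study = p_value_for_effect(tiny_effect, 100)     # far from significant
p_large_study = p_value_for_effect(tiny_effect, 10_000)  # clears p < .05
# With 10,000 patients per arm, even d = 0.05 is "statistically
# significant," so significance alone says little about how much
# confidence the evidence deserves.
```

A decision maker appraising such a study should weigh the effect size and its practical meaning, not just the p-value.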
(g) Share evidence. Expanding the use of
information-exchange mechanisms will help the
entire medical community learn about new treat-
ments that have not yet been vetted by the stan-
dards of best practices. Several communities
have made powerful use of information
exchanges, for example, the informal and
impromptu lessons learned from chat rooms that
sprang up during Operation Iraqi Freedom to
trade observations about topics such as detecting
improvised explosive devices. The health care
domain has established the Patient-Centered
Outcomes Research Institute, tasked with sifting
data from electronic medical records to identify
promising therapies that could make a valuable
contribution without having to rely on double-
blind experiments. With appropriate oversight,
synthesis, and medicolegal protection, such
forums would provide an opportunity for a
broader dialogue on how to balance best prac-
tices and expertise.
(h) Support collaborative decision making.
Cognitive engineering/NDM researchers pay a
great deal of attention to effective teamwork.
Best practices should not be narrowly drawn as
the province of the decision maker. The decision
maker has to establish trade-offs and coordina-
tion with team members. For example, within
health care, the team includes physicians, nurses,
and various other professionals. The team con-
cept should be broadened to include patients. A
best practice has little value if the patient is
unable or unwilling to adhere to the regimen.
Physicians can blame the patient, but a more
useful stance is to take the patient’s abilities and
motivations into account in designing a treat-
ment program, even if it means departing from
the generally accepted best practices. A subopti-
mal regimen that a patient can sustain may be
better than an optimal one that the patient will
ignore. Health care professionals can use what
we have learned about adherence (e.g., D. Klein,
2009) in designing individual treatment
programs.
CONCLUSIONS
Best practices are an important opportunity
for any community to shed outmoded traditions
and unreliable anecdotal procedures. They provide
an opportunity for scrutiny, debate, and
progress. They enable organizations to act in a
consistent way. However, as we have argued,
best practices come with their own challenges.
Cognitive engineering and NDM studies
have shown some of the difficulties of using evi-
dence in situations that have a great deal of vari-
ability, uncertainty, and risk. In effect, decision
makers in domains such as health care need
plans like best practices but also need to be
effective at revising plans to fit the dynamics
and variability of specific situations (e.g.,
patients and diseases) and to handle the chang-
ing knowledge about what is effective.
The approach of cognitive engineering and
NDM focuses on layering best practices with
experiential knowledge of different situations.
In this way, decision makers can handle specific
situations, regardless of variability, uncertainty,
and change.
We should regard best practices as provi-
sional, not optimal, as a floor rather than a ceil-
ing. When we label an approach a best practice,
it tends to become a ceiling that is hard to change
even as more knowledge is gained. Instead, we
can identify provisional best practices that serve
as a floor while learning goes forward. It is a
move from “best practices” to “better practices”
that frees us from undocumented anecdotal
approaches and forces a commitment to contin-
ual improvement.
ACKNOWLEDGMENTS
We would like to thank Emilie Roth, Emily Pat-
terson, and Laura Militello for their helpful feed-
back. We would also like to thank three anonymous
reviewers for their extremely thoughtful comments
and suggestions.
REFERENCES
Abbott, K., McKenney, D., & Railsback, P. (2013). Operational
use of flight path management systems. Retrieved from
http://www.faa.gov/about/office_org/headquarters_offices/
avs/offices/afs/afs400/parc/parc_reco/media/2013/130908_
PARC_FltDAWG_Final_Report_Recommendations.pdf
Amalberti, R. (2013). Navigating safety: Necessary compromises
and tradeoffs, theory and practice. Dordrecht, Netherlands:
Springer-Verlag.
Brehmer, B. (1987). Development of mental models for decision in
technological systems. In J. Rasmussen, K. Duncan, & J. Leplat
(Eds.), New technology and human error (pp. 111–120). Chich-
ester, UK: Wiley.
Brehmer, B. (1992). Dynamic decision making: Human control of
complex systems. Acta Psychologica, 81, 211–241.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cogni-
tive Psychology, 4, 55–81.
Chatterjee, T. (2015). Out of the darkness. Science, 350, 372–375.
Cook, R. I. (2006). Being bumpable: Consequences of resource
saturation and near-saturation for cognitive demands on ICU
practitioners. In D. D. Woods & E. Hollnagel (Eds.), Joint cog-
nitive systems: Patterns in cognitive systems engineering (pp.
23–35). Boca Raton, FL: CRC Press.
Cook, R. I., Render, M. L., & Woods, D. D. (2000). Gaps in the
continuity of care and progress on patient safety. British Medi-
cal Journal, 320, 791–794.
DeAnda, A., & Gaba, D. (1991). The role of experience in the
response to simulated critical incidents. Anesthesia and Anal-
gesia, 72, 308–315.
Delis, D. (2014, October 31). Cognitive assessment leaps into the
digital age. ESchool News. Retrieved from http://www.eschool
news.com/2014/10/31/cognitive-assessment-digital-429/
Denrell, J. (2003). Vicarious learning, undersampling of failure,
and the myths of management. Organization Science, 14,
228–243.
Denrell, J., & Fang, C. (2010). Predicting the next big thing: Suc-
cess as a signal of poor judgment. Management Science, 56,
1653–1667.
Ericsson, K. A. (2004). Deliberate practice and the acquisition and
maintenance of expert performance in medicine and related
domains. Academic Medicine, 79, S70–S81.
Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R.
(Eds.). (2006). The Cambridge handbook of expertise and expert
performance. Cambridge, UK: Cambridge University Press.
Gaba, D., Maxwell, M., & DeAnda, A. (1987). Anesthetic mishaps:
Breaking the chain of accident evolution. Anesthesiology, 66,
670–676.
Ghaffarzadegan, N., Epstein, A. J., & Martin, E. G. (2013). Prac-
tice variation, bias, and experiential learning in Cesarean deliv-
ery: A data-based system dynamics approach. Health Services
Research, 48, 713–734.
Gigerenzer, G. (2002). Calculated risks: How to know when num-
bers deceive you. New York, NY: Simon & Schuster.
Gray, J. A. M. (1996). Evidence-based healthcare. London, UK:
Churchill Livingstone.
Hoffman, R. R., Ward, P., Feltovich, P. J., DiBello, L., Fiore,
S. M., & Andrews, D. H. (2014). Accelerated expertise: Train-
ing for high proficiency in a complex world. New York, NY:
Psychology Press.
Hollnagel, E., Woods, D. D., & Leveson, N. (2006). Resilience
engineering: Concepts and precepts. Farnham, UK: Ashgate.
Kahneman, D., & Klein, G. A. (2009). Conditions for intuitive
expertise: A failure to disagree. American Psychologist, 64,
515–526.
Kaiser, J. (2015). The cancer test: A nonprofit’s effort to repli-
cate 50 top cancer papers is shaking up labs. Science, 348,
1411–1413.
Klein, D. E. (2009). The forest and the trees: An integrated
approach to designing adherence interventions. Australasian
Medical Journal, 1, 181–184.
Klein, G. (1998). Sources of power: How people make decisions.
Cambridge, MA: MIT Press.
Klein, G. (2005). The power of intuition. New York, NY: Currency/
Doubleday.
Klein, G. (2007a). Flexecution as a paradigm for replanning, Part 1.
IEEE Intelligent Systems, 22, 79–83.
Klein, G. (2007b). Flexecution, Part 2: Understanding and support-
ing flexible execution. IEEE Intelligent Systems, 22, 108–112.
Klein, G. (2009). Streetlights and shadows: Searching for the keys
to adaptive decision making. Cambridge, MA: MIT Press.
Klein, G., & Borders, J. (in press). The ShadowBox approach to
cognitive skills training: An empirical evaluation. Journal of
Cognitive Engineering and Decision Making.
Klein, G., Pliske, R., Crandall, B., & Woods, D. (2005). Problem
detection. Cognition, Technology, and Work, 7, 14–28.
Kylesten, B. (2013). Dynamic decision-making on an operative
level: A model including preconditions and working method.
Cognitive Technology & Work, 15, 197–205.
Marshall, B. J. (2005, December). Helicobacter connections.
Nobel lecture, Stockholm, Sweden.
Nemeth, C., O’Connor, M., Klock, P. A., & Cook, R. (2006). Dis-
covering healthcare cognition: The use of cognitive artifacts to
reveal cognitive work. Organization Studies, 27, 1011–1035.
O’Sullivan, G. C. (2010). Advancing surgical research in a sea of
complexity. Annals of Surgery, 252, 711–714.
Pascale, R., & Sternin, J. (2010). The power of positive deviance:
How unlikely innovators solve the world’s toughest problems.
Boston, MA: Harvard Business Review Press.
Perry, S. J., & Wears, R. L. (2011). Large scale coordination of
work: Coping with complex chaos within healthcare. In K. L.
Mosier & U. Fisher (Eds.), Informed by knowledge: Expert
performance in complex situations (pp. 55–59). New York,
NY: Taylor & Francis.
Pronovost, P., Needham, D., Berenholtz, S., Sinopoli, D., Chu,
H., Cosgrove, S., Sexton, B., Hyzy, R., Welsh, R., Roth, G.,
Bander, J., Kepros, J., & Goeschel, C. (2006). An intervention
to decrease catheter-related bloodstream infections in the ICU.
New England Journal of Medicine, 355, 2725–2732.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general
theory of planning. Policy Sciences, 4, 155–169.
Roberts, A. R., & Yeager, K. R. (Eds.), (2004). Evidence-based
practice manual: Research and outcome measures in health
and human services. New York, NY: Oxford University
Press.
Rudolph, J. W., Morrison, J. B., & Carroll, J. S. (2009). The dynam-
ics of action-oriented problem solving: Linking interpretation
and choice. Academy of Management Review, 34, 733–756.
Staszewski, J. (2004). Models of expertise as blueprints for cogni-
tive engineering: Applications to landmine detection. In Pro-
ceedings of the Human Factors and Ergonomics Society 48th
Annual Meeting (pp. 458–462). Santa Monica, CA: Human
Factors and Ergonomics Society.
Suchman, L. A. (1987). Plans and situated actions: The problem of
human–machine communication. Cambridge, UK: Cambridge
University Press.
Timmermans, S., & Berg, M. (2003). The gold standard: The
challenge of evidence-based medicine and standardization in
healthcare. Philadelphia, PA: Temple University Press.
Wears, R. L., & Hunte, G. S. (2014). Seeing patient safety “like a
state.” Safety Science, 67, 50–57.
Wears, R. L., & Schubert, C. C. (2015). Visualizing expertise in
context. Annals of Emergency Medicine. Advance online pub-
lication. doi:10.1016/j.annemergmed.2015.11.027
Woods, D. D. (1994). Cognitive demands and activities in dynamic
fault management. In N. Stanton (Ed.), Human factors in
alarm design (pp. 63–92). London, UK: Taylor & Francis.
Woods, D. D. (2005). Creating foresight: Lessons for resilience
from Columbia. In W. H. Starbuck & M. Farjoun (Eds.),
Organization at the limit: NASA and the Columbia disaster
(pp. 289–308). Malden, MA: Blackwell.
Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems:
Patterns in cognitive systems engineering. Boca Raton, FL:
Taylor & Francis.
Woods, D. D., Roth, E. M., & Bennett, K. B. (1990). Explorations
in joint human–machine cognitive systems. In S. Robertson,
W. Zachary, & J. Black (Eds.), Cognition, computing and
cooperation (pp. 123–158). Norwood, NJ: Ablex.
Zelik, D., Patterson, E. S., & Woods, D. D. (2010). Measuring attri-
butes of rigor in information analysis. In E. S. Patterson & J.
Miller (Eds.), Macrocognition metrics and scenarios: Design
and evaluation for real-world teams (pp. 65–83). Aldershot,
UK: Ashgate.
Devorah E. Klein, a senior scientist with Marimo
Consulting, LLC, is a cognitive psychologist
working to design medical products, systems, and
services.
David D. Woods is a professor in the Department of
Integrated Systems Engineering at The Ohio State
University and past president of the Human Factors
and Ergonomics Society and the Resilience Engi-
neering Association.
Gary Klein is a senior scientist with MacroCogni-
tion, LLC, and the author of Seeing What Others
Don’t: The Remarkable Ways We Gain Insights.
Shawna J. Perry is an emergency medicine physician
and visiting scholar at the University of Florida
School of Medicine.
by guest on April 18, 2016edm.sagepub.comDownloaded from
... This is because the variability of diseases and patients and the interactions across patient conditions spill over the category boundaries of best-practice guidance. In addition, the scientific evidence presented in written guidelines and recommendations does not always speak for itself but needs to be interpreted, revised, and tailored to specific contexts and conditions, all of which takes experience and expertise [30,[32][33][34]. As a result, experts believed that in some situations patient safety could only be guaranteed by not following rules if this was supported by a valid mental model or social understanding or both of the situation [35]. ...
... Experts, in contrast, knew exactly what intuitive decision making feels like and were able, at least to some extent, to talk about it. In addition, they reported that they checked their intuitions with conscious deliberation before acting upon the first, an approach that has been termed 'informed intuitions' in the literature on decision making [32]. ...
Article
Full-text available
Background: The development of expertise in anaesthesia requires personal contact between a mentor and a learner. Because mentors often are experienced clinicians, they may find it difficult to understand the challenges novices face during their first months of clinical practice. As a result, novices' perspectives may be an important source of pedagogical information for the expert. The aim of this study was to explore novice and expert anaesthetists understanding of expertise in anaesthesia using qualitative methods. Methods: Semi-structured interviews were conducted with 9 novice and 9 expert anaesthetists from a German University Hospital. Novices were included if they had between 3 and 6 months of clinical experience and experts were determined by peer assessment. Interviews were intended to answer the following research questions: What do novices think expertise entails and what do they think they will need to become an expert? What do experts think made them the expert person and how did that happen? How do both groups value evidence-based standards and how do they negotiate following written guidance with following one's experience? Results: The clinical experience in both groups differed significantly (novices: 4.3 mean months vs. experts: 26.7 mean years; p < 0.001). Novices struggled with translating theoretical knowledge into action and found it difficult to talk about expertise. Experts no longer seem to remember being challenged as novice by the complexity of routine tasks. Both groups shared the understanding that the development of expertise was a socially embedded process. Novices assumed that written procedures were specific enough to address every clinical contingency whereas experts stated that rules and standards were essentially underspecified. For novices the challenge was less to familiarise oneself with written standards than to learn the unwritten, quasi-normative rules of their supervising consultant(s). 
Novices conceptualized decision making as a rational, linear process whereas experts added to this understanding of tacit knowledge and intuitive decision making. Conclusions: Major qualitative differences between a novice and an expert anaesthetist's understanding of expertise can create challenges during the first months of clinical training. Experts should be aware of the problems novices may have with negotiating evidence-based standards and quasi-normative rules.
... Evidence-based medicine (EBM) seeks to establish a set of best practices for physicians by identifying the treatment of interest and researching the effectiveness of the treatment (Gray and Chambers, 1997). But there are also cognitive challenges involved in using EBM for diagnosis (Klein et al., 2016). Physicians trained in methods of EBM are more likely to use Bayes' theorem for diagnosis than untrained ones (Shaughnessy, 2007). ...
Article
AI systems are increasingly being fielded to support diagnoses and healthcare advice for patients. One promise of AI application is that they might serve as the first point of contact for patients, replacing routine tasks, and allowing health care professionals to focus on more challenging and critical aspects of healthcare. For AI systems to succeed, they must be designed based on a good understanding of how physicians explain diagnoses to patients, and how prospective patients understand and trust the systems providing the diagnosis, as well as the explanations they expect. In this thesis, I examine this problem across three studies. In the first study, I interviewed physicians to explore their explanation strategies in re-diagnosis scenarios. I identified five broad categories of explanation strategies and I developed a generic diagnostic timeline of explanations from the interviews. For the second study, I tested an AI diagnosis scenario and found that explanation helps improve patient satisfaction measures for re-diagnosis. Finally, in a third study I implemented different forms of explanation in a similar diagnosis scenario and found that visual and example-based explanation integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations, or with text-based rationales alone. Based on these studies and the review of the literature, I provide some design recommendations for the explanations offered for AI systems in the healthcare domain.
... The field of health care, where decisions often have great impact on human lives, also strives to make decisions that optimize outcomes. Evidence-based medicine (EBM) is a movement that promotes the use of data-driven learnings from clinical research and clinical practice to provide a more solid ground for judgements and decisions in empirical evaluation of past results (Klein et al., 2016). Using data analysis can help prevent errors in judgement of for example, the likelihood of a certain diagnosis and it can help to optimize outcomes of treatment decisions that are easily quantifiable across certain populations. ...
Article
Full-text available
Clinical Decision Support (CDS) aims at helping physicians optimize their decisions. However, as each patient is unique in their characteristics and preferences, it is difficult to define the optimal outcome. Human physicians should retain autonomy over their decisions, to ensure that tradeoffs are made in a way that fits the unique patient. We tend to consider autonomy in the sense of not influencing decision-making. However, as CDS aims to improve decision-making, its very aim is to influence decision-making. We advocate for an alternative notion of autonomy as enabling the physician to make decisions in accordance with their professional goals and values and the goals and values of the patient. This perspective retains the role of autonomy as a gatekeeper for safeguarding other human values, while letting go of the idea that CDS should not influence the physician in any way. Rather than trying to refrain from incorporating human values into CDS, we should instead aim for a value-aware CDS that actively supports the physician in considering tradeoffs in human values. We suggest a conversational AI approach to enable the CDS to become value-aware and the use of story structures to help the user integrate facts and data-driven learnings provided by the CDS with their own value judgements in a natural way.
... The design of the NAVPLAN integrates explicit knowledge built on navigator's best practices. Thus, it enables navigators in controlling specific situations, regardless of variability and uncertainty [9]. Anticipatory thinking ability is a kind of sensemaking that helps facing problems that are not clearly presented. ...
Chapter
Modern technology revolutionised marine navigation, reducing errors and increasing navigation safety. However, the same technology has been associated with critical accidents and navigators’ errors. On the other hand, expert mariners have proved to manage complex situations, adapting to unforeseen events successfully. To better understand the effects of new technologies and how work is currently done, the Portuguese navy promoted a study about navigation team performance. The results suggest that navigation technology appears to have a strong anchoring effect on team activity. While sensemaking and intuitive judgements complement the shortfalls of the decision support system (DSS), it was found that the combination of high automation influence with lack of coordination leads to a collaborative biased perception of the situation.
Article
Almost half of projects have failed globally during the last 50 years yet most studies in the literature review were inclusive. The research design was a robust repeated measures controlled experiment where the 16 participants received all treatments, which may be contrasted to a similar 4 x 4 factorial experiment with a control group (common in psychology or healthcare) resulting in a group size of only 4. All but the individual project manager (PM) factors were controlled, while primary demographic and behavior data were collected. PM’s were tested for competence using a risk management scenario, and given two manipulated conditions (a basic and a biased treatment). Since the organizational and project level factors were controlled, some individual level factors impacted the decision. PM’s with higher competence made better decisions, with a 22% effect size, when all other factors in the model were accounted for. Competent non-certified PM’s made better decisions as compared to certified incompetent PM’s.
Chapter
Full-text available
AFET YÖNETİMİNDE KANIT TEMELLİ İÇERİK
Article
Full-text available
This study aims to explore how physicians make sense of and give meaning to their decision-making during obstetric emergencies. Childbirth is considered safe in the wealthiest parts of the world. However, variations in both intervention rates and delivery outcomes have been found between countries and between maternity units of the same country. Interventions can prevent neonatal and maternal morbidity but may cause avoidable harm if performed without medical indication. To gain insight into the possible causes of this variation, we turned to first-person perspectives, and particularly physicians' as they hold a central role in the obstetric team. This study was conducted at four maternity units in the southern region of Sweden. Using a narrative approach, individual in-depth interviews ignited by retelling an event and supported by art images, were performed between Oct. 2018 and Feb. 2020. In total 17 obstetricians and gynecologists participated. An inductive thematic narrative analysis was used for interpreting the data. Eight themes were constructed: (a) feeling lonely, (b) awareness of time, (c) sense of responsibility, (d) keeping calm, (e) work experience, (f) attending midwife, (g) mind-set and setting, and (h) hedging. Three decision-making perspectives were constructed: (I) individual-centered strategy, (II) dialogue-distributed process, and (III) chaotic flow-orientation. This study shows how various psychological and organizational conditions synergize with physicians during decision-making. It also indicates how physicians gave decision-making meaning through individual motivations and rationales, expressed as a perspective. Finally, the study also suggests that decision-making evolves with experience, and over time. The findings have significance for teamwork, team training, patient safety and for education of trainees.
Chapter
The concept of evidence-based policymaking reflects the belief that rigorous and scientific evidence is an essential tool to help bring sustainable information to decision-making process especially in the interest of social actors. The aim is to use unbiased reasoning to guide social interventions and spend public funds more effectively. In an era where the truth seems dispensable to some politicians, evidence-informed policymaking champions the importance of getting the facts right. The final aim of evidence-informed policymaking could be that of helping to strengthen the cooperative attitude in working relationship between the scientific and political world to enhance, boost, and assess the direct and efficient application of research findings to the society throughout political decision-making processes.
Chapter
There is a growing popularity of data-driven best practices in various fields—such as climate change, biodiversity, and pollution; ensuring nutritious, healthy, and sustainable food; and societal transformations due to the rise of artificial intelligence and other next-generation digital technologies. A best practice is a technique or methodology that is generally accepted as superior to any alternative because, through experience and research, it has proven to be reliable leading to a desired result through evidence-based approach. Despite the fact that data and research are very important, it is simplistic to conclude that they are sufficient to improve practice. Expertise and intuition, as claimed in the previous chapter, are fundamental to reach the best solution to the problem. As a matter of fact, evidence-informed for policymaking has been borrowed from medicine and we hope it can still be borrowed from medicine, but with a more mature awareness to move faster from theory to practice on the ground.
Article
Full-text available
Anaesthesiology has witnessed a growing acknowledgement of the fact that stress can have a negative impact on individual cognitive function and effective team performance. Cognitive aids such as checklists have come to be viewed as promising tools in the management of critical events. While checklists have been an integral part of the safety strategy in aviation for many decades, there has been little progress in establishing related concepts in anaesthesiology. Reasons for this reluctance are the lack of usability of the cognitive aids developed and the fact that these cognitive artefacts do not support established treatment processes. The main reason, however, are the different system properties of technical devices and biological systems. While it is possible to define the one best way of solving a technical problem and translate this knowledge into a linear checklist, the behaviour of biological systems is dynamic and adaptive, which makes it impossible to predict with certainty the cause of a pathophysiological disturbance and define the single best way to solve a problem. Rather than being restricted to a linear checklist, cognitive aids can improve emergency management by helping experienced teams to remember and excel. The German Cognitive Aid Working Group of the Professional Association of German Anaesthesiologists (BDA) and the German Society of Anaesthesiology and Intensive Care Medicine (DGAI) has developed a digital cognitive aid for intraoperative emergencies in an iterative user-centred design process. The future challenge will be to understand the physical, cognitive and social aspects of implementing the cognitive aid into established processes of crisis management in anaesthesia.
Book
Full-text available
Our fascination with new technologies is based on the assumption that more powerful automation will overcome human limitations and make our systems 'faster, better, cheaper,' resulting in simple, easy tasks for people. But how do new technology and more powerful automation change our work? Research in Cognitive Systems Engineering (CSE) looks at the intersection of people, technology, and work. What it has found is not stories of simplification through more automation, but stories of complexity and adaptation. When work changed through new technology, practitioners had to cope with new complexities and tighter constraints. They adapted their strategies and the artifacts to work around difficulties and accomplish their goals as responsible agents. The surprise was that new powers had transformed work, creating new roles, new decisions, and new vulnerabilities. Ironically, more autonomous machines have created the requirement for more sophisticated forms of coordination across people, and across people and machines, to adapt to new demands and pressures. This book synthesizes these emergent patterns through stories about coordination and mis-coordination, resilience and brittleness, affordance and clumsiness in a variety of settings, from a hospital intensive care unit, to a nuclear power control room, to a space shuttle control center. The stories reveal how new demands make work difficult, how people at work adapt but get trapped by complexity, and how people at a distance from work oversimplify their perceptions of the complexities, squeezing practitioners. The authors explore how CSE observes at the intersection of people, technology, and work, how CSE abstracts patterns behind the surface details and wide variations, and how CSE discovers promising new directions to help people cope with complexities. The stories of CSE show that one key to well-adapted work is the ability to be prepared to be surprised. Are you ready?
Article
Full-text available
We offer a theory of action-oriented problem solving that links interpretation and choice, processes usually separated in the sensemaking literature and decision-making literature. Through an iterative, simulation-based process we developed a formal model. Three insights emerged: (1) action-oriented problem solving includes acting, interpreting, and cultivating diagnoses; (2) feedback among these processes opens and closes windows of adaptive problem solving; and (3) reinforcing feedback and confirmation bias, usually considered dysfunctional, are helpful for adaptive problem solving.
Article
Unlike behavioral skills training, cognitive skills training attempts to impart concepts that typically depend on tacit knowledge. Subject-matter experts (SMEs) often deliver cognitive training, but SMEs are expensive and in short supply, causing a training bottleneck. Recently, Hintze developed the ShadowBox method to overcome this limitation. As part of the Defense Advanced Research Projects Agency’s Social Strategic Interaction Modules, Klein, Hintze, and Saab adapted the ShadowBox approach to train large numbers of trainees without relying on expert facilitators. As part of this program, we used the ShadowBox approach to train warfighters on the social cognitive skills needed to successfully manage civilian encounters without creating hostility or resentment. ShadowBox training was evaluated in two studies. Evaluation 1 provided 3 hr of nonfacilitated, paper-based training to Marines at Camp Pendleton and Camp Lejeune (N = 59), and improved performance (i.e., match to the SME rankings) by 28% compared to a control group. Evaluation 2 provided 1 hr of nonfacilitated training, administered via Android tablet, to soldiers at Fort Benning (N = 30) and improved performance by 21%. These results, both statistically significant, suggest ways to use scenario-based training to develop cognitive skills in the military.
Book
Speed in acquiring the knowledge and skills to perform tasks is crucial. Yet it still ordinarily takes many years to achieve high proficiency in countless jobs and professions, in government, business, industry, and throughout the private sector. There would be great advantages if regimens of training could be established that could accelerate the achievement of high levels of proficiency. This book discusses the construct of 'accelerated learning.' It includes a review of the research literature on learning acquisition and retention, with a focus on establishing what works, and why. This includes several demonstrations of accelerated learning, with specific ideas, plans, and roadmaps for doing so. The impetus for the book was a tasking from the Defense Science and Technology Advisory Group, which is the top-level science and technology policy-making panel in the Department of Defense. However, the book uses both military and non-military exemplar case studies.
Article
For years, physicians assumed that once a blind person passed a critical age in early childhood without regaining vision, their brain would never be able to make sense of the visual world. A project called Prakash has demolished that assumption. Since 2004, project eye surgeons have removed congenital cataracts from hundreds of blind children, teenagers and young adults in India, restoring their sight. The surprising capacity of Prakash patients to regain substantial vision is rewriting our understanding of visual neuroscience. While probing how the newly sighted process visual cues, project scientists are peeling away layers of mystery about which aspects of sight come preprogrammed and which are shaped by experience.