
Abstract

Five different scientific communities are challenging the abilities of experts and even the very concept of expertise: the decision research community, the sociology community, the heuristics and biases community, the evidence-based practices community, and the computer science community (including the fields of artificial intelligence, automation, and big data). Although each of these communities has made important contributions, the challenges they pose are misguided. This essay describes the problems with each challenge and encourages researchers in the five communities to explore ways of moving forward to improve the capabilities of experts.
NOVEMBER/DECEMBER 2017 1541-1672/17/$33.00 © 2017 IEEE 67
Published by the IEEE Computer Society
Editor: Robert R. Hoffman, Institute for Human and Machine Cognition,
Why Expertise Matters: A Response to the Challenges
Gary Klein, MacroCognition LLC
Ben Shneiderman, University of Maryland
Robert R. Hoffman and Kenneth M. Ford, Institute for Human and Machine Cognition
We dedicate this article to our colleague Robert Wears, who tragically died in July 2017 just before we started to work on this article.
Overwhelming scientific evidence demonstrates that experts' judgments can be highly accurate and reliable. As defined in the scientific literature,1 experts

• employ more effective strategies than others, and do so with less effort;
• perceive meaning in patterns that others do not detect;
• form rich mental models of situations to support sensemaking and anticipatory thinking;
• have extensive and highly organized domain knowledge; and
• are intrinsically motivated to work on hard problems that stretch their capabilities.
Our society depends on experts for mission-critical, complex technical guidance for high-stakes decision making because they can make decisions despite incomplete, incorrect, and contradictory information when established routines no longer apply.2 Experts are the people the team turns to when faced with difficult tasks.
Despite this empirical base, we witness a number of challenges to the concept of expertise. Tom Nichols' The Death of Expertise presents a strong defense of expertise,3 a defense to which we are adding in this article. We address the attempts made by five communities to diminish the credibility and value of experts (see Figure 1). These challenges come from

• decision researchers who show that simple linear models can outperform expert judgment;
• heuristics and biases researchers who have claimed that experts are as biased as anyone else;
• sociologists who see expertise as just a social attribution;
• practice-oriented researchers seeking to replace professional judgments with data-based prescriptions and checklists; and
• technophiles who believe that it is only a matter of time before artificial intelligence (AI) surpasses human expertise.

Each of these communities has questioned the value of expertise, using different arguments, perspectives, and paradigms.
Society needs experts, even though they are fallible. Although we are expert-advocates, eager to highlight the strengths of experts, we acknowledge that experts are not perfect and never will be. Our purpose is to correct the misleading claims and impressions being spread by the expertise-deniers. Then we hope to engage productively with each of these communities to improve human performance through better training, workflows, and technology.
We begin with the challenge from the decision
research community because this body of work
can be traced back the furthest, to the mid-1960s,
and echoes to this day.
The Challenge from the Decision Research Community

Research by some experimental
psychologists shows that in judg-
ment tasks, simple linear mod-
els will be more consistent in their
performance than human judges.
Examples are faculty ratings of
graduate students versus a model
based on grades and test scores, or
physicians’ ratings of cancer biopsy
results versus a model based on sur-
vival statistics:4,5
There are some, but only a few, truly remarkable judges, whereas there are many so-called experts who are no better than complete novices … the picture of the expert painted in broad brush strokes by this research is relatively unflattering … whenever possible, human judges should be replaced by linear models.6
A few aspects of this literature are worth noting:

• The linear models are derived in the first place from the advice of experts about what the key variables are—the variables that experts themselves use in making their judgments.
• The decision research tends to reduce expertise to single measures, such as judgment hit rate, ignoring more qualitative contributions to performance.
• For many of the studies, it is not obvious that the particular judgment task that is presented to the participants is actually the same task that the experts routinely perform, and therefore might not be the task at which they are proficient.
• For many of the studies, there is scant evidence that the participants who are called experts actually qualify for that designation, apart from their having had a certain number of years of experience.
• Advocates of this view go beyond their empirical base by generalizing from studies using college students as judges to argue for the fallibility of all judges.
Although linear models are consistent, when they fail, they fail miserably. One problem involves “broken leg cues.” A linear model might do a decent job of predicting whether a given person is likely to go to the movies this weekend, but will fail because it is blind to the fact that the person in question just broke a leg.7 Experts will perform better than the model if they have information to which the model is blind.
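The broken-leg problem can be sketched in a few lines of code. The cues, weights, and values below are invented for illustration; they are not drawn from the studies cited:

```python
# Hypothetical sketch of a "broken leg cue": a linear model predicts
# movie-going from the fixed cues it was built on, and cannot react
# to a decisive fact outside its feature set.

def linear_model(features, weights):
    """Judgment as a weighted sum of a fixed set of cues."""
    return sum(w * x for w, x in zip(weights, features))

weights = [0.8, 0.2]   # assumed importance of past attendance, spare budget
person = [0.9, 0.7]    # a frequent movie-goer with money to spend

prediction = linear_model(person, weights)   # 0.8*0.9 + 0.2*0.7 = 0.86

# The injury is not among the model's cues, so its prediction is unchanged;
# a human judge who knows about it overrides the score immediately.
has_broken_leg = True
human_judgment = 0.0 if has_broken_leg else prediction
print(prediction, human_judgment)
```

The model stays consistent precisely because it is blind to everything outside its cue set; that is both its strength and its failure mode.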
Who are the experts? In some
studies, the linear models were com-
pared to college students, sometimes
called “judges.” And even in studies
in which the judges were profession-
als, perhaps the linear models should
be compared to the best of the profes-
sionals rather than to the average.
Even when the linear models outperformed the experts, it is a mistake to infer that the linear models got it right and the experts failed miserably. The linear models had their greatest edge in domains6 and prediction tasks involving human activity (that is, clinical psychologists, psychiatrists, counselors, admissions officers, parole officers, bank loan officers, and so on). However, the linear models weren't very accurate—it was just that the experts were even worse. In one often-cited study of cancer diagnosis, the linear model performed better than oncologists at predicting patient longevity, but a closer look shows that the model only accounted for 18 percent of the variance in the judgment data.9 The clearest conclusion from this and other studies on linear modeling is that some things are of intrinsically low predictability.
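As a back-of-envelope check (ours, not from the study), explaining 18 percent of the variance leaves predictions nearly as spread out as having no model at all:

```python
# If a model explains 18% of outcome variance, the standard deviation of
# its errors is sqrt(1 - 0.18), about 91% of the outcomes' original spread.
r_squared = 0.18
residual_sd_fraction = (1 - r_squared) ** 0.5
print(f"{residual_sd_fraction:.0%}")  # prints 91%
```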
Next is the challenge from the heuristics and biases (HB) community, which can be traced to the early 1970s.

The Challenge from the Heuristics and Biases Community
Led by Daniel Kahneman and Amos Tversky,10 the HB community has called into question assumptions about rationality by demonstrating that people fall prey to a wide variety of biases in judgment and probabilistic reckoning, and that even experts sometimes show these biases. This finding helped create a mindset that experts are not to be trusted. The proliferation of HB research in academic psychology departments has strengthened the impression that expert judgments are not accurate.
The HB paradigm typically uses participants who are not experts (college students) and gives them artificial laboratory tasks that require little training and have little or no ecological validity. The tasks, conveniently enough, can be performed in a college class period.

Figure 1. The five communities that have challenged the concept of expertise.

Bias effects found using this “paradigm of convenience”
diminish or disappear when researchers add context11 or when researchers have genuine experts engage in their familiar environments rather than work on artificial puzzles and probability-juggling tasks. Variations in the materials, instructions, procedures, or experimental design can cause bias effects to diminish or disappear.12
Although some studies have shown that bias can occur in expert reasoning,13 several studies show that bias effects are much smaller than those of the college students.14 There is mixed evidence for the claim that experts tend to be overconfident, and what evidence there is stems from narrow methods for measuring confidence. Experts such as weather forecasters and firefighters are careful to keep their judgments within their core specialty and to use experience and accurate feedback to attain reasonable levels of confidence in their judgments. Weather forecasters search for evidence that confirms a hypothesis; it would be irrational not to. On the other hand, weather forecasters deliberately and deliberatively look for evidence that their hypotheses might be wrong.
The HB researchers' antipathy toward experts opened the way for the emergence of the naturalistic decision-making movement,15 which regards heuristics as strengths acquired through experience, rather than weaknesses. Tversky and Kahneman were careful to state that, “In general these heuristics are quite useful, but sometimes they lead to severe and systematic errors.”16 However, the HB field usually ignores this caveat and emphasizes the downside of heuristics.
We now come to the challenge
from sociology, which began in the
1970s and emerged most forcefully in
the 1980s.
The Challenge from Sociology
Sociological analysis of the conse-
quences of occupational special-
ization has considered the value of
professions to society.17 Given its
close association to the concept of
professions, the concept of expertise
was also assessed from the sociologi-
cal perspective, referred to as “science
and technology studies.” Ethnogra-
phers, sociologists, and philosophers
of science researched expertise in do-
mains including astronomy, physics,
and endocrinology.18–20 Their reso-
nant paradigms have been referred
to as “situated cognition,” “distrib-
uted cognition,” and the “sociology
of scientific knowledge.”21–24 Some individuals have defined their paradigm, in part, as a reaction to cognitive psychology:
If one relegates all of cognition to internal mental processes, then one is required to pack all the explanatory machinery of cognition into the individual mind as well, leading to misidentification of the boundaries of the cognitive system, and the over-attribution to the individual mind alone all of the processes that give rise to intelligent behavior.25
Proponents of the situated cognition approach offer many good examples of why one should define “the cognitive system” as persons acting in coordination with a social group to conduct activities using tools and practices that have evolved within a culture.25 The core claim is that expertise and cognition reside in the interaction among the individual and the team, community, or organization. The strongest view is that expertise is a social attribution or role, a matter of prestige and authority. A moderate view is that individual cognition is an enabling condition for expertise, which just happens to be a condition that is not of particular interest in a sociological analysis.
One of the valuable aspects of this
perspective is to sensitize us to the
importance of external resources and
community relationships for the ac-
quisition, expression, and valuation
of expertise. Thus, we respect these
researchers and their contributions.
The importance of context has been
recognized in cognitive psychology
for decades,26 and in computer sci-
ence as well.27 We agree completely
that resources for cognition are in
the world. We agree that teamwork
and organizational issues are an im-
portant part of naturalistic decision
making. Indeed, the notion of “mac-
rocognition”28 refers to coordinating
and maintaining common ground as
primary functions. However, cognitive scientists are disappointed by any approach that takes the strong stance that expertise is merely a social attribution, a stance that discounts the importance and value of individual cognition, knowledge, and skill.
There is overwhelming empirical evidence that individual knowledge is crucial in expert reasoning and problem solving. If you plug experts and nonexperts into the same work settings, you will find huge differences in the quality of the outputs of the groups/teams. The claims derived from
a dismissive reaction to cognitive-
individualist views move the pendu-
lum too far. Sociologists including
Harry Collins29 and Harald Mieg30
have taken the balanced view, which
we advocate, describing the impor-
tance of individual expertise along
with social and contextual factors
that can be essential for developing
and maintaining expertise. We re-
main hopeful that over time, this bal-
anced view will predominate.
We now come to challenges that
have emerged most recently.
The Challenge from the Evidence-Based Practices Community
The evidence-based practices community argues that professionals need to find the best scientific evidence, derive prescriptive procedures for decisions, and adhere to these procedures rather than rely on their own judgments.31 This approach has been advocated in healthcare, where it is referred to as evidence-based medicine. This community argues against trusting experts because they rely on anecdotal practices and out-of-date and ineffective remedies. This takeaway message seeks to replace reliance on experts with faith in defined scientific studies. Clearly, empirical evaluation studies have great value, but we do not believe that such studies deserve uncritical acceptance. Witness how the evidence seems to change every two years. Witness also the difficulty of sorting out the evidence base for a patient with multiple medical problems. Clinicians must consider the individual patient, who may differ from the criteria on which the evidence is based. We seek a balance between scientific evidence and broad experience.32
One way that evidence is compiled is through checklists. These are valuable safety tools to prevent decision makers from omitting important steps in a process, but they are not decision-support tools. We believe that reducing complex judgments to simple checklists often misses essential aspects of decision making.33 Checklists work for stable, well-defined tasks, and have to be carefully crafted with a manageable number of steps. If the checklist is sequential, each step must lead to a clear outcome that serves as the trigger for the next step. However, in complex and ambiguous situations, the antecedent conditions for each step are likely to be murky; expert decision makers must determine when to initiate the next step or whether to initiate it at all. Although checklists can be helpful, it is risky to have individuals use checklists for complex tasks that depend on considerable tacit knowledge to judge when a step is appropriate, how to modify a step, and how to decide whether the checklist is working.34 Experts must decide what to do when the best practices conflict with their own judgments. They must revise plans that do not seem to be working. It is one thing to hold physicians to task for relying on ineffective remedies and ignoring scientific evidence that procedures they were once taught have since been shown ineffective, but it is another thing to compel physicians to rely on scientific evidence by proceduralizing clinical judgment in a checklist and penalizing them for not following the steps.
Guidelines, rules, and checklists raise the floor by preventing silly errors—mistakes that even a first-year medical student might recognize as an error. But they also lower the ceiling, making it easy to shift to an unthinking, uncritical mode that misses subtle warning signs and does not serve the needs of patients.
Finally, we come to the challenge
from within computer science itself.
The Challenge from
Computer Science
This challenge has been presented
on three fronts: AI, big data, and
automation. It is claimed that these
technologies are smarter and more re-
liable than any human. Since experts
are the gold standard of performance,
demonstrations of smart technology
win big when they beat out an expert.
AI successes have been widely
publicized. IBM’s Deep Blue beat
Garry Kasparov, the reigning chess
champion at the time. IBM’s Watson
beat a panel of experts at the game of
Jeopardy. AlphaGo trounced one of
the most highly regarded Go masters.
These achievements have been inter-
preted as showing that AI can out-
perform humans at any cognitively
challenging task. But the successes involve games that are well-structured, with unambiguous referents and definitive correct answers. In contrast, most decision makers face wicked problems with unclear goals in ambiguous and dynamic situations.
Roger Schank, an AI pioneer, stated flatly that “Watson is a fraud.”35 He objected to IBM's claims that Watson could outthink humans and find insights within large datasets. Although Watson excels at keyword searches, it does not consider the context of the passages it is searching, and as a result is insensitive to underlying messages in the material. Schank's position is that counting words is not the same as inferring insightful conclusions. Our experience is that AI developers have much greater appreciation for human expertise than the AI popularizers.
A good example of the challenge
to expertise comes from the weather
forecasting domain. Articles with
titles such as “All Hail the Com-
puter!”36 promulgate the myth that
if more memory and faster process-
ing speeds could be thrown at the
problem, the need for humans would
evaporate. Starting in the late 1980s, as more computer models were introduced into operational forecasting, prognostications were made that computer models would outperform humans within the next 10 years—for example, “[The] human's advantage over the computer may eventually be swamped by the vastly increased number crunching ability of the computer ... as the computer driven models will simply get bigger and better.”37 Articles in the scientific literature as well as the popular press continue to present the stance of human versus machine, asking whether “machines are taking over.”36 This stance conveys a counterproductive attitude of competition in which the experts cannot beat the computers.

A more productive approach would be to design technologies that enhance human performance. The evidence clearly shows that the expert weather forecaster adds value to the outputs of the computer models. Furthermore, “numerical prediction models do not produce a weather forecast. They produce a form of guidance that can help a human being decide upon a forecast of the weather.”38,39
Next, we turn to the denigration
of expertise that has been expressed
by advocates of big data analytics.
Despite their widely publicized successes, a closer look often tells a different story. For instance, Google's FluTrends project initially seemed successful at predicting flu outbreaks, but over time it misled public health planners.40 Advocates of big data claim that the algorithms can detect trends, spot problems, and generate inferences and insights; no human, no matter how expert, could possibly sift through all of the available sensor data; and no human can hope to interpret even a fraction of these data sources. These statements are all true. But the big data community wants to reduce our trust in domain experts so decision makers become comfortable using automated big data analyses. Here is a typical and dangerous claim: “The big target here isn't advertising, though. It's science … faced with massive data, this approach to science—hypothesize, model, test—is becoming obsolete … Petabytes allow us to say: Correlation is enough. We can stop looking for models.”41
A balanced view recognizes that big data analytics can identify patterns where none exist. Big data algorithms can follow historical trends but might miss departures from these trends, as in the broken leg cues, cues that have implications that are clear to experts but aren't part of the algorithms. Further, experts can use expectancies to spot missing events that may be highly significant. In contrast, big data approaches, which crunch the signals received from a variety of sources, are unaware of the absence of data and events.
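The first point, that analytics can find patterns where none exist, is easy to demonstrate with a small simulation (ours, not from the sources cited): screen enough pairs of pure-noise series and some pair will correlate strongly.

```python
import random

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# 200 short series of pure noise give 19,900 pairs to screen for "patterns."
series = [[random.gauss(0, 1) for _ in range(20)] for _ in range(200)]
best = max(abs(corr(series[i], series[j]))
           for i in range(200) for j in range(i + 1, 200))
print(f"strongest correlation found in pure noise: {best:.2f}")
```

A trend-hunting algorithm would report the winning pair as a discovery; an expert who knows the data are noise would not.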
Finally, we consider the challenge
offered by proponents of automation.
Some researchers in the automation
community have promulgated the
myth that more automation can ob-
viate the need for humans, including
experts. The enthusiasm for technol-
ogies is often extreme.42 Too many
technologists believe that automa-
tion can compensate for human lim-
itations and substitute for humans.
They also believe the myth that tasks
can be cleanly allocated to either the
human or the machine. These mis-
leading beliefs have been questioned
by cognitive systems engineers for
more than 35 years, yet the debunk-
ing has to be periodically refreshed
in the minds of researchers and pro-
gram managers.43 The misleading be-
liefs persist because of the promissory
note that more automation means
fewer people, fewer people means
fewer errors, and (especially) fewer
people means reduced costs.44
Nearly every funding program that
calls for more automation is premised
with the claim that the introduction
of automation will entail a need for
fewer expert operators at potentially
lower cost to the organization. But
the facts are in plain view: The intro-
duction often requires more experts.
Case studies45 show that automation
creates new kinds of cognitive work
for the operator, often at the wrong
times. Automation often requires
people to do more, to do it faster, or
to do it in more complex ways. The
explosion of features, options, and
modes often creates new demands,
new types of errors, and new paths
toward failure. Ironically, as these facts become apparent, decision makers seek additional automation to compensate for the problems triggered by the automation.44
We see technology—AI, big data,
and automation—continuing to im-
prove, which will make computers
ever more valuable tools. But in the spirit of human-centered computing, we define intelligent systems as human-machine systems that amplify and extend human abilities.46
The technology in such work systems
is designed to improve human per-
formance and accelerate the achieve-
ment of expertise. We hope that
expertise-deniers can get past the
mindset of trying to build systems to
replace the experts and instead seek
to build useful technologies that em-
power experts.
If the challenges to expertise hold
sway, the result might be degrada-
tion of the decision making and resil-
ience of organizations and agencies.
Once such organizations accept the
expertise-deniers’ arguments, they
may sideline domain experts in favor
of statistical analysts and ever more
automation. They are likely to di-
vert funding from training programs
that might produce experts into tech-
nology that makes decisions without
human intervention or responsible
action. Shifting cognitive work over
to automation may deskill workers,
erode the expertise that is crucial for
adaptability, and lead to a downward
spiral of diminishing expertise.
Experts are certainly not perfect,
so the challenges can be useful for
increasing our understanding of the
boundary conditions of expertise. We
do not want to return to an era where
medicine was governed by anecdote
rather than data—we think it essen-
tial to draw from evidence and from
expertise. We appreciate the dis-
coveries of the heuristics and biases
researchers—the heuristics they have
uncovered can have great value for fos-
tering speculative thinking. We respect
the judgment and decision research
community—we want to take advan-
tage of their efforts to improve the way
we handle evidence and deploy our
intuitions. We want to productively
move forward with improved informa-
tion technology—we want these tools
to be designed to help us gain and en-
hance expertise. We value the social
aspects of work settings—we want
to design work settings and team ar-
rangements that magnify expertise.
Our hope is to encourage a balance
that respects expertise while designing
new ways to strengthen it.
We regard the design of cognitive
work systems as the design of human-
machine interdependencies, guided by
the desire to make the machines com-
prehensible, predictable, and control-
lable. This course of action seems best
suited to promote human welfare and
enable greater achievements.47,48
We thank Bonnie Dorr, Hal Daume, Jonathan Lazar, Jim Hendler, Mark Smith, and Jenny Preece for their comments on a draft of this article; and Jan Maarten Schraagen and Paul Ward for their helpful comments and suggestions and for their patience and encouragement. This essay was adapted from a longer and more in-depth account, “The War on Experts,” that will appear in the Oxford Handbook of Expertise.49
References

1. K.A. Ericsson et al., Cambridge Handbook of Expertise and Expert Performance, 2nd ed., Cambridge Univ. Press, 2017.
2. B. Shneiderman and G. Klein, “Tools That Aid Expert Decision Making: Supporting Frontier Thinking, Social Engagement and Responsibility,” blog, Psychology Today, Mar. 2017; www /blog /seeing-what
3. T. Nichols, The Death of Expertise,
Oxford Univ. Press, 2017.
4. R. Dawes, “The Robust Beauty of
Improper Linear Models,” American
Psychologist, vol. 34, no. 7, 1979,
pp. 571–582.
5. P.E. Meehl, “Seer Over Sign: The First
Good Example,” J. Experimental
Research in Personality, vol. 1, no. 1,
1965, pp. 27–32.
6. R. Hastie and R. Dawes, Rational
Choice in an Uncertain World, Sage
Publications, 2001.
7. K. Salzinger, “Clinical, Statistical, and
Broken-Leg Predictions,” Behavior and
Philosophy, vol. 33, 2005, pp. 91–99.
8. R. Johnston, Analytic Culture in the U.S. Intelligence Community: An Ethnographic Study, Center for the Study of Intelligence, Washington, DC, 2005.
9. H.J. Einhorn and R.M. Hogarth, “Judging Probable Cause,” Psychological Bull., vol. 99, no. 1, 1986, pp. 3–19.
10. D. Kahneman and A. Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica, vol. 47, no. 2, 1979, pp. 263–291.
11. D.W. Cheng et al., “Pragmatic Versus
Syntactic Approaches to Training De-
ductive Reasoning,” Cognitive Psychol-
ogy, vol. 18, no. 3, 1986, pp. 293–328.
12. R. Hertwig and G. Gigerenzer, “The ‘Conjunction Fallacy’ Revisited: How Intelligent Inferences Look like Reasoning Errors,” J. Behavioral Decision Making, vol. 12, no. 2, 1999, pp. 275–305.
13. B. Fischhoff, “Eliciting Knowledge
for Analytical Representation,” IEEE
Trans. Systems, M an, and Cybernetics,
vol. 19, no. 3, 1989, pp. 448– 461.
14. M.D. Shields, I. Solomon, and W.S.
Waller, “Effects of Alternative Sample
Space Representations on the Accuracy
of Auditors’ Uncertainty Judgments,”
Accounting, Organizations, and Soci-
ety, vol. 12, no. 4, 1987, pp. 375–385.
15. G. Klein, R. Calderwood, and A. Clinton-Cirocco, “Rapid Decision Making on the Fire Ground,” Proc. Human Factors and Ergonomics Soc. Ann. Meeting, vol. 30, no. 6, 1986, pp. 576–580.
16. A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science, vol. 185, Sept. 1974, pp. 1124–1131.
17. J. Evetts, “Professionalism: Value and Ideology,” Current Sociology, vol. 61, no. 5–6, 2013, pp. 778–779.
18. H.M. Collins, Changing Order: Replication and Induction in Scientific Practice, 2nd ed., Univ. of Chicago Press, 1992.
19. B. Latour and S. Woolgar, Laboratory Life: The Social Construction of Scientific Facts, Sage Publications, 1979.
20. M. Lynch, Scientific Practice and Ordinary Action, Cambridge Univ. Press,
21. K.D. Knorr-Cetina, The Manufacture
of Knowledge, Pergamon Press, 1981.
22. J. Lave, “Situating Learning in Com-
munities of Practice,” Perspectives on
Socially Shared Cognition, L.B. Resnick,
J.M. Levine, and S.D. Teasley, eds.,
American Psychological Assoc., 1993,
pp. 63–82.
23. L. Suchman, Plans and Situated Ac-
tions: The Problem of Human-Machine
Communication, Cambridge Univ.
Press, 1987.
24. E. Wenger, Communities of Practice: Learning, Meaning, and Identity, Cambridge Univ. Press, 1998.
25. M.S. Weldon, “Remembering as a Social Process,” The Psychology of Learning and Motivation, vol. 40, no. 1, 2000, pp. 67–120.
26. G.A. Miller, “Dismembering Cognition,” One Hundred Years of Psychological Research in America, Johns Hopkins Univ. Press, 1986, pp. 277–298.
27. N.M. Agnew, K.M. Ford, and P.J. Hayes, “Expertise in Context: Personally Constructed, Socially Selected and Reality-Relevant?” Int'l J. Expert Systems, vol. 7, no. 1, 1994.
28. G. Klein et al., “Macrocognition,”
IEEE Intelligent Systems, vol. 18, no. 3,
2003, pp. 81–85.
29. H.M. Collins, “A Sociological/Philosophical Perspective on Expertise: The Acquisition of Expertise Through Socialization,” Cambridge Handbook of Expertise and Expert Performance, 2nd ed., K.A. Ericsson et al., eds., Cambridge Univ. Press, 2017.
30. H.A. Mieg, “Social and Sociological
Factors in the Development of Exper-
tise,” Cambridge Handbook of Ex-
pertise and Expert Performance, K.A.
Ericsson et al., Cambridge Univ. Press,
2006, pp. 743–760.
31. A.R. Roberts and K.R. Yeager, eds., Evidence-Based Practice Manual: Research and Outcome Measures in Health and Human Services, Oxford Univ. Press, 2004.
32. E. Barends, D.M. Rousseau, and R.B. Briner, Evidence-Based Management: The Basic Principles, Center for Evidence-Based Management, Amsterdam, 2014.
33. R.L. Wears and G. Klein, “The Rush
from Judgment,” Annals of Emergency
Medicine, forthcoming, 2017.
34. D.E. Klein et al., “Can We Trust Best
Practices? Six Cognitive Challenges of
Evidence-Based Approaches,” J. Cogni-
tive Eng. and Decision Making, vol. 10,
no. 3, 2016, pp. 244 –254.
35. R. Schank, “The Fraudulent Claims
Made by IBM about Watson and AI,”
36. R.A. Kerr, “Weather Forecasts Slowly
Clearing Up,” Science, vol. 38, no. 388,
2012, pp. 734–737.
37. P.S. Targett, “Predicting the Future
of the Meteorologist: A Forecaster’s
View,” Bull. Australian Meteorological
and Oceanographic Soc., vol. 7, no. 1,
1994, pp. 46 –52.
38. H.E. Brooks, C.A. Doswell, and R.A. Maddox, “On the Use of Mesoscale and Cloud-Scale Models in Operational Forecasting,” Weather and Forecasting, vol. 7, Mar. 1992, pp. 120–132.
39. R.R. Hoffman et al., Minding the
Weather: How Expert Forecasters
Think, MIT Press, 2017.
40. D. Lazer et al., “The Parable of Google
Flu: Traps in the Big Data Analysis,”
Science, vol. 343, 14 Mar. 2014,
pp. 1203–1205.
41. C. Anderson, “The End of Theory: The Big Data Deluge Makes the Scientific Method Obsolete,” Wired, 23 June 2008; w
42. E. Brynjolfsson and A. McAfee, The Sec-
ond Machine Age: Work, Progress, and
Prosperity in a Time of Brilliant Tech-
nologies, W.W. Nor ton & Co., 2014.
43. J.M. Bradshaw et al., “The Seven
Deadly Myths of ‘Autonomous Sys-
tems’,” IEEE Intelligent Systems,
vol. 28, no. 3, 2013, pp. 54– 61.
44. “Technical Assessment: Autonomy,” report from the Office of the Assistant Secretary of Defense for Research and Engineering, Office of Technical Intelligence, US Department of Defense, 2015.
45. R.R. Hoffman, T.M. Cullen, and J.K. Hawley, “Rhetoric and Reality of Autonomous Weapons: Getting a Grip on the Myths and Costs of Automation,” Bull. Atomic Scientists, vol. 72, no. 4, 2016; doi:10.1080/00963402.2016.1194619.
46. J.M. Johnson et al., “Beyond Coop-
erative Robotics: The Central Role of
Interdependence in Coactive Design,”
IEEE Intelligent Systems, vol. 26, no. 3,
2011, pp. 81–88.
47. M. Johnson et al., “Seven Cardinal Vir-
tues of Human-Machine Teamwork,”
IEEE Intelligent Systems, vol. 29, no. 6,
2014, pp. 74–79.
48. G. Klein et al., “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity,” IEEE Intelligent Systems, vol. 19, no. 6, 2004, pp. 91–95.
49. P. Ward et al., eds., Oxford Handbook of Expertise, Oxford Univ. Press, forthcoming.

Gary Klein is senior scientist at MacroCognition LLC. His research interests include naturalistic decision making. Klein received his PhD in experimental psychology from the University of Pittsburgh. Contact him at
Ben Shneiderman is distinguished university professor in the Department of Computer Science at the University of Maryland. His research interests include human-computer interaction, user experience design, and information visualization. Shneiderman has a PhD in computer science from SUNY-Stony Brook. Contact him at
Robert R. Hoffman is a senior research scientist at the Institute for Human and Machine Cognition. His research interests include macrocognition and complex cognitive systems. Hoffman has a PhD in experimental psychology from the University of Cincinnati. He is a fellow of the Association for Psychological Science and the Human Factors and Ergonomics Society and a senior member of IEEE. Contact him at rhoffman@
Kenneth M. Ford is director of the Institute for Human and Machine Cognition. His research interests include artificial intelligence and human-centered computing. Ford has a PhD in computer science from Tulane University. Contact him at kford@