DEPARTMENT: HUMAN-CENTERED COMPUTING
Explaining Explanation, Part 3: The Causal Landscape
This is the third in a series of essays about explanation. The first article laid out the core theoretical concepts, including aspects of causation and abduction; the second presented empirical research revealing the great variety of purposes and types of causal reasoning, along with a number of different causal explanation patterns. This article takes those reasoning patterns a step further: the author describes a method by which a decision maker can go from a causal explanation to a viable course of action for making positive change in the future, and for supporting decision making more generally.
When we hear that an acquaintance or a celebrity has died, our first question often is: What happened? We expect to get a clear and simple explanation, just one or two words. Cancer. Heart attack. Car accident. Stroke. We can fit the reason into the distinct categories we have learned.
But further inspection often clouds the picture. Perhaps the deceased person had developed cancer after
years of smoking and refused to see a physician until it was too late. So did the person die from cancer,
unhealthy living habits, or obstinacy? Death certificates refer to the immediate cause of death (the final
condition) along with the underlying cause of death (any disease or injury that triggered the downward
spiral).1 For the cancer example, the immediate cause of death might have been pneumonia contracted
at the late stages of cancer, but a physician would probably list cancer, with smoking as an underlying
cause. There probably wouldn’t be any mention of the patient’s delay in getting a medical examination.
Yet we can point to multiple interacting causes, not a simple neat answer.
People want definitive answers, as if life events are a series of operations for which it is usually possible to affix blame and diagnose faults. For example, if a copy machine jams, there’s usually a mechanical reason—a sheet of paper got stuck in the assembly, and once it’s removed, the problem is solved. Mechanical problems such as this are determinate; there’s a cause and it can be identified.
Gary Klein, MacroCognition, LLC
Editors: Robert Hoffman, rhoffman@ihmc.us; Ken Ford, kford@ihmc.us; Matthew Johnson, mjohnson@ihmc.us
Yet most human problems aren’t mechanical. They aren’t determinate. There isn’t a single cause. There are multiple, intersecting causes, and we might never uncover some of the most important ones.2,3 We live in a multi-cause, indeterminate world, and our attempts to understand why events occurred will usually be frustrating. We can’t expect specific, single-cause, one- or two-word answers.
A further complication is how to clearly distinguish trigger causes from enabling causes that are preconditions.4 A trigger cause is immediate and obvious, such as dropping a lighted match onto a stack of newspapers that sets a house on fire. The lighted match is a trigger cause. The presence of oxygen in the house is an enabling cause, a precondition for the fire to be lit. The trigger cause gets the attention, but isn’t always the best cause to address. Firefighters might spray foam on the fire to smother it—deprive it of oxygen. When it comes to human problems, a psychotherapist might listen when a client complains that a recent arbitrary action by her domineering and insensitive spouse made her feel helpless and anxious, but the skilled therapist might spend time on an enabling cause—the client’s inability to assert herself, which has played out with her spouse, her child, and colleagues at work.
Returning to the example of the jammed copy machine, the piece of paper that got stuck is the trigger cause, but a poor design that permits lots of paper jams is an enabling cause. If you’re waiting in line to make a copy, all you care about is the trigger cause; if you’re the representative of the company manufacturing the copy machine, and you are continuously fielding customer complaints, you care about the enabling cause.
Fortunately, there’s a way to cope with complexity: the causal landscape.
THE CAUSAL LANDSCAPE CONCEPT
The concept is to portray a wide array of causes as a causal network, to help people escape from their single-cause, determinate mindset, but then to highlight a smaller number of causes that matter the most and that suggest viable courses of action. These are the causes that: (a) contributed most heavily to the effect (if they hadn’t occurred, neither would the effect), and (b) are the easiest to negate or mitigate. When we want to take steps to prevent an adverse event, the highlighted nodes in a causal network are the places to start exploring.
The causal landscape’s two-step method highlights the few causes worth addressing through: their impact score, which reflects how much each cause influenced the effect; and their reversibility score, which reflects the ease of eliminating that cause. The causes that had the strongest impact and are the easiest to reverse are the ones that offer the greatest potential to prevent future accidents or adverse events.
The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds—both triggering and enabling causes. It portrays the complex range and interconnection of causes and identifies a few of the most important ones. Without reducing some of the complexity, we’d be confused about how to act.
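To make the two-step method concrete, here is a minimal sketch in Python of how the scoring and selection could be represented. It is illustrative only: the CauseNode record, the numeric scales, and the select_leverage_points function are assumptions made for this sketch rather than part of the published method. Impact is scored from 1 (low) to 3 (high), and reversibility uses the four-point scale shown later in Table 1, where 1 means simple to change and 4 means impossible to change.

from dataclasses import dataclass
from typing import List

@dataclass
class CauseNode:
    # One node in the causal network (an assumed record layout, not from the article).
    name: str
    impact: int          # contribution to the effect: 1 = low, 2 = medium, 3 = high
    reversibility: int   # Table 1 scale: 1 = simple to change ... 4 = impossible to change

def select_leverage_points(nodes: List[CauseNode],
                           min_impact: int = 3,
                           max_reversibility: int = 2) -> List[CauseNode]:
    # Keep only the causes that contributed heavily AND are easy to negate or mitigate.
    return [n for n in nodes
            if n.impact >= min_impact and n.reversibility <= max_reversibility]

The thresholds here (impact of 3, reversibility of 2 or better) are a judgment call for the analyst; the point is simply that both dimensions are scored for every node before the few actionable causes are highlighted against the rest of the landscape.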
DETAILED EXAMPLE
Consider the 1994 friendly fire incident in which two US Air Force F-15 fighter jets shot down two US Army Black Hawk helicopters in northern Iraq, killing 26 peacekeepers. That’s right: the military shot down its own aircraft. The incident occurred in broad daylight, with no other aircraft around. The F-15s and the Army helicopters were all being monitored by the same AWACS (Airborne Warning and Control System) airplane, which failed to prevent the incident. Scott Snook wrote a masterful analysis of the event in his 2002 book Friendly Fire, identifying a wide array of causes as shown in Figure 1 (reprinted with permission).5 There are a lot of causes leading to the red outcome at the bottom right—too many.
To create a causal landscape for this incident, I evaluated each node on two dimensions—the impact of the cause, and the ease of eliminating it. I rated each node high, medium, or low for impact (would reversing this cause have prevented the shootdown?), and high, medium, or low for ease of reversal (how much effort would it take to reverse the cause?). The reversibility scale is presented in Table 1.
For the Black Hawk shootdown, only six nodes were rated high on the impact and reversibility scales.
These are enlarged and highlighted in a second version of the diagram, presented in Figure 2. These
highlighted nodes are the leverage points for decision makers to consider when trying to prevent such
accidents in the future.
Figure 1. Snook’s causal network for the Black Hawk shootdown (reproduced with permission
from Friendly Fire: The Accidental Shootdown of US Black Hawks over Northern Iraq;
Princeton University Press, 2002).
There isn’t much we can do about some of the nodes at the top of Snook’s original diagram, such as the “Shrinking Defense Budget,” the “Changing World Order,” or the “Long History of Inter-Service Rivalry.” Other causes, such as the “Few Joint-Training Opportunities for Air Force and Army Pilots,” are difficult to alter. In contrast, the highlighted nodes, such as the “No Helo Reps at Weekly Coord. Meetings,” are easy to remedy—just invite someone from the helicopter community to sit in on these weekly meetings. The node about “Confusion over Responsibility for Helicopter Operations within OPC” is readily resolved by assigning someone in Operation Provide Comfort to track helicopter missions.
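As a rough, standalone illustration, the snippet below scores a handful of the nodes named above using the same assumed scales as the earlier sketch. The node names are paraphrased from Snook’s diagram as quoted in this article, and the numeric ratings are illustrative guesses echoing the informal judgments in the text, not figures from the original analysis.

# Illustrative ratings only: impact runs 1 (low) to 3 (high); reversibility uses
# the Table 1 scale, 1 = simple to change ... 4 = impossible to change.
nodes = [
    ("Shrinking defense budget",                          2, 3),
    ("Long history of inter-service rivalry",             2, 2),
    ("Few joint-training opportunities",                  2, 2),
    ("No helo reps at weekly coordination meetings",      3, 1),
    ("Confusion over responsibility for helo ops in OPC", 3, 1),
]

# Leverage points: high impact AND easy to reverse.
leverage_points = [name for name, impact, reversibility in nodes
                   if impact >= 3 and reversibility <= 2]
print(leverage_points)
# ['No helo reps at weekly coordination meetings',
#  'Confusion over responsibility for helo ops in OPC']

Only the easy, high-impact fixes surface as leverage points; the remaining nodes stay in view as the surrounding landscape.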
This example illustrates what a causal landscape can look like. The ratings that I made (without any pretense of expertise in this area) are provided simply to illustrate how the causal landscape, as a form of causal network, can aid decision makers as they try to redesign work processes. Further attempts to use this method in diverse applications, along with psychometric analysis, would be useful in establishing the method’s validity and utility. Generally, the creation of a causal landscape could help people gain insights about how to navigate the multiple causes of events they care about.
The causal landscape avoids simplistic single-cause explanations, and it also avoids exhaustive catalogs of the entire field of relevant causal factors. The concept of a “landscape” is intended to highlight the most actionable causes and to place them in the context of the wider array of influences; those influences form the landscape in which the actionable causes are shown.
Table 1. The Reversibility Scale.

Score 4: Impossible to change. For the military shootdown incident, these factors would include the fall of the Soviet Union, which contributed to the tragedy but cannot be undone. Similarly, for anxious clients wanting to understand why they are so easily overwhelmed, causes such as childhood neglect and heredity can play a role but can’t be undone.

Score 3: Very difficult to change. For the helicopter shootdown, this includes a shrinking defense budget. For the anxious client, it might include financial problems and chronic pain.

Score 2: Changeable with some effort. These items aren’t low-hanging fruit; they’re a basis for making fundamental and lasting improvements. In the friendly-fire incident, two additional nodes are inter-service rivalry and too few joint-training exercises. Making these changes could prevent or reduce lots of different problems, and the benefits strongly outweigh the costs. Similarly, therapists might help anxious clients learn general strategies such as coping skills.

Score 1: Simple to change. The shootdown could have been prevented if small changes had been made, such as arranging for helicopter representatives to attend the weekly coordination meetings. Simple fixes such as this would have prevented the shootdown but wouldn’t create more general benefits. Similarly, treating anxious clients with anti-anxiety medications is easy to implement but addresses only the immediate symptom.
POTENTIAL APPLICATIONS
The causal landscape format can be useful for accident investigation in domains such as aviation and
healthcare to help decision and policy makers avoid the “blame game” that accompanies reductive
thinking. However, we often want to do more than just diagnose a reason for the accident. We want to
direct the causal landscape forward, to prevent future accidents.
The friendly fire diagram shows a causal network, which is familiar to computer scientists, but the
causal landscape can be used to make the networks actionable. In the friendly fire example, we might
want to attack the deeper-rooted general conditions (for example, the inter-service rivalry, few joint
training exercises, and so on), which have increased the likelihood of not only this particular accident
but also an entire family of others.
Figure 2. Snook’s causal network highlighting the key nodes based on the scalar analysis.
(Reprinted with permission.)
For this to happen, the impact score should address each case as a general problem, and not just map the causal relations for a current accident or outcome. Thus, a military planner could use the friendly fire tragedy to see if there is a way to improve coordination between the Army and Air Force. Similarly, a psychotherapist explained to me how he used the method to help clients gain insights into the conditions and triggers that lead to their anxiety episodes. Causal landscapes can also help teams build common ground by having the team members generate their individual causal landscapes and then compare them.
Finally, the causal landscape has several potential applications in computer science. Artificial intelligence researchers could use it as a protocol for having complex systems explain the basis for their recommendations. In this application, a causal landscape could present information that explains AI systems globally (namely, their mechanisms) and also locally, such as how the AI makes decisions or categorizations for particular cases or instances. In addition, project teams could use the causal landscape as an interview format to enable subject-matter experts to explain why they made decisions; we know that people are not reliable when answering “why” questions, and perhaps the causal landscape might help broaden that kind of inquiry. Also, the causal landscape might be a way to represent users’ mental models. Computer scientists might also want to explore whether there are emergent causes within the causal landscape—new causes formed by the intersection of existing lines of influence. This information could be used to redesign models and analyses.
CONCLUSION
Currently, professionals in a variety of settings strive for single-cause explanations of why some event
happened, which oversimplifies the situation. We know that multiple causes lead to specific events.
Some communities do seek to identify a range of causes. An example would be the use of Root Cause
Analyses in hospitals. One problem with Root Cause Analyses is that hospitals often rely on a standard
set of potential causes, making the exercise sterile after a few iterations. These causes include: training,
human factors issues, fatigue, failure to follow procedures, unavailability of specialists, and so forth. Usually several of these issues are flagged. But flagging contributing causes isn’t the same as diagnosing what went wrong in a specific case, or showing how the different causes relate to each other, or, most important, formulating a cost-effective plan to reduce the chances of similar adverse effects in the future.
Aviation has its own protocols for reviewing adverse incidents, with its own standard set of causal factors, such as the aircrew’s currency, its medical status and recent personal history, the aircraft’s maintenance history, Air Traffic Control issues, environmental conditions, policies, command climate, the flight planning process, the completeness and accuracy of the flight execution, potential hardware or systems problems, and so forth. Sometimes these causal factors are weighted based on how much they contributed to the incident, or they’re at least rated as major or minor. The resulting actions typically involve policy, procedural, and/or training changes.
The causal landscape is certainly consistent with these kinds of approaches. It goes beyond them in a few ways, however. First, it represents the range of contributing causes and shows the relationships among these causes. Second, it systematically assesses the degree to which each cause contributed to the outcome, much the same as the aviation community but with a more nuanced rating scale. Third, it assesses the relative ease of addressing each cause. In these ways, the causal landscape conveys the richness of the causal field while at the same time helping people avoid getting overwhelmed or discouraged by complexity.
ACKNOWLEDGMENTS
I would like to thank Robert Hoffman for his many contributions to my thinking about causal reasoning, and for his careful and patient editing of this paper. I also appreciate suggestions provided by Matthew Johnson. The original work on the causal landscape was supported by the US Air Force under Contract FA8650-04-D-6546, Task Order 13 (“Naturalistic Model of Causal Reasoning”). The preparation of this manuscript was supported by the DARPA Explainable AI Program, Award No. FA8650-17-2-7711.
REFERENCES
1. K. Schulz, “Final Forms: What Death Certificates Can Tell Us and What They Can’t,” The New Yorker, 7 April 2014, pp. 32–37.
2. R.R. Hoffman, S.T. Mueller, and G. Klein, “Explaining Explanation, Part 2: Empirical Foundations,” IEEE Intelligent Systems, vol. 32, no. 4, 2017, pp. 78–86.
3. R.R. Hoffman and G. Klein, “Explaining Explanation, Part 1: Theoretical Foundations,” IEEE Intelligent Systems, vol. 32, no. 3, 2017, pp. 68–73.
4. T. Miller, P. Howe, and L. Sonenberg, “Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences,” Proc. Int’l Joint Conf. Artificial Intelligence (IJCAI 17), 2017, pp. 36–43.
5. S. Snook, Friendly Fire: The Accidental Shootdown of US Black Hawks over Northern Iraq, Princeton University Press, 2002.
ABOUT THE AUTHOR
Gary Klein is a senior scientist at MacroCognition LLC. He is a Fellow of the American Psychological Association and the Human Factors and Ergonomics Society. Contact him at gary@macrocognition.com.