DEPARTMENT: HUMAN-CENTERED COMPUTING
Explaining Explanation, Part 3: The Causal Landscape

Gary Klein, MacroCognition LLC

Editors: Robert Hoffman, rhoffman@ihmc.us; Ken Ford, kford@ihmc.us; Matthew Johnson, mjohnson@ihmc.us
This is the third in a series of essays about explanation.
After laying out the core theoretical concepts in the first
article, including aspects of causation and abduction, the
second article presented some empirical research to
reveal the great variety of purposes and types of causal
reasoning, as well as a number of different causal
explanation patterns. Now it’s time to look at reasoning
patterns. Taking the notion of reasoning patterns a step
further, the author describes a method whereby a decision maker can go from a causal
explanation to a viable course of action for making positive change in the future, not to
mention aiding decision making in general.
When we hear that an acquaintance or a celebrity has died, our first question often is: What happened? We expect to get a clear and simple explanation, just one or two words. Cancer. Heart attack. Car accident. Stroke. We can fit the reason into the distinct categories we have learned.
But further inspection often clouds the picture. Perhaps the deceased person had developed cancer after
years of smoking and refused to see a physician until it was too late. So did the person die from cancer,
unhealthy living habits, or obstinacy? Death certificates refer to the immediate cause of death (the final
condition) along with the underlying cause of death (any disease or injury that triggered the downward
spiral).1 For the cancer example, the immediate cause of death might have been pneumonia contracted
at the late stages of cancer, but a physician would probably list cancer, with smoking as an underlying
cause. There probably wouldn’t be any mention of the patient’s delay in getting a medical examination.
Yet we can point to multiple interacting causes, not a simple neat answer.
People want definitive answers, as if life events are a series of operations for which it is usually possible to affix blame and diagnose faults. For example, if a copy machine jams, there's usually a mechanical reason—a sheet of paper got stuck in the assembly, and once it's removed, the problem is solved. Mechanical problems such as this are determinate; there's a cause and it can be identified.
Yet most human problems aren't mechanical. They aren't determinate. There isn't a single cause. There are multiple, intersecting causes, and we might never uncover some of the most important ones.2,3 We live in a multi-cause, indeterminate world, and our attempts to understand why events occurred will usually be frustrating. We can't expect specific, single-cause, one- or two-word answers.
A further complication is how to clearly distinguish trigger causes from enabling causes that are preconditions.4 A trigger cause is immediate and obvious, such as dropping a lighted match onto a stack of newspapers that sets a house on fire. The lighted match is a trigger cause. The presence of oxygen in the house is an enabling cause, a precondition for the fire to be lit. The trigger cause gets the attention, but isn't always the best cause to address. Firefighters might spray foam on the fire to smother it—deprive it of oxygen. When it comes to human problems, a psychotherapist might listen when a client complains that a recent arbitrary action by her domineering and insensitive spouse made her feel helpless and anxious, but the skilled therapist might spend time on an enabling cause—the client's inability to assert herself, which has played out with her spouse, her child, and colleagues at work.
Returning to the example of the jammed copy machine, the piece of paper that got stuck is the trigger cause, but a poor design that permits lots of paper jams is an enabling cause. If you're waiting in line to make a copy, all you care about is the trigger cause; if you're the representative of the company manufacturing the copy machine, and you are continuously fielding customer complaints, you care about the enabling cause.
Fortunately, there’s a way to cope with complexity: the causal landscape.
THE CAUSAL LANDSCAPE CONCEPT
The concept is to portray a wide array of causes as a causal network, to help people escape from their single-cause, determinate mindset, but then to highlight a smaller number of causes that matter the most and that suggest viable courses of action. These are the causes that: (a) contributed most heavily to the effect (if they hadn't occurred, neither would the effect), and (b) are the easiest to negate or mitigate. When we want to take steps to prevent an adverse event, the highlighted nodes in a causal network are the places to start exploring.
The causal landscape's two-step method highlights the few causes worth addressing through two ratings: an impact score, which reflects how much each cause influenced the effect, and a reversibility score, which reflects the ease of eliminating that cause. The causes that had the strongest impact and are the easiest to reverse are the ones that offer the greatest potential to prevent future accidents or adverse events.
The causal landscape is a hybrid explanatory form that attempts to get the best of both worlds—both triggering and enabling causes. It portrays the complex range and interconnection of causes and identifies a few of the most important ones. Without reducing some of the complexity, we'd be confused about how to act.
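The selection step is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical Python sketch, assuming the high/medium/low ratings described in the text; the Cause dataclass and leverage_points function are illustrative names, not part of the published method.

```python
# Minimal sketch of the two-step causal-landscape filter described above.
# Class and function names are illustrative assumptions, not an established API.
from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    impact: str          # "high", "medium", or "low": how strongly it drove the effect
    reversibility: str   # "high", "medium", or "low": how easy it is to eliminate

def leverage_points(causes):
    """Return the causes rated high on both impact and reversibility."""
    return [c for c in causes
            if c.impact == "high" and c.reversibility == "high"]
```

The point of the sketch is only that the two ratings, taken together, act as a filter over the full causal network: everything stays visible, but a small subset is flagged for action.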
DETAILED EXAMPLE
Consider the 1994 friendly fire incident in which two US Air Force F-15 fighter jets shot down two US Army Black Hawk helicopters in northern Iraq, killing 26 peacekeepers. That's right: the military shot down its own aircraft. The incident occurred in broad daylight, with no other aircraft around. The F-15s and the Army helicopters were all being monitored by the same AWACS (Airborne Warning and Control System) airplane, which failed to prevent the incident. Scott Snook wrote a masterful analysis of the event in his 2002 book Friendly Fire, identifying a wide array of causes as shown in Figure 1 (reprinted with permission).5 There are a lot of causes leading to the red outcome at the bottom right—too many.
To create a causal landscape for this incident, I evaluated each node on two dimensions—the impact of the cause, and the ease of eliminating it. I rated each node high, medium, or low for impact (would reversing this cause have prevented the shootdown?), and high, medium, or low for ease of reversal (how much effort would it take to reverse the cause?). The reversibility scale is presented in Table 1.
For the Black Hawk shootdown, only six nodes were rated high on the impact and reversibility scales.
These are enlarged and highlighted in a second version of the diagram, presented in Figure 2. These
highlighted nodes are the leverage points for decision makers to consider when trying to prevent such
accidents in the future.
Figure 1. Snook’s causal network for the Black Hawk shootdown (reproduced with permission
from Friendly Fire: The Accidental Shootdown of US Black Hawks over Northern Iraq;
Princeton University Press, 2002).
There isn't much we can do about some of the nodes at the top of Snook's original diagram, such as the "Shrinking Defense Budget," or the "Changing World Order," or the "Long History of Inter-Service Rivalry." Other causes, such as the "Few Joint-Training Opportunities for Air Force and Army Pilots," are difficult to alter. In contrast, the highlighted nodes, such as the "No Helo Reps at Weekly Coord. Meetings," are easy to remedy—just invite someone from the helicopter community to sit in on these weekly meetings. The node about "Confusion over Responsibility for Helicopter Operations within OPC" is readily resolved by assigning someone in Operation Provide Comfort to track helicopter missions.
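To make the selection concrete, here is a small, self-contained sketch that applies the same high/high filter to a handful of the nodes named above. The ratings echo the illustrative, non-expert judgments described in the text; they are assumptions, not values taken from Snook's analysis, and the node names are abbreviated.

```python
# Illustrative ratings only; the (impact, reversibility) pairs are assumptions.
ratings = {
    "Shrinking Defense Budget":                          ("medium", "low"),
    "Long History of Inter-Service Rivalry":             ("medium", "low"),
    "Few Joint-Training Opportunities":                  ("medium", "medium"),
    "No Helo Reps at Weekly Coord. Meetings":            ("high",   "high"),
    "Confusion over Responsibility for Helo Ops in OPC": ("high",   "high"),
}

leverage_points = [node for node, (impact, reversibility) in ratings.items()
                   if impact == "high" and reversibility == "high"]
print(leverage_points)  # only the easy-to-fix, high-impact nodes are flagged
```

Only the last two nodes survive the filter, mirroring the kind of highlighting shown in Figure 2.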
This example illustrates what a causal landscape can look like. The ratings that I made (without any pretense of expertise in this area) are provided only to illustrate how the causal landscape, as a form of causal network, can aid decision makers as they try to redesign work processes. Further attempts to use this method in diverse applications, along with psychometric analysis, would be useful in establishing the method's validity and utility. Generally, the creation of a causal landscape could help people gain insights about how to navigate the multiple causes for events about which they care.
The causal landscape avoids simplistic single-cause explanations, and it also avoids exhaustive catalogs of the entire field of relevant causal factors. The concept of a "landscape" is intended to highlight the most actionable causes while retaining the context of the wider array of influences; that wider array is the landscape in which the actionable causes are shown.
Table 1. The Reversibility Scale.

Score 4: Impossible to change. For the military shootdown incident, these factors would include the fall of the Soviet Union, which contributed to the tragedy but cannot be undone. Similarly, for anxious clients wanting to understand why they are so easily overwhelmed, causes such as childhood neglect and heredity can play a role but can't be undone.

Score 3: Very difficult to change. For the helicopter shootdown, this includes a shrinking defense budget. For the anxious client, it might include financial problems and chronic pain.

Score 2: Changeable with some effort. These items aren't low-hanging fruit; they're a basis for making fundamental and lasting improvements. In the friendly-fire incident, two additional nodes are inter-service rivalry and too few joint-training exercises. Making these changes could prevent or reduce lots of different problems. The benefits strongly outweigh the costs. Similarly, therapists might help anxious clients learn general strategies such as coping skills.

Score 1: Simple to change. The shootdown could have been prevented if small changes had been made, such as arranging for helicopter representatives to attend the weekly coordination meetings. Simple fixes such as this would have prevented the shootdown but wouldn't create more general benefits. Similarly, treating anxious clients with anti-anxiety medications is easy to implement, but addresses only the immediate symptom.
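For readers who want to carry Table 1 into an analysis tool, the scale maps naturally onto an ordered set of values. The encoding below is one possible rendering, not part of the original method; the labels and examples come from the table.

```python
# One possible encoding of the Table 1 reversibility scale; the Enum itself is
# an assumption for illustration, while the labels and examples follow the table.
from enum import IntEnum

class Reversibility(IntEnum):
    SIMPLE_TO_CHANGE = 1             # e.g., invite helicopter reps to the weekly meetings
    CHANGEABLE_WITH_SOME_EFFORT = 2  # e.g., add joint-training exercises
    VERY_DIFFICULT_TO_CHANGE = 3     # e.g., a shrinking defense budget
    IMPOSSIBLE_TO_CHANGE = 4         # e.g., the fall of the Soviet Union

print(Reversibility.SIMPLE_TO_CHANGE < Reversibility.IMPOSSIBLE_TO_CHANGE)  # True
```

Note that on this numeric scale a lower number means an easier fix, whereas the prose ratings above treat "high reversibility" as easy; either convention works as long as it is applied consistently.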
POTENTIAL APPLICATIONS
The causal landscape format can be useful for accident investigation in domains such as aviation and
healthcare to help decision and policy makers avoid the “blame game” that accompanies reductive
thinking. However, we often want to do more than just diagnose a reason for the accident. We want to
direct the causal landscape forward, to prevent future accidents.
The friendly fire diagram shows a causal network, which is familiar to computer scientists, but the
causal landscape can be used to make the networks actionable. In the friendly fire example, we might
want to attack the deeper-rooted general conditions (for example, the inter-service rivalry, few joint
training exercises, and so on), which have increased the likelihood of not only this particular accident
but also an entire family of others.
Figure 2. Snook’s causal network highlighting the key nodes based on the scalar analysis.
(Reprinted with permission.)
For this to happen, the impact score should address each case as a general problem, and not just map the causal relations for a current accident or outcome. Thus, a military planner could use the friendly fire tragedy to see if there is a way to improve coordination between the Army and Air Force. Similarly, a psychotherapist explained to me how he used the method to help clients gain insights into the conditions and triggers that lead to their anxiety episodes. Causal landscapes can also help teams build common ground by having the team members generate their individual causal landscapes and then compare these.
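The common-ground use can also be sketched simply: each team member lists the causes they would highlight, and the overlaps and differences become the agenda for discussion. The member roles and highlighted nodes below are assumptions for illustration.

```python
# Sketch of comparing individual causal landscapes to surface common ground.
# Member roles and highlighted nodes are illustrative assumptions.
landscapes = {
    "mission planner": {"No Helo Reps at Weekly Coord. Meetings",
                        "Few Joint-Training Opportunities"},
    "AWACS crew lead": {"No Helo Reps at Weekly Coord. Meetings",
                        "Confusion over Responsibility for Helo Ops"},
}

shared = set.intersection(*landscapes.values())
print("Shared causes:", sorted(shared))
for member, highlighted in landscapes.items():
    print(f"{member} also flags:", sorted(highlighted - shared))
```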
Finally, the causal landscape has several potential applications in computer science. Artificial intelligence researchers could use it as a protocol for having complex systems explain the basis for their recommendations. In this application, a causal landscape could present information that explains AI systems globally (namely, their mechanisms) and also locally, such as how the AI makes decisions or categorizations for particular cases or instances. In addition, project teams could use the causal landscape as an interview format to enable subject-matter experts to explain why they made decisions; we know that people are not reliable when answering "why" questions, and the causal landscape might help broaden that kind of inquiry. Also, the causal landscape might be a way to represent users' mental models. Computer scientists might also want to explore whether there are emergent causes within the causal landscape—new causes formed by the intersection of existing lines of influence. This information could be used to redesign models and analyses.
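As a starting point for the emergent-cause question, one speculative heuristic is to look for nodes where several existing lines of influence converge. The sketch below assumes the networkx package and a hypothetical fragment of the shootdown network; the convergence rule is an assumption, not a claim about how emergent causes actually arise.

```python
# Speculative sketch: flag nodes where two or more lines of influence intersect.
# Edges are a hypothetical fragment; "Misidentification of the Helicopters" is
# an illustrative node name, not taken verbatim from Snook's diagram.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Inter-Service Rivalry", "Few Joint-Training Opportunities"),
    ("Shrinking Defense Budget", "Few Joint-Training Opportunities"),
    ("Few Joint-Training Opportunities", "Misidentification of the Helicopters"),
    ("No Helo Reps at Weekly Coord. Meetings", "Misidentification of the Helicopters"),
])

# Nodes with two or more incoming influences are candidates to inspect for
# effects that none of the contributing causes would produce on its own.
convergence_points = [n for n in g.nodes if g.in_degree(n) >= 2]
print(convergence_points)
```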
CONCLUSION
Currently, professionals in a variety of settings strive for single-cause explanations of why some event happened, which oversimplifies the situation. We know that multiple causes lead to specific events. Some communities do seek to identify a range of causes. An example would be the use of Root Cause Analyses in hospitals. One problem with Root Cause Analyses is that hospitals often rely on a standard set of potential causes, making the exercise sterile after a few iterations. These causes include: training, human factors issues, fatigue, failure to follow procedures, unavailability of specialists, and so forth. Usually several of these issues are flagged. But flagging contributing causes isn't the same as diagnosing what went wrong in a specific case, or showing how the different causes relate to each other, or, most important, formulating a cost-effective plan to reduce the chances of similar adverse effects in the future.
Aviation has its own protocols for reviewing adverse incidents, with its own standard set of causal factors, such as the aircrew's currency, medical status, and recent personal history; the aircraft's maintenance history; Air Traffic Control issues; environmental conditions; policies; command climate; the flight planning process; the completeness and accuracy of the flight execution; potential hardware or systems problems; and so forth. Sometimes these causal factors are weighted based on how much they contributed to the incident, or they're at least rated as major or minor. The resulting actions typically involve policy, procedural, and/or training changes.
The causal landscape is certainly consistent with these kinds of approaches. It goes beyond them in a few ways, however. First, it represents the range of contributing causes and shows the relationships among these causes. Second, it systematically assesses the degree to which each cause contributed to the outcome, much the same as the aviation community but with a more nuanced rating scale. Third, it assesses the relative ease of addressing each cause. In these ways, the causal landscape conveys the richness of the causal field while at the same time helping people avoid getting overwhelmed or discouraged by complexity.
ACKNOWLEDGMENTS
I would like to thank Robert Hoffman for his many contributions to my thinking about causal reasoning, and for his careful and patient editing of this paper. I also appreciate suggestions provided by Matthew Johnson. The original work on the causal landscape was supported by the US Air Force under Contract FA8650-04-D-6546, Task Order 13 ("Naturalistic Model of Causal Reasoning"). The preparation of this manuscript was supported by the DARPA Explainable AI Program, Award No. FA8650-17-2-7711.
REFERENCES
1. K. Schulz, "Final Forms: What Death Certificates Can Tell Us and What They Can't," The New Yorker, 7 April 2014, pp. 32–37.
2. R.R. Hoffman, S.T. Mueller, and G. Klein, "Explaining Explanation, Part 2: Empirical Foundations," IEEE Intelligent Systems, vol. 32, no. 4, 2017, pp. 78–86.
3. R.R. Hoffman and G. Klein, "Explaining Explanation, Part 1: Theoretical Foundations," IEEE Intelligent Systems, vol. 32, no. 3, 2017, pp. 68–73.
4. T. Miller, P. Howe, and L. Sonenberg, "Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences," Proc. Int'l Joint Conf. Artificial Intelligence (IJCAI 17), 2017, pp. 36–43.
5. S. Snook, Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq, Princeton University Press, 2002.
ABOUT THE AUTHOR
Gary Klein is a senior scientist at MacroCognition LLC. He is a Fellow of the American Psychological Association and the Human Factors and Ergonomics Society. Contact him at gary@macrocognition.com.