Towards XAI: Structuring the Processes of Explanations
Mennatallah El-Assady, Wolfgang Jentner, Rebecca Kehlbeck, Udo Schlegel, Rita
Sevastjanova, Fabian Sperrle, Thilo Spinner, Daniel Keim
University of Konstanz
Konstanz, Germany
ABSTRACT
Explainable Artificial Intelligence describes a process to reveal the logical propagation of operations that transform a given input to a certain output. In this paper, we investigate the design space of explanation processes based on factors gathered from six research areas, namely, Pedagogy, Storytelling, Argumentation, Programming, Trust-Building, and Gamification. We contribute a conceptual model describing the building blocks of explanation processes, including a comprehensive overview of explanation and verification phases, pathways, mediums, and strategies. We further argue for the importance of studying effective methods of explainable machine learning, and discuss open research challenges and opportunities.
Figure 1: The proposed explanation process model. On the highest level, each explanation consists of different phases that structure the whole process into defined elements. Each phase contains explanation blocks, i.e., self-contained units to explain one phenomenon based on a selected strategy and medium. At the end of each explanation phase, an optional verification block ensures the understanding of the explained aspects. Lastly, to transition between phases and building blocks, different pathways are utilized.
INTRODUCTION
Sparked by recent advances in machine learning, lawmakers are reacting to the increasing dependence
on automated decision-making with protective regulations, such as the General Data Protection
Regulation of the European Union. These laws prescribe that decisions based on fully automated algorithms need to be accompanied by clear-cut reasoning and justifications for affected people. Hence, to address this demand, the field of Explainable Artificial Intelligence (XAI) accelerated, combining expertise from different backgrounds in computer science and other related fields to tackle the challenge of providing logical and trustworthy reasoning for automated decisions.
The act of making something explainable entails a process that reveals and describes an underlying phenomenon. This has been the subject of study and research in different fields over the centuries. Therefore, to establish a solid foundation for explainable artificial intelligence, we need a structured approach based on insights as well as well-studied practices on explanation processes. Structuring these methodologies and adapting them to the novel challenges facing AI research is essential for advancing effective XAI.
In this paper, we contribute a conceptual framework for effective explanation processes based on the analysis of strategies and best practices in six different research fields. We postulate that accelerating the maturation of XAI and ensuring its effectiveness has to rely on the study of relevant research domains that established well-studied processes to communicate information and knowledge. Consequently, learning and transferring these processes will bootstrap XAI and advance the development of tailored methodologies based on challenges unique to this young field.

Position Statement: Multiple research domains established well-studied processes to communicate information and knowledge. Learning and transferring these processes will enable us to build effective methodological foundations for XAI.
Background
To place our model in context, we studied and analyzed related work from six different research areas: Pedagogy, Storytelling, Argumentation, Programming, Trust-Building, and Gamification.
Due to space constraints, this section compactly describes the main ideas gathered from our analysis of the related work and is based on up to three of the most relevant research articles for each of the six fields. A more complete overview of other works is provided in the appendix of this paper.
Pedagogy. Proper methods of Pedagogy develop insight and an understanding of how to explain [1]. However, good education involves many different strategies, such as induction and deduction [1, 2], methods, and mediums. Some methods, for instance, are explicit explanations using examples [1], group work and discussion [3], and students explaining to other students [3].
Storytelling in combination with data visualization is often practiced to explain complex phenomena in data and provide background knowledge [4]. Various strategies, e.g., martini glass structure, interactive slideshow, and drill-down story, exist to structure the narratives [5].
Dialog and Argumentation.
In the humanities, many models for dialog and argumentation exist.
Fan and Toni argue that “argumentation can be seen as the process of generating explanations” and propose a theoretical approach [6]. Miller [7] provides an extensive survey of insights from the
social sciences that can be transferred to XAI. Madumal et al. [8] propose a dialog model for explaining
artificial intelligence.
Methodology: Based on the analysis of the six research areas, we derived a number of different effective explanation strategies and best practices. These are grouped and categorized (over three iterations) into different elements, which are then put together to build the proposed model for explanation processes in XAI. In every step, the integrity of the model is cross-referenced with the original strategies (extracted from the research areas) to ensure their compatibility. The resulting conceptual model is described in the next section.
Programming. Programming languages are inherently structural, and software should be self-explanatory. A popular and widely accepted way to achieve this is the use of design patterns [9] and other concepts [10]. Design patterns imply abstraction, which is crucial for XAI.
Trust Building. There exists no active trust-building scheme for AI. Instead, trust relies on explanation and transparency of the system [11]. Miller et al. [13] and Siau and Wang [12] argue that the system has to continuously clear doubts over time to increase the user's trust, and that factors such as reliability and false alarm rate are essential for AI systems [12, 13].
Gamification is an integration of game elements and game thinking in non-gaming systems or activities with the goal to motivate the users and foster their engagement [14, p. 10]. To support the defined goal, the interaction of the system needs to be adapted "to a given user with game-like targeted communication" [15]. Such systems are usually designed to have several levels or modes with increasing complexity [14, p. 11]. It is important, though, that the user can specify which task to tackle next [16].
MODELLING OF EXPLANATION PROCESSES IN XAI
Figure 2: Fruit classification example: the first phase starts with a module that explains a decision tree classifier using a video. This transitions to two alternative explanation blocks, where the left one uses visualizations of the model and the right block demonstrates the features using verbalization. The following verification block ensures that the previously learned material is understood. The second phase introduces another two explanation blocks to diagnose the model. The left one follows the visualization mantra of phase one. The right block uses a visualization to depict the data with its features.
We define an explanation process as a sequence of phases that, in turn, consist of explanation and verification blocks. Each of these building blocks uses a medium and a strategy for explanation or verification, respectively. The connection between these blocks is defined through pathways. A schematic representation of our model is depicted in Figure 1.
In this example, the explanation process is comprised of two phases (for AI understanding and diagnosis, respectively). The first phase consists of three explanation blocks, followed by a verification block. A linear pathway connects all building blocks in the process. However, some explanation blocks are positioned on alternative paths. For example, users start the explanation in Phase 1 by watching a video that uses a simplification strategy for explanation; then they can choose between a visualization or a verbalization component as a second step. After choosing, for example, the verbal explanation by abnormality block, they can transition to a verification block, which uses a flipped classroom strategy and verbalization as a medium.
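To make these terms concrete, the following minimal Python sketch (our own illustration; all class, enum, and variable names are hypothetical and not part of the paper) models phases, blocks, mediums, strategies, and pathways, and instantiates the Phase 1 walkthrough described above:

# Minimal, hypothetical sketch of the proposed building blocks (not an official implementation).
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class Medium(Enum):
    VIDEO = auto()
    VISUALIZATION = auto()
    VERBALIZATION = auto()

class Strategy(Enum):
    SIMPLIFICATION = auto()              # inductive
    EXPLANATION_BY_EXAMPLE = auto()      # inductive
    EXPLANATION_BY_ABNORMALITY = auto()  # contrastive
    FLIPPED_CLASSROOM = auto()           # verification

class Pathway(Enum):
    LINEAR_GUIDED = auto()
    ITERATIVE_SERENDIPITOUS = auto()

@dataclass
class Block:
    """A self-contained unit explaining (or verifying) one phenomenon."""
    name: str
    medium: Medium
    strategy: Strategy
    is_verification: bool = False

@dataclass
class Phase:
    """A phase groups explanation blocks and an optional verification block."""
    name: str
    blocks: List[Block] = field(default_factory=list)
    verification: Optional[Block] = None
    pathway: Pathway = Pathway.LINEAR_GUIDED

# Phase 1 of the walkthrough: a video (simplification), then a choice of
# visualization or verbalization, closed by a flipped-classroom verification.
phase1 = Phase(
    name="AI understanding",
    blocks=[
        Block("intro video", Medium.VIDEO, Strategy.SIMPLIFICATION),
        Block("model visualization", Medium.VISUALIZATION, Strategy.EXPLANATION_BY_EXAMPLE),
        Block("verbal explanation", Medium.VERBALIZATION, Strategy.EXPLANATION_BY_ABNORMALITY),
    ],
    verification=Block("explain it back", Medium.VERBALIZATION,
                       Strategy.FLIPPED_CLASSROOM, is_verification=True),
)

if __name__ == "__main__":
    for b in phase1.blocks:
        print(f"{phase1.name}: {b.name} ({b.medium.name}, {b.strategy.name})")

The sketch only fixes vocabulary; any concrete XAI system would attach actual content and interaction logic to each block.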
Our explanation process model is instantiated in Figure 2 as a simplistic example of explaining the
classification of fruits (using an analogous structure to the abstract process of Figure 1).
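As a code companion to the fruit example in Figure 2, the sketch below trains a small decision tree classifier and verbalizes its decision rules; the toy features (weight, color score), the training data, and the use of scikit-learn are our own illustrative assumptions, not taken from the paper:

# Hypothetical companion to the fruit example: a decision tree classifier
# whose rules are "verbalized" as text (one possible explanation medium).
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [weight in grams, color score from 0 (green) to 1 (red)].
X = [[150, 0.9], [170, 0.8], [120, 0.2], [130, 0.3], [300, 0.1], [280, 0.2]]
y = ["apple", "apple", "lime", "lime", "melon", "melon"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# Verbalization medium: textual rendering of the learned decision rules.
print(export_text(clf, feature_names=["weight_g", "color_score"]))

# Explanation by example: classify an unseen fruit and report the prediction.
sample = [[160, 0.85]]
print("Predicted class:", clf.predict(sample)[0])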
In the following, all elements of our model are discussed in more detail.
Pathways. An explanation process is comprised of different modules, i.e., phases that contain explanation and verification blocks. To connect these modules into a global construct, we define transitions, the so-called pathways. These can be linear or iterative, allowing building blocks in the process to be visited once or multiple times. Additionally, the navigation defined by them can be guided or serendipitous, enabling a strict framing or open exploration.
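One way these pathway properties might be operationalized is sketched below (our own hypothetical interpretation): guided navigation enforces the defined order, serendipitous navigation lets the user pick any block, and iterative pathways simply allow revisits.

# Hypothetical pathway navigator: guided vs. serendipitous, linear vs. iterative.
from typing import List, Set

class PathwayNavigator:
    def __init__(self, blocks: List[str], guided: bool = True, iterative: bool = False):
        self.blocks = blocks          # ordered building blocks of a phase
        self.guided = guided          # strict framing vs. open exploration
        self.iterative = iterative    # may blocks be visited more than once?
        self.visited: Set[str] = set()
        self.position = 0

    def available(self) -> List[str]:
        """Blocks the user may visit next under the pathway's rules."""
        if self.guided:
            candidates = self.blocks[self.position:self.position + 1]
        else:
            candidates = list(self.blocks)
        if not self.iterative:
            candidates = [b for b in candidates if b not in self.visited]
        return candidates

    def visit(self, block: str) -> None:
        if block not in self.available():
            raise ValueError(f"{block!r} is not reachable on this pathway")
        self.visited.add(block)
        if self.guided:
            self.position += 1

nav = PathwayNavigator(["video", "visualization", "verification"], guided=True)
nav.visit("video")
print(nav.available())  # -> ['visualization']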
Mediums. Lipton [17] states that common approaches to describe how a model behaves, and why, usually include verbal (natural language) explanations, visualizations, or explanations by example. In current explainable AI systems, visualization is the most frequently applied medium. However, Sevastjanova et al. [18] argue for a combination of visualization and verbalization techniques, which can lead to deeper insights and a better understanding of the ML model. For instance, the user could engage with an agent through a dialog system, interacting with visualizations and asking questions in natural language in order to understand the decisions made by the model. In storytelling, a combination of text and visual elements is used in diverse formats to communicate about the data effectively. Comics, illustrated texts, and infographics are three widely applied formats, which differ in the level of user guidance and the way text and visual elements are aligned [19]. In addition to the previously mentioned mediums, one might employ multimedia (e.g., video, audio, images, video games) to either engage the user in exploring the ML model in more detail (e.g., if the explanation is an integral part of a video game) or to provide explanations from another perspective.
Design space overview (sidebar):
  Pathways: linear vs. iterative; guided vs. serendipitous.
  Mediums: visualizations, verbalizations, infographics, illustrated text, comics, videos, audio, images, video games, dialog systems.
  Explanation strategies:
    Inductive (bottom-up): simplification, metaphorical narrative, divide and conquer, explanation by example, dynamic programming, depth first / breadth first, describe and define, teaching by categories.
    Deductive (top-down): transfer learning; teaching by association; overview first, details on demand; drill-down story; define and describe.
    Contrastive (comparison): opposite and similar, example by abnormality.
  Verification strategies: flipped classroom, reproduction, transfer.

Explanation Strategies. In logic and philosophy, two opposing strategies for reasoning are often named [2]: inductive and deductive reasoning. The first strategy, inductive reasoning, is defined by Aristotle as “the conclusion process for a general knowledge out of observed events” [2]. The second strategy, deductive reasoning, builds the opposite and is defined as “the conclusion process from given premises to a logical closure” [2]. Such basic strategies can be found throughout the literature in different fields, i.e., inductive (bottom-up) explanations vs. deductive (top-down) approaches. Inductive strategies first explain smaller and observable details, followed by complex relations. Hence, the explanation of the details should facilitate the understanding of the general and abstract concept. Examples of inductive strategies include simplification, explanation by example, or metaphorical narratives. Deductive strategies start with the whole picture (general idea) as an overview; then more details are added and explained to show a more complete view. Examples of deductive strategies include overview first, details on demand, or transfer learning.
In addition to these two groups, we identified another useful explanation method based on comparative analysis, so-called contrastive explanation. Such strategies rely on putting two phenomena side-by-side in a comparison and highlighting their contrast. The explanation could then be performed using induction or deduction. One noteworthy example of this category is the strategy “explaining by abnormality”, where the unusual manifestations of a phenomenon are shown to contrast the “normal” state and prevent misconceptions.
Lastly, it is worth noting that the overall structure of the phases in an explanation process can be designed based on guidelines derived from explanation strategies (optionally increasing the complexity of the process towards more intricate or recursive explanations).
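The difference between inductive, deductive, and contrastive structuring can be illustrated with a small sketch that orders the same explanation content in different ways; the content items and granularity labels below are hypothetical.

# Hypothetical ordering of explanation content under different strategies.
# Each item pairs an abstraction level (0 = most general) with its content.
content = [
    (0, "The classifier separates fruits into three classes."),
    (1, "It does so by thresholding weight and color features."),
    (2, "For example, items lighter than 200 g with a high color score are apples."),
]

def inductive(items):
    """Bottom-up: concrete details first, general concept last."""
    return [text for _, text in sorted(items, key=lambda it: -it[0])]

def deductive(items):
    """Top-down: overview first, details on demand."""
    return [text for _, text in sorted(items, key=lambda it: it[0])]

def contrastive(normal, abnormal):
    """Comparison: put the usual and the unusual case side by side."""
    return [f"Normally: {normal}", f"In contrast (abnormality): {abnormal}"]

print(inductive(content))
print(deductive(content))
print(contrastive("ripe apples are classified as apples",
                  "an unripe, green apple is classified as a lime"))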
Verification Strategies. To ensure that users have gained an encompassing and sound understanding of the underlying subject matter, explanation processes need to include verification strategies. We propose optional verification blocks at the end of each phase to establish a stable common ground as a conclusion for that phase, before allowing users to advance to the next one (which typically increases the complexity). In contrast to explanation strategies, verification strategies usually require users to demonstrate the learned phenomena. They include strategies that are based on questions for reproducing or transferring knowledge, as well as “flipping the classroom”, i.e., having users explain the learned concepts back to the system.
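A verification block could, for instance, be realized as a small question-answer check. The sketch below (hypothetical prompts, keywords, and scoring threshold of our own choosing) contrasts a reproduction question with a flipped-classroom prompt.

# Hypothetical verification block: reproduction question vs. flipped classroom.
from dataclasses import dataclass
from typing import List

@dataclass
class VerificationItem:
    strategy: str        # "reproduction", "transfer", or "flipped_classroom"
    prompt: str
    expected_keywords: List[str]

    def check(self, answer: str, threshold: float = 0.5) -> bool:
        """Crude keyword overlap as a stand-in for a real grading component."""
        hits = sum(1 for kw in self.expected_keywords if kw in answer.lower())
        return hits / len(self.expected_keywords) >= threshold

reproduction = VerificationItem(
    strategy="reproduction",
    prompt="Which two features does the fruit classifier split on?",
    expected_keywords=["weight", "color"],
)

flipped = VerificationItem(
    strategy="flipped_classroom",
    prompt="Explain to the system, in your own words, how the decision tree decides.",
    expected_keywords=["threshold", "feature", "split"],
)

print(reproduction.check("It splits on weight and color."))                              # True
print(flipped.check("It compares each feature to a threshold and follows the split."))   # True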
DISCUSSION: BEST PRACTICES, GUIDELINES, AND RESEARCH OPPORTUNITIES
Several considerations have to be made to select and structure the presented strategies. The decisions should mainly be based on (i) the targeted level of detail, (ii) the target audience, and (iii) the desired level of interactivity of the target audience. The level of detail considerably impacts the choice of the strategies and their respective structure and sequence. The spectrum ranges from answering the question of what the respective machine learning model(s) are achieving to how the model(s) work in detail. To answer the former, a possible consideration could be the use of metaphorical narratives [20], while the latter needs more accurate and precise descriptions conveyed through mathematical notations and pseudocode. Two things have to be considered regarding the target audience. Their size and composition affect the use of mediums and the level of interactivity, while their background knowledge on the subject matter reduces the required distance to reach the desired level of detail. The interactivity may vary during the phases. It is beneficial to increase the level of interactivity in verification phases to receive more, and more profound, feedback.
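These considerations can be summarized as a simple rule-of-thumb selector; the sketch below merely restates the discussion above under hypothetical parameter names and thresholds and is not meant to be prescriptive.

# Hypothetical rule-of-thumb selector restating the considerations above.
def select_explanation_design(level_of_detail: str, audience_size: int,
                              background: str, interactivity: str) -> dict:
    """level_of_detail: 'what' or 'how'; background: 'novice' or 'expert';
    interactivity: 'low' or 'high'."""
    design = {}
    # (i) Level of detail drives the strategy and medium.
    if level_of_detail == "what":
        design["strategy"] = "metaphorical narrative"            # cf. [20]
        design["medium"] = "visualization + verbalization"
    else:
        design["strategy"] = "overview first, details on demand"
        design["medium"] = "mathematical notation + pseudocode"
    # (ii) Audience size and composition constrain medium and interactivity.
    design["interactive_medium"] = audience_size <= 10 and interactivity == "high"
    # (iii) Background knowledge shortens the path to the desired detail level.
    design["skip_basics"] = background == "expert"
    # Verification phases benefit from increased interactivity.
    design["verification_interactivity"] = "high"
    return design

print(select_explanation_design("what", audience_size=5,
                                background="novice", interactivity="high"))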
Research Opportunities
- Implement, test, and verify different explanation strategies. Is the knowledge from other domains transferable to XAI explanation processes?
- Identify the most suitable processes for different settings, tasks, and AI models.
- Extend strategies to tailor them to XAI.
- Evaluate the proposed explanation model, e.g., through user studies and testing of alternative models.
- Make XAI processes reactive to the users' interaction through automatic pathway generation, e.g., through active learning.
- Tailoring the explanation strategies: which strategy works best in which environment, for specific target groups, and for various levels of AI complexity?
- Designing Visual Analytics systems that integrate the users' interactions into a mixed-initiative model-refinement cycle.
Take Home Messages
- Studying and combining explanation processes is critical to establish effective XAI methodologies and mature this research field.
- Best practices and tailored explanation processes can streamline XAI and account for different circumstances, such as task complexity, data characteristics, model type, and user expertise.
- Given clear problem specifications, as well as well-studied and detailed guidelines, we can progress toward automatically generating XAI processes as design templates for successful explanations and model refinements.
It is possible to engage users through gamification to raise their motivation [14, p. 10] while continuously receiving and providing feedback between the explainer and the user [15]. The feedback aspect can be well exploited in verification blocks, whereas the motivational support may drive the user to explore multiple pathways of the explanation process as well as the machine learning model itself in more detail. Tracking and displaying the progress serves as an extrinsic motivation [14, p. 52], allowing the user to better navigate the various pathways.
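As a minimal sketch of progress tracking as extrinsic motivation (point values and block names are hypothetical): completed blocks earn points, and the displayed progress helps the user navigate the remaining pathways.

# Hypothetical progress tracker: extrinsic motivation via points and a progress display.
class ExplanationProgress:
    def __init__(self, all_blocks):
        self.all_blocks = set(all_blocks)
        self.completed = set()
        self.points = 0

    def complete(self, block, verified: bool = False):
        """Record a finished block; verification blocks yield bonus points."""
        if block in self.all_blocks and block not in self.completed:
            self.completed.add(block)
            self.points += 20 if verified else 10

    def progress(self) -> str:
        done, total = len(self.completed), len(self.all_blocks)
        return f"{done}/{total} blocks, {self.points} points"

tracker = ExplanationProgress(["video", "visualization", "verbalization", "quiz"])
tracker.complete("video")
tracker.complete("quiz", verified=True)
print(tracker.progress())  # -> 2/4 blocks, 30 points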
CONCLUSIONS
Valuable strategies can be extracted and abstracted from various research areas. These strategies serve as an important and well-researched baseline to bootstrap the process of explainable machine learning. Our proposed model classifies these strategies and combines them as building blocks to actualize an explanation process for machine learning, while keeping the flexibility of using different mediums and transition paths. The list of collated strategies is not exhaustive, yet the proposed model allows many variations and extensions, which provides space for further research opportunities. Additionally, existing XAI approaches can be analyzed and deconstructed to extract their building blocks and validate whether our proposed model can be adopted. Successful explanation processes can then be compared and analyzed regarding common patterns.
REFERENCES
[1] Odora Ronald James. 2014. Using Explanation as a Teaching Method: How Prepared are High School Technology Teachers in Free State Province, South Africa. Journal of Social Sciences, 38, 71–81.
[2] Robert J. Sternberg and Karin Sternberg. 2016. Cognitive Psychology. Nelson Education.
[3] Richard Gunstone, editor. 2015. Explaining as a Teaching Strategy. Encyclopedia of Science Education. Springer Netherlands, Dordrecht, 423–425.
[4] Robert Kosara and Jock D. Mackinlay. 2013. Storytelling: The Next Step for Visualization. IEEE Computer, 46, 5, 44–50.
[5] Edward Segel and Jeffrey Heer. 2010. Narrative Visualization: Telling Stories with Data. IEEE Trans. Vis. Comput. Graph., 16, 6, 1139–1148.
[6] Xiuyi Fan and Francesca Toni. 2014. On Computing Explanations in Abstract Argumentation. In Proc. of ECAI. IOS Press, 1005–1006.
[7] Tim Miller. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1–38.
[8] Prashan Madumal, Tim Miller, Frank Vetere, and Liz Sonenberg. 2018. Towards a Grounded Dialog Model for Explainable Artificial Intelligence. In Workshop on SCS.
[9] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns: Elements of Reusable Object-oriented Software. Addison-Wesley Longman Publishing.
[10] Mark Dominus. 2006. Design Patterns of 1972 — The Universe of Discourse. [Online; accessed 7-February-2019]. https://blog.plover.com/prog/design-patterns.html.
[11] Wolter Pieters. 2011. Explanation and Trust: What to Tell the User in Security and AI? Ethics and Information Technology, 13, 1, 53–64.
[12] Keng Siau and Weiyu Wang. 2018. Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31, 47–53.
[13] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. CoRR, abs/1712.00547.
[14] Karl M. Kapp. 2012. The Gamification of Learning and Instruction: Game-based Methods and Strategies for Training and Education (1st edition). Pfeiffer & Company.
[15] Cathie Marache-Francisco and Eric Brangier. 2013. Process of Gamification. In Proceedings of the 6th CENTRIC, 126–131.
[16] Jakub Swacha and Karolina Muszynska. 2016. Design Patterns for Gamification of Work. In Proc. of TEEM.
[17] Zachary Chase Lipton. 2016. The Mythos of Model Interpretability. CoRR, abs/1606.03490.
[18] Rita Sevastjanova, Fabian Beck, Basil Ell, Cagatay Turkay, Rafael Henkin, Miriam Butt, Daniel A. Keim, and Mennatallah El-Assady. 2018. Going beyond Visualization: Verbalization as Complementary Medium to Explain Machine Learning Models. In Proc. of VISxAI Workshop, IEEE VIS.
[19] Zezhong Wang, Shunming Wang, Matteo Farinella, Dave Murray-Rust, Nathalie Henry Riche, and Benjamin Bach. 2019. Comparing Effectiveness and Engagement of Data Comics and Infographics. In Proc. of ACM CHI.
[20] Wolfgang Jentner, Rita Sevastjanova, Florian Stoffel, Daniel A. Keim, Jürgen Bernard, and Mennatallah El-Assady. 2018. Minions, Sheep, and Fruits: Metaphorical Narratives to Explain Artificial Intelligence and Build Trust. In Proc. of VISxAI Workshop, IEEE VIS.
APPENDIX: RELATED WORK
Complete overview of surveyed fields, setting the most relevant related work in context.
Pedagogy
“Good teaching is good explanation.” [21] Proper methods of Pedagogy develop insight and an understanding of how to explain [1]. However, good education involves many different strategies, like induction and deduction [1, 2], methods, and mediums. Some methods, for instance, are explicit explanations using examples [1], group work and discussion [3], and students explaining to other students [3]. These methods and strategies build on the logical and philosophical base of induction and deduction [1, 2]. Further logical operations can be incorporated to extend these methods and strategies, namely comparison, analysis, synthesis, and analogy [1]. In this context, an explanation has three different parts: something that is to be explained; an explainer, the one who explains; and an explainee, who receives the explanation [22]. If the explainer wants to provide a good explanation to the explainee, the explanation has to be clearly structured and interesting to the explainee [23]. Good explanations can invoke understanding. However, bad explanations may leave both explainee and explainer confused and bored [23]. Brown and Atkins [24] describe three types of explanations: descriptive, interpretive, and reason-giving. A descriptive explanation can be characterized as describe and define and explains processes and procedures [24]. An interpretive explanation specifies the central meaning of a term and can be seen as define and describe. Lastly, a reason-giving explanation provides reasons based on generalizations and can be interpreted as teaching by categories. There are more proposed strategies and methods, e.g., Wragg [23] or Brown and Armstrong [25], but in general, they can be summarized by the explanation strategies and methods above.
Storytelling
Storytelling has been used for millennia in human history to communicate information, transfer knowledge, and entertain [5]. Outlining the complete field is almost impossible, as storytelling is as diverse as humanity. However, commonalities appear when looking at this field at a more abstract level. We explicitly focus on works of storytelling in combination with data visualization, as this is often practiced to explain complex phenomena in data and provide background [4]. Machine learning follows this goal; however, it is not sufficient to understand the phenomena in the data. A user must also learn how and why these phenomena appear in order to verify and validate them. This further affects the trust-building process positively. Various strategies exist to structure narratives uniting data visualization [5], and best practices have been extracted and summarized to improve storytelling for visualizations [26]. We transfer and provide a taxonomy for those strategies that we deem useful to explain machine learning.
Dialog and Argumentation
In the humanities, many models for dialog and argumentation exist, with Hilton stating that “causal explanation takes the form of conversation” [27]. This conversation involves both cognitive and social processes [8] and is “first and foremost a form of social interaction” [7]. According to Grice, utterances in this conversation should follow the four maxims of quantity, quality, relation (relevance), and manner [28]. Miller notes that “questions could be asked by interacting with a visual object, and answers could similarly be provided in a visual way” [7]. Many of these principles have been applied to explainable AI, as surveyed by Miller [7]. Fan and Toni argue that “argumentation can be seen as the process of generating explanations” and propose two theoretic approaches [6, 29]. Madumal et al. propose a dialog model for explaining artificial intelligence [8], and Zeng et al. introduced an argumentation-based approach for context-based explainable decisions [30]. Here, the “schemes for practical reasoning” and “schemes for applying rules to cases” from Walton and Macagno's classification system for argumentation schemes [31] seem particularly interesting.
Programming
Typically, during programming, common software design patterns and best practices are followed. While the main goal of such patterns is to provide “general, reusable solution[s] to [...] commonly occurring problem[s]” [32], they often act as a self-explanation strategy for complex software systems. For the programmer, software design patterns improve readability and traceability and help with building up a mental model of the system. Many software design patterns can be classified using the categories introduced in the section Modelling of Explanation Processes in XAI. The program flow is the pathway of code: it can be linear (block), iterative (loop), or part of itself (recursion). Algorithms can follow a top-down or bottom-up approach. The main strategy followed in software design patterns is abstraction. Abstraction is the core concept of modern high-level programming languages [33] and is closely related to Shneiderman's mantra “overview first, [...] details on demand” [34]. While, again, the strategy in the first instance has a practical use, it also takes on an explanatory role for the programmer, who can understand the full program on a higher level and, if needed, go deeper to view the details. Abstraction does not only occur as a concept of language design, but also in many discrete programming patterns. This ranges from the simple concept of subroutines [10] up to many of the design patterns for object-oriented programming proposed by Gamma et al. (GoF) [9], e.g., facade or iterator.
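To illustrate how a pattern can take on an explanatory role, the sketch below applies a facade in the spirit of “overview first, details on demand”: the facade names the high-level steps, while the detailed subroutines remain available for inspection. The classes and method names are our own toy example, not drawn from [9].

# Toy facade: the high-level method reads like an overview of the pipeline,
# while the subroutines hold the details on demand.
class DataLoader:
    def load(self):
        return [1, 2, 3, 4]

class Preprocessor:
    def normalize(self, data):
        m = max(data)
        return [x / m for x in data]

class Model:
    def predict(self, data):
        return [x > 0.5 for x in data]

class PipelineFacade:
    """Overview first: one call names the steps; details live in the subsystems."""
    def __init__(self):
        self.loader = DataLoader()
        self.preprocessor = Preprocessor()
        self.model = Model()

    def run(self):
        data = self.loader.load()                  # step 1: load
        data = self.preprocessor.normalize(data)   # step 2: preprocess
        return self.model.predict(data)            # step 3: predict

print(PipelineFacade().run())  # -> [False, False, True, True]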
Trust Building
Trust in machine learning systems is highly dependent on the system itself and how it can be explained. Glass, Mcguinness, and Wolverton find that "trust depends on the granularity of explanations and transparency of the system" [11, 35]. As trust building is usually done with humans, many natural trust mechanisms rely on a person's presence. However, these are not available in AI systems, and therefore the explanation and transparency of the system become the most important factors that influence the user's trust. Pieters argues that there is a difference between having trust in a system, where the user completely understands the decisions of the system itself and can therefore make active decisions about the result, and confidence in a system, where the user does not need to know the inner workings in order to use its results [11]. Miller, Howe, and Sonenberg and Siau and Wang argue that trust is dynamic and built up in a gradual manner. Furthermore, there is a differentiation between initial trust that the user has obtained through external factors, e.g., cultural aspects, and the trust that they build while using the system. The system should continuously clear potential doubts over time by providing additional user-driven information. Factors such as reliability, validity, robustness, and false alarm rate influence how the user develops trust in the system, and should play an integral role when designing the system [12]. Lombrozo shows that people disproportionately prefer simpler explanations over more likely explanations [36]. Therefore, explanations should aim to carry only the appropriate amount of information. Furthermore, people prefer contrastive explanations at certain parts of the system, because otherwise the cognitive burden of a complete explanation is too great [13].
Gamification
Gamification is an integration of game elements and game thinking in non-gaming systems or activities [14, p. 10]. It aims at motivating users [37, p. 4] to foster their engagement [14, p. 10]. Gamification uses several concepts to achieve this goal. Usually, a user is asked to accomplish tasks to earn points; these points are accumulated, and based on the achieved result the user may receive rewards. To support the defined goal, the interaction needs to be adapted "to a given user with game-like targeted communication" [15]. Thus, to increase engagement, games are usually designed to have several levels or modes with increasing complexity [14, p. 10]. It is important, though, that the user can specify which task to tackle next and which pathway to take to achieve the goal [16]. In order to make these systems more attractive, game elements are designed to generate positive emotions. Usually, this is done by applying a specific vocabulary (e.g., simplification) or narrations [15]. According to Bowser et al. [38], different user groups prefer different types of interfaces. It is thus important to adapt the system to the specific user profile by showing only the elements relevant for their particular task [15].
REFERENCES
[21] Robert C. Calfee. 1986. Handbook of Research on Teaching. Macmillan.
[22] M. A. Fairhurst. 1981. Satisfactory Explanations in the Primary School. Journal of Philosophy of Education, 15, 2, 205–213.
[23] E. C. Wragg and G. Brown. 1993. Explaining. Routledge Publishers.
[24] Brown and Atkins. 1997. Explaining. The Handbook of Communication Skills. Routledge Publishers, 199–229.
[25] Brown and Armstrong. 1984. Explaining and Explanations. Classroom Teaching Skills. Nichols Publishing Company, 121–148.
[26] Nahum D. Gershon and Ward Page. 2001. What Storytelling Can Do for Information Visualization. Commun. ACM, 44, 8, 31–37.
[27] Denis J. Hilton. 1990. Conversational Processes and Causal Explanation. Psychological Bulletin, 107, 1, 65–81. doi: 10.1037/0033-2909.107.1.65.
[28] Herbert Paul Grice. 1967. Logic and Conversation. In Studies in the Way of Words. Paul Grice, editor. Harvard University Press, 41–58.
[29] Xiuyi Fan and Francesca Toni. 2015. On Explanations for Non-Acceptable Arguments. In Theory and Applications of Formal Argumentation. Elizabeth Black, Sanjay Modgil, and Nir Oren, editors. Springer International Publishing, Cham, 112–127. ISBN: 978-3-319-28460-6.
[30] Zhiwei Zeng, Xiuyi Fan, Chunyan Miao, Cyril Leung, Chin Jing Jih, and Ong Yew Soon. 2018. Context-based and Explainable Decision Making with Argumentation. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18). International Foundation for Autonomous Agents and Multiagent Systems, Stockholm, Sweden, 1114–1122. http://dl.acm.org/citation.cfm?id=3237383.3237862.
[31] Douglas Walton and Fabrizio Macagno. 2015. A Classification System for Argumentation Schemes. Argument and Computation, 6, 3, 219–245.
[32] Wikipedia contributors. 2019. Software Design Pattern — Wikipedia, The Free Encyclopedia. [Online; accessed 7-February-2019]. https://en.wikipedia.org/w/index.php?title=Software_design_pattern&oldid=879797369.
[33] Wikipedia contributors. 2019. High-level Programming Language — Wikipedia, The Free Encyclopedia. [Online; accessed 7-February-2019]. https://en.wikipedia.org/w/index.php?title=High-level_programming_language&oldid=879754477.
[34] Ben Shneiderman. 1996. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings 1996 IEEE Symposium on Visual Languages. IEEE, Boulder, CO, USA, (September 1996), 336–343. doi: 10.1109/VL.1996.545307.
[35] Alyssa Glass, Deborah Mcguinness, and Michael Wolverton. 2008. Toward Establishing Trust in Adaptive Agents. In Proc. of the International Conference on Intelligent User Interfaces (IUI '08), (January 2008), 227–236. doi: 10.1145/1378773.1378804.
[36] Tania Lombrozo. 2007. Simplicity and Probability in Causal Explanation. Cognitive Psychology, 55, 232–257.
[37] Yu-kai Chou. 2015. Actionable Gamification: Beyond Points, Badges, and Leaderboards. Octalysis Group, Fremont, CA.
[38] Anne Bowser, Derek Hansen, and Jennifer Preece. 2013. Gamifying Citizen Science: Lessons and Future Directions. In Workshop on Designing Gamification: Creating Gameful and Playful Experiences.