Towards XAI: Structuring the Processes of Explanations
Mennatallah El-Assady, Wolfgang Jentner, Rebecca Kehlbeck, Udo Schlegel, Rita Sevastjanova, Fabian Sperrle, Thilo Spinner, Daniel Keim
University of Konstanz
Konstanz, Germany
ABSTRACT
Explainable Artificial Intelligence describes a process to reveal the logical propagation of operations that transform a given input to a certain output. In this paper, we investigate the design space of explanation processes based on factors gathered from six research areas, namely, Pedagogy, Storytelling, Argumentation, Programming, Trust-Building, and Gamification. We contribute a conceptual model describing the building blocks of explanation processes, including a comprehensive overview of explanation and verification phases, pathways, mediums, and strategies. We further argue for the importance of studying effective methods of explainable machine learning, and discuss open research challenges and opportunities.
Figure 1: The proposed explanation process model. On the highest level, each explanation consists of different phases that structure the whole process into defined elements. Each phase contains explanation blocks, i.e., self-contained units to explain one phenomenon based on a selected strategy and medium. At the end of each explanation phase, an optional verification block ensures the understanding of the explained aspects. Lastly, to transition between phases and building blocks, different pathways are utilized.
INTRODUCTION
Sparked by recent advances in machine learning, lawmakers are reacting to the increasing dependence on automated decision-making with protective regulations, such as the General Data Protection Regulation of the European Union. These laws prescribe that decisions based on fully automated algorithms need to provide clear-cut reasonings and justifications for affected people. Hence, to address this demand, the field of Explainable Artificial Intelligence (XAI) accelerated, combining expertise from different backgrounds in computer science and other related fields to tackle the challenges of providing logical and trustworthy decision reasonings.
The act of making something explainable entails a process that reveals and describes an underlying phenomenon. This has been the subject of study and research in different fields over the centuries. Therefore, to establish a solid foundation for explainable artificial intelligence, we need a structured approach based on insights as well as well-studied practices on explanation processes. Structuring these methodologies and adapting them to the novel challenges facing AI research is essential for advancing effective XAI.
In this paper, we contribute a conceptual framework for effective explanation processes based on the analysis of strategies and best practices in six different research fields. We postulate that accelerating the maturation of XAI and ensuring its effectiveness has to rely on the study of relevant research domains that established well-studied processes to communicate information and knowledge. Consequently, learning and transferring these processes will bootstrap XAI and advance the development of tailored methodologies based on challenges unique to this young field.

Position Statement: Multiple research domains established well-studied processes to communicate information and knowledge. Learning and transferring these processes will enable us to build effective methodological foundations for XAI.
Background
To place our model in context, we studied and analyzed related work from six different research areas: Pedagogy, Storytelling, Argumentation, Programming, Trust-Building, and Gamification. Due to space constraints, this section compactly describes the main ideas gathered from our analysis of the related work and is based on up to three of the most relevant research articles for each of the six fields. A more complete overview of other works is provided in the appendix of this paper.
Pedagogy. Proper methods of Pedagogy develop insight and understanding of how to do it [1]. However, good education involves many different strategies, such as induction and deduction [1, 2], methods, and mediums. Some methods, for instance, are explicit explanations using examples [1], group work and discussion [3], and students explaining concepts to other students [3].
Storytelling in combination with data visualization is often practiced to explain complex phenomena in data and provide background knowledge [4]. Various strategies, e.g., martini glass structure, interactive slideshow, and drill-down story, exist to structure the narratives [5].
Dialog and Argumentation.
In the humanities, many models for dialog and argumentation exist.
Fan and Toni argue that “argumentation can be seen as the process of generating explanations” [6]
and propose a theoretic approach [6]. Miller [7] provides an extensive survey of insights from the
social sciences that can be transferred to XAI. Madumal et al. [8] propose a dialog model for explaining
artificial intelligence.
Methodology: Based on the analysis of the six research areas, we derived a number of different effective explanation strategies and best practices. These are grouped and categorized (over three iterations) into different elements, which are then put together to build the proposed model for explanation processes in XAI. In every step, the integrity of the model is cross-referenced with the original strategies (extracted from the research areas) to ensure their compatibility. The resulting conceptual model is described in the next section.
Programming. Programming languages are inherently structural, and software should be self-explanatory. A popular and widely accepted way to achieve this is the use of design patterns [9] and other concepts [10]. Design patterns imply abstraction, which is crucial for XAI.
Trust Building. There exists no active trust-building scheme for AI. Instead, trust relies on explanation and transparency of the system [11]. Miller et al. and Siau and Wang argue that the system has to continuously clear doubts over time to increase the user's trust, and that factors such as reliability and false alarm rate are essential for AI systems [12, 13].
Gamification is an integration of game elements and game thinking in non-gaming systems or activities with the goal to motivate the users and foster their engagement [14, p. 10]. To support the defined goal, the interaction of the system needs to be adapted "to a given user with game-like targeted communication" [15]. Such systems are usually designed to have several levels or modes with increasing complexity [14, p. 11]. It is important, though, that the user can specify which task to tackle next [16].
MODELLING OF EXPLANATION PROCESSES IN XAI
Figure 2: Fruit classification example: the first phase starts with a module that explains a decision tree classifier using a video. This transitions to two alternative explanation blocks, where the left one uses visualizations of the model and the right block demonstrates the features using verbalization. The following verification block ensures that the previously learned material is understood. The second phase includes another two explanation blocks to diagnose the model. The left block follows the visualization mantra of phase one. The right block uses a visualization to depict the data with its features.
We define an explanation process as a sequence of phases that, in turn, consist of explanation and verification blocks. Each of these building blocks uses a medium and a strategy for explanation or verification, respectively. The connection between these blocks is defined through pathways. A schematic representation of our model is depicted in Figure 1.
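To make the model concrete, the following minimal Python sketch encodes phases, explanation and verification blocks, mediums, strategies, and pathways as plain data structures. All class and field names are hypothetical choices for illustration; the paper does not prescribe any implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical encoding of the model's vocabulary; the strings are free-form here.
Medium = str    # e.g., "visualization", "verbalization", "video"
Strategy = str  # e.g., "simplification", "overview first, details on demand"

@dataclass
class ExplanationBlock:
    """Self-contained unit explaining one phenomenon with a strategy and a medium."""
    phenomenon: str
    strategy: Strategy
    medium: Medium

@dataclass
class VerificationBlock:
    """Optional block at the end of a phase that checks the user's understanding."""
    strategy: Strategy            # e.g., "flipped classroom", "reproduction", "transfer"
    medium: Medium
    check: Callable[[str], bool]  # returns True if the user's answer is acceptable

@dataclass
class Phase:
    """A phase groups explanation blocks and ends with an optional verification block."""
    name: str
    explanations: List[ExplanationBlock] = field(default_factory=list)
    verification: Optional[VerificationBlock] = None

@dataclass
class ExplanationProcess:
    """Phases connected by pathways (linear/iterative, guided/serendipitous)."""
    phases: List[Phase]
    pathway_style: str = "linear-guided"
```

An explanation process for a concrete model is then simply an instance of ExplanationProcess, as illustrated by the fruit example below.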
In this example, the explanation process is comprised of two phases (for AI understanding and diagnosis, respectively). The first phase consists of three explanation blocks, followed by a verification block. A linear pathway connects all building blocks in the process. However, some explanation blocks are positioned on alternative paths. For example, users start the explanation in Phase 1 watching a video that uses a simplification strategy for explanation; then they can choose between a visualization or a verbalization component as a second step. After choosing, for example, the verbal explanation by abnormality block, they can transition to a verification block, which uses a flipped classroom strategy and verbalization as a medium.
Our explanation process model is instantiated in Figure 2 as a simplistic example of explaining the classification of fruits (using an analogous structure to the abstract process of Figure 1).
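Assuming the hypothetical data structures sketched above, the two-phase fruit example of Figure 2 could be instantiated roughly as follows; the block contents, the chosen strategies, and the toy verification check are illustrative only.

```python
# Illustrative instantiation of the fruit-classification example (Figure 2),
# reusing the hypothetical classes from the previous sketch.
understanding = Phase(
    name="Phase 1: Understanding",
    explanations=[
        ExplanationBlock("decision tree classifier", strategy="simplification", medium="video"),
        # Two alternative blocks; a pathway lets the user pick one of them.
        ExplanationBlock("model structure", strategy="overview first, details on demand", medium="visualization"),
        ExplanationBlock("feature importance", strategy="explanation by abnormality", medium="verbalization"),
    ],
    verification=VerificationBlock(
        strategy="flipped classroom",
        medium="verbalization",
        check=lambda answer: "sweetness" in answer.lower(),  # toy acceptance check
    ),
)

diagnosis = Phase(
    name="Phase 2: Diagnosis",
    explanations=[
        ExplanationBlock("model errors", strategy="overview first, details on demand", medium="visualization"),
        ExplanationBlock("data and features", strategy="explanation by example", medium="visualization"),
    ],
)

fruit_process = ExplanationProcess(phases=[understanding, diagnosis])
```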
In the following, all elements of our model are discussed in more detail.
Pathways. An explanation process is comprised of different modules, i.e., phases that contain explanation and verification blocks. To connect these modules into a global construct, we define transitions, the so-called pathways. These can be linear or iterative, allowing building blocks in the process to be visited once or multiple times. Additionally, the navigation defined by them can be guided or serendipitous, enabling a strict framing or open exploration.
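As a sketch of how pathway types could drive navigation (again hypothetical code, not part of the paper), a guided pathway visits blocks in a fixed order, a serendipitous one delegates the choice to the user, and an iterative one permits revisiting blocks.

```python
from typing import Callable, Iterator, List, Optional

def traverse(blocks: List[ExplanationBlock],
             guided: bool = True,
             iterative: bool = False,
             choose: Optional[Callable[[List[ExplanationBlock]], ExplanationBlock]] = None
             ) -> Iterator[ExplanationBlock]:
    """Yield explanation blocks according to a (hypothetical) pathway configuration."""
    remaining = list(blocks)
    visited: List[ExplanationBlock] = []
    while remaining:
        if guided or choose is None:
            block = remaining.pop(0)   # guided: strict, predefined order
        else:
            block = choose(remaining)  # serendipitous: the user picks freely
            remaining.remove(block)
        visited.append(block)
        yield block
    if iterative:
        yield from visited             # iterative: blocks may be revisited
```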
Mediums. Lipton [17] states that common approaches to describe how a model behaves, and why, usually include verbal (natural language) explanations, visualizations, or explanations by example. In current explainable AI systems, visualization is the most frequently applied medium. However, Sevastjanova et al. [18] argue for a combination of visualization and verbalization techniques, which can lead to deeper insights and a better understanding of the ML model. For instance, the user could engage with an agent through a dialog system, interacting with a visualization and stating questions in natural language in order to understand the decisions made by the model. In storytelling, a combination of text and visual elements is used in diverse formats to communicate about the data effectively. Comics, illustrated texts, and infographics are three widely applied formats, which differ in the level of user guidance and in the way text and visual elements are aligned [19]. In addition to the previously mentioned mediums, one might employ multimedia (e.g., video, audio, images, video games) to either engage the user in exploring the ML model in more detail (e.g., if the explanation is an integral part of a video game), or to provide explanations from another perspective.
The building blocks of our model can be summarized as follows.
Pathways: linear vs. iterative; guided vs. serendipitous.
Mediums: visualizations, verbalizations, infographics, illustrated text, comics, videos, audio, images, video games, dialog systems.
Explanation strategies: inductive (bottom-up), e.g., simplification, metaphorical narrative, divide and conquer, explanation by example, dynamic programming, depth first vs. breadth first, describe and define, teaching by categories; deductive (top-down), e.g., transfer learning, teaching by association, overview first, details on demand, drill-down story, define and describe; contrastive (comparison), e.g., opposite and similar, example by abnormality.
Verification strategies: flipped classroom, reproduction, transfer.

Explanation Strategies. In logic and philosophy, two opposing strategies for reasoning are often named [2]: inductive and deductive reasoning. The first strategy, inductive reasoning, is defined by Aristotle as "the conclusion process for a general knowledge out of observed events" [2]. The second strategy, deductive reasoning, builds the opposite and is defined as "the conclusion process from given premises to a logical closure" [2]. Such basic strategies can be found throughout the literature in different fields, i.e., inductive (bottom-up) explanations vs. deductive (top-down) approaches. Inductive strategies first explain smaller and observable details, followed by complex relations. Hence, the explanation of the details should facilitate the understanding of the general and abstract concept. Examples of inductive strategies include simplification, explanation by example, or metaphorical narratives. Deductive strategies start with the whole picture (general idea) as an overview; then more details get added and explained to show a more complete view. Examples of deductive strategies include overview first, details on demand, or transfer learning.
In addition to these two groups, we identified another useful explanation method based on comparative analysis, so-called contrastive explanation. Such strategies rely on putting two phenomena side by side in a comparison and showing off their contrast. The explanation could then be performed using induction or deduction. One noteworthy example for this category is the strategy "explaining by abnormality", where the unusual manifestations of a phenomenon are shown to contrast the "normal" state and prevent misconceptions.
Lastly, it is worth noting that the overall structure of the phases in an explanation process can be designed based on guidelines derived from explanation strategies (optionally increasing the complexity of the process to more intricate or recursive explanations).
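To illustrate the difference, a hypothetical helper could order the same explanation units bottom-up (concrete details before the general concept) or top-down (overview before details), given an assumed abstraction level per unit; the unit names and levels below are invented for illustration.

```python
def order_blocks(units, strategy="inductive"):
    """Order explanation units by abstraction level (higher = more general).

    Inductive (bottom-up): concrete details first, general concept last.
    Deductive (top-down):  overview first, details on demand.
    """
    reverse = (strategy == "deductive")
    return sorted(units, key=lambda u: u["level"], reverse=reverse)

units = [
    {"topic": "overall model behavior", "level": 3},
    {"topic": "decision rules of one subtree", "level": 2},
    {"topic": "single classified example", "level": 1},
]
inductive_order = order_blocks(units, "inductive")  # example -> subtree -> overall
deductive_order = order_blocks(units, "deductive")  # overall -> subtree -> example
```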
Verification Strategies. To ensure that users have gained an encompassing and sound understanding of the underlying subject matter, explanation processes need to include verification strategies. We propose optional verification blocks at the end of each phase to establish a stable common ground as a conclusion for that phase, before allowing users to advance to the next one (typically increasing the complexity). In contrast to explanation strategies, verification strategies usually require users to demonstrate the learned phenomena. They include strategies that are based on questions for reproducing or transferring knowledge, as well as "flipping the classroom", i.e., having users explain the learned concepts to the system.
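A verification block could, for instance, act as a simple gate before the next phase, as in the following sketch that reuses the hypothetical classes from above; the reproduction-style question and the acceptance check are placeholders.

```python
def run_phase(phase, ask=input, tell=print, max_attempts=2):
    """Present a phase's explanation blocks, then gate on its verification block."""
    for block in phase.explanations:
        tell(f"[{block.medium}] Explaining '{block.phenomenon}' via {block.strategy}.")
    if phase.verification is None:
        return True  # no verification block: advance directly
    for _ in range(max_attempts):
        answer = ask("Please explain the concept back in your own words: ")
        if phase.verification.check(answer):
            return True   # common ground established, advance to the next phase
    return False          # understanding not verified, revisit the explanation blocks
```

For the fruit example above, run_phase(understanding) would present the three explanation blocks and only return True once the flipped-classroom check accepts the user's answer.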
DISCUSSION: BEST PRACTICES, GUIDELINES, AND RESEARCH OPPORTUNITIES
Several considerations have to be made to select and structure the presented strategies. The decisions should be based mainly on (i) the targeted level of detail, (ii) the target audience, and (iii) the desired level of interactivity of the target audience. The level of detail considerably impacts the choice of the strategies and their respective structure and sequence. The spectrum ranges from answering the question of what the respective machine learning model(s) are achieving to how the model(s) work in detail. To answer the former, a possible consideration could be the use of metaphorical narratives [20], while the latter needs more accurate and precise descriptions conveyed through mathematical notations and pseudocode. Two things have to be considered regarding the target audience. Their size and composition affect the use of mediums and the level of interactivity, while their background knowledge on the subject matter reduces the required distance to reach the desired level of detail. The interactivity may vary during the phases. It is beneficial to increase the level of interactivity in verification phases to receive more, and more profound, feedback.
It is possible to engage users through gamification to raise their motivational support [14, p. 10] while feedback is continuously exchanged between the explainer and the user [15]. The feedback aspect can be well exploited in verification blocks, whereas the motivational support may drive the user to explore multiple pathways of the explanation process, as well as to explore the machine learning model itself in more detail. Tracking and displaying the progress serves as an extrinsic motivation [14, p. 52], allowing the user to better navigate the various pathways.

Research Opportunities
- Implement, test, and verify different explanation strategies. Is the knowledge from other domains transferable to XAI explanation processes?
- Identify the most suitable processes for different settings, tasks, and AI models.
- Extend strategies to tailor them to XAI.
- Evaluate the proposed explanation model, e.g., through user studies and testing of alternative models.
- Make XAI processes reactive to the users' interaction through automatic pathway generation, e.g., through active learning.
- Tailor the explanation strategies: which strategy works best in which environment, for specific target groups, and for various levels of AI complexity?
- Design Visual Analytics systems that integrate the users' interactions into a mixed-initiative model-refinement cycle.

Take Home Messages
- Studying and combining explanation processes is critical to establish effective XAI methodologies and mature this research field.
- Best practices and tailored explanation processes can streamline XAI and account for different circumstances, such as task complexity, data characteristics, model type, and user expertise.
- Given clear problem specifications, as well as well-studied and detailed guidelines, we can progress toward automatically generating XAI processes as design templates for successful explanations and model refinements.
CONCLUSIONS
Valuable strategies can be extracted and abstracted from varying research areas. These strategies serve as an important and well-researched baseline to bootstrap the process of explainable machine learning. Our proposed model classifies these strategies and combines them as building blocks to actualize an explanation process for machine learning, while keeping the flexibility of using different mediums and transitioning paths. The list of collated strategies is not exhaustive, yet the proposed model allows many variations and extensions, which provides space for further research opportunities. Additionally, existing XAI approaches can be analyzed and deconstructed into their building blocks to validate whether our proposed model can be adopted. Successful explanation processes can then be compared and analyzed regarding common patterns.
REFERENCES
[1] Odora Ronald James. 2014. Using Explanation as a Teaching Method: How Prepared are High School Technology Teachers in Free State Province, South Africa. Journal of Social Sciences, 38:71–81.
[2] Robert J. Sternberg and Karin Sternberg. 2016. Cognitive Psychology. Nelson Education.
[3] Richard Gunstone, editor. 2015. Explaining as a Teaching Strategy. Encyclopedia of Science Education. Springer Netherlands, Dordrecht, 423–425.
[4] Robert Kosara and Jock D. Mackinlay. 2013. Storytelling: The Next Step for Visualization. IEEE Computer, 46, 5, 44–50.
[5] Edward Segel and Jeffrey Heer. 2010. Narrative Visualization: Telling Stories with Data. IEEE Trans. Vis. Comput. Graph., 16, 6, 1139–1148.
[6] Xiuyi Fan and Francesca Toni. 2014. On Computing Explanations in Abstract Argumentation. In Proc. of the ECAI. IOS Press, 1005–1006.
[7] Tim Miller. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1–38.
[8] Prashan Madumal, Tim Miller, Frank Vetere, and Liz Sonenberg. 2018. Towards a Grounded Dialog Model for Explainable Artificial Intelligence. In Workshop on SCS.
[9] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns: Elements of Reusable Object-oriented Software. Addison-Wesley Longman Publishing.
[10] Mark Dominus. 2006. Design Patterns of 1972 — The Universe of Discourse. [Online; accessed 7-February-2019]. (2006). https://blog.plover.com/prog/design-patterns.html.
[11] Wolter Pieters. 2011. Explanation and Trust: What to Tell the User in Security and AI? Ethics and Information Technology, 13, 1, 53–64.
[12] Keng Siau and Weiyu Wang. 2018. Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31, 47–53.
[13] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. CoRR, abs/1712.00547.
[14] Karl M. Kapp. 2012. The Gamification of Learning and Instruction: Game-based Methods and Strategies for Training and Education. (1st edition). Pfeiffer & Company.
[15] Cathie Marache-Francisco and Eric Brangier. 2013. Process of Gamification. Proceedings of the 6th CENTRIC, 126–131.
[16] Jakub Swacha and Karolina Muszynska. 2016. Design Patterns for Gamification of Work. In Proc. of TEEM.
[17] Zachary Chase Lipton. 2016. The Mythos of Model Interpretability. CoRR, abs/1606.03490.
[18] Rita Sevastjanova, Fabian Beck, Basil Ell, Cagatay Turkay, Rafael Henkin, Miriam Butt, Daniel A. Keim, and Mennatallah El-Assady. 2018. Going beyond Visualization: Verbalization as Complementary Medium to Explain Machine Learning Models. In Proc. of VISxAI Workshop, IEEE VIS.
[19] Zezhong Wang, Shunming Wang, Matteo Farinella, Dave Murray-Rust, Nathalie Henry Riche, and Benjamin Bach. 2019. Comparing Effectiveness and Engagement of Data Comics and Infographics. In Proc. of ACM CHI.
[20] Wolfgang Jentner, Rita Sevastjanova, Florian Stoffel, Daniel A. Keim, Jürgen Bernard, and Mennatallah El-Assady. 2018. Minions, Sheep, and Fruits: Metaphorical Narratives to Explain Artificial Intelligence and Build Trust. In Proc. of VISxAI Workshop, IEEE VIS.
APPENDIX: RELATED WORK
Complete overview of surveyed fields, setting the most relevant related work in context.
Pedagogy
"Good teaching is good explanation." [21] Proper methods of Pedagogy develop insight and understanding of how to do it [1]. However, good education involves many different strategies, like induction and deduction [1, 2], methods, and mediums. Some methods, for instance, are explicit explanations using examples [1], group work and discussion [3], and students explaining concepts to other students [3]. These methods and strategies include the logical and philosophical base with induction and deduction [1, 2]. Further logical operations can be incorporated to extend these methods and strategies, namely comparison, analysis, synthesis, and analogy [1]. In this context, three different parts of an explanation exist: something that is to be explained, an explainer, i.e., the one who explains, and the explainee, who receives the explanation [22]. If the explainer wants to provide a good explanation to the explainee, the explanation has to be clearly structured and interesting to the explainer [23]. Good explanations can invoke understanding. However, bad explanations may lead to a confused and bored explainee and explainer [23]. Brown and Atkins [24] describe three types of explanations: descriptive, interpretive, and reason-giving. A descriptive explanation can be defined as describe and define and explains processes and procedures [24]. An interpretive explanation specifies the central meaning of a term and can be seen as define and describe. And last, a reason-giving explanation shows reasons based on generalizations and can be interpreted as teaching by categories. There are more proposed strategies and methods, e.g., Wragg [23] or Brown and Armstrong [25], but in general, they can be summarized with the explanation strategies and methods above.
Storytelling
Storytelling has been used for millennia in human history to communicate information, transfer knowledge, and entertain [5]. Outlining the complete field is almost impossible, as storytelling is as diverse as humanity. However, commonalities appear when looking at this field at a more abstract level. We explicitly focus on works of storytelling in combination with data visualization, as this is often practiced to explain complex phenomena in data and provide background [4]. Machine learning follows this goal; however, it is not sufficient to understand the phenomena in the data. A user must also learn about the reason how and why these phenomena appear in order to verify and validate them. This further affects the trust-building process positively. Various strategies exist to structure narratives uniting data visualization [5], and best practices have been extracted and summarized to improve storytelling for visualizations [26]. We transfer and provide a taxonomy for these strategies that we deem useful to explain machine learning.
Dialog and Argumentation
In the humanities, many models for dialog and argumentation exist, with Hilton stating that "causal explanation takes the form of conversation" [27]. This conversation involves both cognitive and social processes [8] and is "first and foremost a form of social interaction" [7]. According to Grice, utterances in this conversation should follow the four maxims of quantity, quality, relation (relevance), and manner [28]. Miller notes that "questions could be asked by interacting with a visual object, and answers could similarly be provided in a visual way" [7]. Many of these principles have been applied to explainable AI, as surveyed by Miller [7]. Fan and Toni argue that "argumentation can be seen as the process of generating explanations" [6] and propose two theoretic approaches [6, 29]. Madumal et al. propose a dialog model for explaining artificial intelligence [8], and Zeng et al. introduced an argumentation-based approach for context-based explainable decisions [30]. Here, the "schemes for practical reasoning" and "schemes for applying rules to cases" from Walton and Macagno's classification system for argumentation schemes [31] seem particularly interesting.
Programming
Typically, during programming, common software design patterns and best practices are followed. While the main goal of such patterns is to provide "general, reusable solution[s] to [...] commonly occurring problem[s]" [32], they often act as a self-explanation strategy for complex software systems. For the programmer, software design patterns improve readability and traceability and help in building up a mental model of the system. Many software design patterns can be classified using the categories introduced in the section Modelling of Explanation Processes in XAI. The program flow is the pathway of code: it can be linear (block), iterative (loop), or part of itself (recursion). Algorithms can follow a top-down or bottom-up approach. The main strategy followed in software design patterns is abstraction. Abstraction is the core concept of modern high-level programming languages [33] and is closely related to Shneiderman's mantra "overview first, [...] details on demand" [34]. While, again, the strategy in the first instance has a practical use, it also takes an explanatory role for the programmer, who can understand the full program on a higher level and, if needed, go deeper to view the details. Abstraction does not only occur as a concept of language design, but also in many discrete programming patterns. This ranges from the simple concept of subroutines [10] up to many of the design patterns for object-oriented programming proposed by Gamma et al. (GoF) [9], e.g., facade or iterator.
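As a toy illustration of abstraction acting as self-explanation, the following sketch shows a minimal facade in the spirit of Gamma et al. [9]; all class names and the trivial logic are invented for this example. The facade provides the overview, while the subsystems hold the details on demand.

```python
class FeatureExtractor:              # detail: low-level preprocessing
    def extract(self, sample):
        return [len(str(sample))]    # placeholder feature

class TreeModel:                     # detail: the actual (toy) classifier
    def predict(self, features):
        return "apple" if features[0] % 2 == 0 else "lemon"

class FruitClassifierFacade:
    """Facade: a one-call overview of the pipeline; details live in the subsystems."""
    def __init__(self):
        self._extractor = FeatureExtractor()
        self._model = TreeModel()

    def classify(self, sample):
        # Overview first: a single call; details on demand: inspect the subsystems.
        return self._model.predict(self._extractor.extract(sample))

print(FruitClassifierFacade().classify("banana"))  # prints a toy class label
```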
Trust Building
Trust in machine learning systems is highly dependent on the system itself and how it can be explained. Glass, Mcguinness, and Wolverton find that "trust depends on the granularity of explanations and transparency of the system" [11, 35]. As trust building is usually done with humans, many natural trust mechanisms rely on a person's presence. However, these are not available in AI systems, and therefore the explanation and transparency of the system become the most important factors that influence the user's trust. Pieters argues that there is a difference between having trust in a system, where the user completely understands the decisions of the system itself and can therefore make active decisions about the result, and confidence in a system, where the user does not need to know the inner workings in order to use its results [11]. Miller, Howe, and Sonenberg and Siau and Wang argue that trust is dynamic and built up in a gradual manner. Furthermore, there is a differentiation between initial trust that the user has obtained through external factors, e.g., cultural aspects, and the trust that they build while using the system. The system should continuously clear potential doubts over time by providing additional user-driven information. Factors such as reliability, validity, robustness, and false alarm rate influence how the user develops trust in the system and should play an integral role when designing the system [12]. Lombrozo shows that people disproportionately prefer simpler explanations over more likely explanations [36]. Therefore, explanations should aim to carry only the appropriate amount of information. Furthermore, people prefer contrastive explanations at certain parts of the system, because otherwise the cognitive burden of a complete explanation is too great [13].
Gamification
Gamification is an integration of game elements and game thinking in non-gaming systems or activities [14, p. 10]. It aims at motivating users [37, p. 4] to foster their engagement [14, p. 10]. Gamification uses several concepts to achieve this goal. Usually, a user is asked to accomplish tasks to earn points; these points are accumulated, and based on the achieved result the user may receive rewards. To support the defined goal, the interaction needs to be adapted "to a given user with game-like targeted communication" [15]. Thus, to increase engagement, games are usually designed to have several levels or modes with increasing complexity [14, p. 10]. It is important, though, that the user can specify which task to tackle next and which pathway to take to achieve the goal [16]. In order to make these systems more attractive, game elements are designed to generate positive emotions. Usually, this is done by applying a specific vocabulary (e.g., simplification) or narrations [15]. According to Bowser et al. [38], different user groups prefer different types of interfaces. It is thus important to adapt the system to the specific user profile by showing only the elements relevant for the user's particular task [15].
REFERENCES
[1] Odora Ronald James. 2014. Using Explanation as a Teaching Method: How Prepared are High School Technology Teachers in Free State Province, South Africa. Journal of Social Sciences, 38:71–81.
[2] Robert J. Sternberg and Karin Sternberg. 2016. Cognitive Psychology. Nelson Education.
[3] Richard Gunstone, editor. 2015. Explaining as a Teaching Strategy. Encyclopedia of Science Education. Springer Netherlands, Dordrecht, 423–425.
[4] Robert Kosara and Jock D. Mackinlay. 2013. Storytelling: The Next Step for Visualization. IEEE Computer, 46, 5, 44–50.
[5] Edward Segel and Jeffrey Heer. 2010. Narrative Visualization: Telling Stories with Data. IEEE Trans. Vis. Comput. Graph., 16, 6, 1139–1148.
[6] Xiuyi Fan and Francesca Toni. 2014. On Computing Explanations in Abstract Argumentation. In Proc. of the ECAI. IOS Press, 1005–1006.
[7] Tim Miller. 2019. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1–38.
[8] Prashan Madumal, Tim Miller, Frank Vetere, and Liz Sonenberg. 2018. Towards a Grounded Dialog Model for Explainable Artificial Intelligence. In Workshop on SCS.
[9] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns: Elements of Reusable Object-oriented Software. Addison-Wesley Longman Publishing.
[10] Mark Dominus. 2006. Design Patterns of 1972 — The Universe of Discourse. [Online; accessed 7-February-2019]. (2006). https://blog.plover.com/prog/design-patterns.html.
[11] Wolter Pieters. 2011. Explanation and Trust: What to Tell the User in Security and AI? Ethics and Information Technology, 13, 1, 53–64.
[12] Keng Siau and Weiyu Wang. 2018. Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31, 47–53.
[13] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. CoRR, abs/1712.00547.
[14] Karl M. Kapp. 2012. The Gamification of Learning and Instruction: Game-based Methods and Strategies for Training and Education. (1st edition). Pfeiffer & Company.
[15] Cathie Marache-Francisco and Eric Brangier. 2013. Process of Gamification. Proceedings of the 6th CENTRIC, 126–131.
[16] Jakub Swacha and Karolina Muszynska. 2016. Design Patterns for Gamification of Work. In Proc. of TEEM.
[17] Zachary Chase Lipton. 2016. The Mythos of Model Interpretability. CoRR, abs/1606.03490.
[18] Rita Sevastjanova, Fabian Beck, Basil Ell, Cagatay Turkay, Rafael Henkin, Miriam Butt, Daniel A. Keim, and Mennatallah El-Assady. 2018. Going beyond Visualization: Verbalization as Complementary Medium to Explain Machine Learning Models. In Proc. of VISxAI Workshop, IEEE VIS.
[19] Zezhong Wang, Shunming Wang, Matteo Farinella, Dave Murray-Rust, Nathalie Henry Riche, and Benjamin Bach. 2019. Comparing Effectiveness and Engagement of Data Comics and Infographics. In Proc. of ACM CHI.
[20] Wolfgang Jentner, Rita Sevastjanova, Florian Stoffel, Daniel A. Keim, Jürgen Bernard, and Mennatallah El-Assady. 2018. Minions, Sheep, and Fruits: Metaphorical Narratives to Explain Artificial Intelligence and Build Trust. In Proc. of VISxAI Workshop, IEEE VIS.
[21] Robert C. Calfee. 1986. Handbook of Research on Teaching. Macmillan.
[22] Fairhurst MA. 1981. Satisfactory explanations in the primary school. Journal of Philosophy of Education, 15, 2, 205–213.
[23] Wragg EC and Brown G. 1993. Explaining. Routledge Publishers.
[24] 1997. Explaining. The Handbook of Communication Skills. Routledge Publishers, 199–229.
[25] 1984. Explaining and explanations. Classroom Teaching Skills. Nichols Publishing Company, 121–148.
[26] Nahum D. Gershon and Ward Page. 2001. What Storytelling can do for Information Visualization. Commun. ACM, 44, 8, 31–37.
[27] Denis J. Hilton. 1990. Conversational processes and causal explanation. Psychological Bulletin, 107, 1, 65–81. doi: 10.1037/0033-2909.107.1.65.
[28] Herbert Paul Grice. 1967. Logic and Conversation. In Studies in the Way of Words. Paul Grice, editor. Harvard University Press, 41–58.
[29] Xiuyi Fan and Francesca Toni. 2015. On explanations for non-acceptable arguments. In Theory and Applications of Formal Argumentation. Elizabeth Black, Sanjay Modgil, and Nir Oren, editors. Springer International Publishing, Cham, 112–127. isbn: 978-3-319-28460-6.
[30] Zhiwei Zeng, Xiuyi Fan, Chunyan Miao, Cyril Leung, Chin Jing Jih, and Ong Yew Soon. 2018. Context-based and Explainable Decision Making with Argumentation. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18). International Foundation for Autonomous Agents and Multiagent Systems, Stockholm, Sweden, 1114–1122. http://dl.acm.org/citation.cfm?id=3237383.3237862.
[31] Douglas Walton and Fabrizio Macagno. 2015. A Classification System for Argumentation Schemes. Argument and Computation, 6, 3, 219–245.
[32] Wikipedia contributors. 2019. Software design pattern — Wikipedia, The Free Encyclopedia. [Online; accessed 7-February-2019]. (2019). https://en.wikipedia.org/w/index.php?title=Software_design_pattern&oldid=879797369.
[33] Wikipedia contributors. 2019. High-level Programming Language — Wikipedia, The Free Encyclopedia. [Online; accessed 7-February-2019]. (2019). https://en.wikipedia.org/w/index.php?title=High-level_programming_language&oldid=879754477.
[34] Ben Shneiderman. 1996. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings 1996 IEEE Symposium on Visual Languages. IEEE, Boulder, CO, USA, (September 1996), 336–343. doi: 10.1109/VL.1996.545307.
[35] Alyssa Glass, Deborah Mcguinness, and Michael Wolverton. 2008. Toward Establishing Trust in Adaptive Agents. In (January 2008), 227–236. doi: 10.1145/1378773.1378804.
[36] Tania Lombrozo. 2007. Simplicity and probability in causal explanation. Cognitive Psychology, 55, 232–257.
[37] Yu-kai Chou. 2015. Actionable Gamification: Beyond Points, Badges, and Leaderboards. Octalysis Group, Fremont, CA.
[38] Anne Bowser, Derek Hansen, and Jennifer Preece. 2013. Gamifying Citizen Science: Lessons and Future Directions. In Workshop on Designing Gamification: Creating Gameful and Playful Experiences.
... Next, the transformation projection stage applies various projection techniques on the raw time series, the Fourier-transformed data, and the attributions to reduce the dimensionality to two. After the automatic phase, the explanation phase incorporates the user into the explanation process [50]. In the first global exploration, the previously calculated results visualize an overview of the data, the transformations, and the attributions. ...
... However, what happens if we try something similar to a more advanced model. We add another Conv1D layer and increase the filters for each layer to [10,50,100,150] to improve the accuracy score to 92,82%. On the right in Figure 6, we can see the change from the first to the second line chart. ...
Preprint
Full-text available
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals on time series classification. In particular, we adapt local XAI techniques (attributions) that are developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1)~explore data transformations and feature relevance, (2)~identify model behavior and decision boundaries, as well as, (3)~the reason for misclassifications.
... Dialogical approaches to XAI follow diverse strategies. In [24] suggest a multi-stage dialog approach (cf. Fig. 3) that starts with a general explanation phase of the model and the domain ("Phase 1: Understanding") which is followed by a Verification Phase which needs to be mastered in order to proceed to the explanation of the decision at hand ("Phase 2: Diagnosis"). ...
... Approaches to interactive machine learning [82] have first been proposed in the context of query-based machine learning [6]. More recently, interactive learning has been considered mainly in the context of human-computer interaction Fig. 3 Proposed structure of explanation process with multiple interaction types [24]. Phase 1 foresees an explanation phase which is followed by a verification phase. ...
Article
Full-text available
With the perspective on applications of AI-technology, especially data intensive deep learning approaches, the need for methods to control and understand such models has been recognized and gave rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achieved so far and where there are still gaps in the research. We take an interdisciplinary perspective to identify challenges on XAI research and point to open questions with respect to the quality of the explanations regarding faithfulness and consistency of explanations. On the other hand we see a need regarding the interaction between XAI and user to allow for adaptability to specific information needs and explanatory dialog for informed decision making as well as the possibility to correct models and explanations by interaction. This endeavor requires an integrated interdisciplinary perspective and rigorous approaches to empirical evaluation based on psychological, linguistic and even sociological theories.
... However, often these explanations are conceptualised and presented in a one-off and static way that may not be sufficient for diverse stakeholders interested in them (Suresh et al., 2021;Lakkaraju et al., 2022). An increasing amount of research is currently calling for incorporation of findings from social and cognitive sciences into explanation generation to make it interactive and adaptable towards specific goals, needs, expertise and changing levels of understanding of the explainee (Miller, 2019;El-Assady et al., 2019;Shvo et al., 2020;Sokol and Flach, 2020;Dazeley et al., 2021;Rohlfing et al., 2021;Lakkaraju et al., 2022). ...
Conference Paper
Full-text available
This paper's main contribution is a Bayesian hierarchical grounding state prediction model implemented in an adaptive explainer agent assisting users with analogical problem-solving. This model lets the agent adapt dialogue moves regarding previously unmentioned domain entities that are similar to the ones already explained when they are instances of the same generalised schema in different domains. Learning such schemata facilitates knowledge transfer between domains and plays an important role in analogical reasoning. An ex-plainer agent should be able to predict to what extent the explainee has learned to induce a schema in order to build up on this in the explanation process and make it more cooperative. This paper describes the approach of hierarchical grounding state prediction, introduces the analogy-based explanation generation process and the agent architecture implemented for this approach, as well as provides some example interactions as the first developers' evaluation of the system in preparation for upcoming empirical studies.
... Explanation process As mentioned above, explanation usually takes place in an iterative fashion. Sequential analysis allows the user to query further information in an iterative manner and to understand the model and its decisions over time, in accordance with the users' capabilities and the given context (El-Assady et al. 2019;Finzel et al. 2021b). ...
Article
Full-text available
In the meantime, a wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the amount of XAI methods vastly growing, a taxonomy of methods is needed by researchers as well as practitioners: To grasp the breadth of the topic, compare methods, and to select the right XAI method based on traits required by a specific use-case context. Many taxonomies for XAI methods of varying level of detail and depth can be found in the literature. While they often have a different focus, they also exhibit many points of overlap. This paper unifies these efforts and provides a complete taxonomy of XAI methods with respect to notions present in the current state of research. In a structured literature analysis and meta-study, we identified and reviewed more than 50 of the most cited and current surveys on XAI methods, metrics, and method traits. After summarizing them in a survey of surveys, we merge terminologies and concepts of the articles into a unified structured taxonomy. Single concepts therein are illustrated by more than 50 diverse selected example methods in total, which we categorize accordingly. The taxonomy may serve both beginners, researchers, and practitioners as a reference and wide-ranging overview of XAI method traits and aspects. Hence, it provides foundations for targeted, use-case-oriented, and context-sensitive future research.
... Thus, extracting the proper explanation for each group involves different evaluations. At first, the XAI technique needs to be evaluated, and then a suitable medium for the explanation needs to be found and presented to users [161]. ...
Article
Full-text available
Time series data is increasingly used in a wide range of fields, and it is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data to recognize different types of anomalies through automatic decision-making systems. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their prediction is opaque and hard to understand from a human standpoint. Recently, we observed a consistent increase in the development of explanation methods for time series classification justifying the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based and instance-based, and (c) identify open research directions regarding the type of explanations and the evaluation of explanations and interpretability.
... The concept of data storytelling was adopted to use narrative to communicate insights extracted from the multimodal data. The idea of data storytelling has been suggested, both by the general XAI research community (El-Assady et al., 2019) and the emerging XAI sub-community within learning analytics (De Laet et al., 2018), as a way to automatically structure explanations from data analysis and to emphasise some data points that are relevant to particular stakeholders or users. Here we discuss two ways used to incorporate data storytelling in the multimodal learning analytics interfaces for healthcare simulation. ...
Article
Full-text available
There are emerging concerns about the Fairness, Accountability, Transparency, and Ethics (FATE) of educational interventions supported by the use of Artificial Intelligence (AI) algorithms. One of the emerging methods for increasing trust in AI systems is to use eXplainable AI (XAI), which promotes the use of methods that produce transparent explanations and reasons for decisions AI systems make. Considering the existing literature on XAI, this paper argues that XAI in education has commonalities with the broader use of AI but also has distinctive needs. Accordingly, we first present a framework, referred to as XAI-ED, that considers six key aspects in relation to explainability for studying, designing and developing educational AI tools. These key aspects focus on the stakeholders, benefits, approaches for presenting explanations, widely used classes of AI models, human-centred designs of the AI interfaces and potential pitfalls of providing explanations within education. We then present four comprehensive case studies that illustrate the application of XAI-ED in four different educational AI tools. The paper concludes by discussing opportunities, challenges and future research needs for the effective incorporation of XAI in education.
... While many metrics have been devised to assess the quality of the outcomes produced by explainability techniques [76][77][78][79], explanations must also be evaluated and measured from a human-centric perspective. Attention must be devoted to ensuring the explanations convey meaningful information to different user profiles according to their purpose [80][81][82] such that they promote curiosity to increase learning and engagement [83], and provide means to develop trust through exploration [84]. Furthermore, it is desired that the explanations are actionable [85,86], and inform conditions that could change the forecast outcome [87]. ...
Article
Full-text available
Artificial intelligence models are increasingly used in manufacturing to inform decision making. Responsible decision making requires accurate forecasts and an understanding of the models’ behavior. Furthermore, the insights into the models’ rationale can be enriched with domain knowledge. This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets’ metadata, and entries from the Google knowledge graph. We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting. The embeddings-based approach measures the similarity between relevant concepts and retrieved media news entries and datasets’ metadata based on the word movers’ distance between embeddings. The semantic-based approach recourses to wikification and measures the Jaccard distance instead. The semantic-based approach leads to more diverse entries when displaying media events and more precise and diverse results regarding recommended datasets. We conclude that the explanations provided can be further improved with information regarding the purpose of potential actions that can be taken to influence demand and to provide “what-if” analysis capabilities.
... While many metrics have been devised to assess the quality of the outcomes produced by explainability techniques [74][75][76][77], explanations must also be evaluated and measured from a human-centric perspective. Attention must be devoted to ensuring the explanations convey meaningful information to different user profiles according to their purpose [78][79][80], that they promote curiosity to increase learning and engagement [81], and provide means to develop trust through exploration [82]. Furthermore, it is desired that the explanations are actionable [83,84], and inform conditions that could change the forecast outcome [85]. ...
Preprint
Full-text available
Artificial Intelligence models are increasingly used in manufacturing to inform decision-making. Responsible decision-making requires accurate forecasts and an understanding of the models' behavior. Furthermore, the insights into models' rationale can be enriched with domain knowledge. This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets' metadata, and entries from the Google Knowledge Graph. We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting.
Chapter
Artificial Intelligence is the technology that is being used to develop machines that could work like humans or simply can have the intelligence relatable to that of humans. But the development of this kind of technology model that mimics humans involves a lot of complex calculations and complex algorithms that are difficult to explain and understand. For this problem, the concept of explainable artificial intelligence (XAI) is developed and introduced. It is the technology that is developed to ease the understanding process of machine learning solutions for humans. It is the concept that is being developed for making it convenient for humans to understand and interpret machine language. Black model machine learning (ML) algorithms are very hard to understand for humans who have not developed them. AI models that involve the methods like genetic algorithms or deep learning concepts are very difficult to understand. It sometimes becomes a very hard task for the domain experts too to understand the ML algorithms of the black block models, so the need for the development of this type of technology was felt. Many times, results are developed with very high accuracy are quite easy to understand for the domain experts. But Explainable artificial intelligence has a great potential to make a change in domains like finance, medicines, etc. It plays a vital role where it is important to understand the results to build trustworthy algorithms. XAI can play a great role in “third-wave AI systems” which include machines that can interact directly with the environment and that can build explanatory models that allow them to develop the characteristics of real-world phenomena. XAI has the potential to play a great role where the organizations need to build trustworthy AI models and to make them trustworthy the explainability of the AI models should be there for others as well. This technology is developed primarily to make AI understandable to those who are practitioners. This book chapter presents a wide and insightful view of XAI and its application in various fields. This chapter also includes the future scope of this technology and the need for the growth of this type of technology.
Article
Collaborative human-AI problem-solving and decision-making rely on effective communications between both agents. Such communication processes comprise explanations and interactions between a sender and a receiver. Investigating these dynamics is crucial to avoid miscommunication problems. Hence, in this paper, we propose a communication dynamics model , examining the impact of the sender's explanation intention and strategy on the receiver's perception of explanation effects. We further present potential biases and reasoning pitfalls with the aim of contributing to the design of hybrid intelligence systems. Lastly, we propose six desiderata for human-centered explainable AI and discuss future research opportunities.
Conference Paper
Full-text available
This paper compares the effectiveness of data comics and infographics for data-driven storytelling. While infographics are widely used, comics are increasingly popular for explaining complex and scientific concepts. However, empirical evidence comparing the effectiveness and engagement of infographics, comics and illustrated texts is still lacking. We report on the results of two complementary studies, one in a controlled setting and one in the wild. Our results suggest participants largely prefer data comics in terms of enjoyment, focus, and overall engagement and that comics improve understanding and recall of information in the stories. Our findings help to understand the respective roles of the investigated formats as well as inform the design of more effective data comics and infographics.
Conference Paper
Full-text available
Advanced artificial intelligence models are used to solve complex real-world problems across different domains. While bringing along the expertise for their specific domain problems, users from these various application fields often do not readily understand the underlying artificial intelligence models. The resulting opacity leads to a low level of trust among domain experts and, in turn, to an ineffective and hesitant usage of the models. We postulate that it is necessary to educate the domain experts to prevent such situations. Therefore, we propose the metaphorical narrative methodology to transitively conflate the mental models of the involved modeling and domain experts. Metaphorical narratives establish an uncontaminated, unambiguous vocabulary that simplifies and abstracts the complex models to explain their main concepts. Elevating the domain experts in their methodological understanding results in trust building and an adequate usage of the models. To foster the methodological understanding, we follow the Visual Analytics paradigm, which is known to provide an effective interface between the human and the machine. We ground our proposed methodology in different application fields and theories, detail four successfully applied metaphorical narratives, and discuss important aspects, properties, and pitfalls.
Conference Paper
Full-text available
In this position paper, we argue that a combination of visualization and verbalization techniques is beneficial for creating broad and versatile insights into the structure and decision-making processes of machine learning models. Explainability of machine learning models is emerging as an important area of research. Hence, insights into the inner workings of a trained model allow users and analysts alike to understand the models, develop justifications, and gain trust in the systems they inform. Explanations can be generated through different types of media, such as visualization and verbalization. Both are powerful tools that enable model interpretability. However, while their combination is arguably more powerful than each medium separately, they are currently applied and researched independently. To support our position that the combination of the two techniques is beneficial to explain machine learning models, we describe the design space of such a combination and discuss arising research questions, gaps, and opportunities.
Article
Full-text available
In this article, we look at trust in artificial intelligence, machine learning (ML), and robotics. We first review the concept of trust in AI and examine how trust in AI may be different from trust in other technologies. We then discuss the differences between interpersonal trust and trust in technology and suggest factors that are crucial in building initial trust and developing continuous trust in artificial intelligence.
Article
Full-text available
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that, if these techniques are to succeed, the explanations they generate should have a structure that humans accept. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Conference Paper
Full-text available
Gamification of work provides a chance of raising employees' engagement, improving their attitude towards work, and, in turn, increasing their productivity. As the problems faced when applying gamification to various work environments are similar, so are the solutions. Unexpectedly, even though a number of reusable gamification elements, techniques, and patterns have been described in the literature, the way they are presented offers little guidance to a work gamification designer. This work aims to address this gap by providing a concise description format for gamification design patterns, a practical classification of them, and definitions of 21 patterns that can be used for designing gamification of work.
Article
Full-text available
Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.
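As a rough illustration of the post-hoc notion mentioned in this abstract (not an example taken from the paper), the following Python sketch fits a global linear surrogate to the predictions of a made-up opaque model. The "black box" function and all names are hypothetical; the point is only that a post-hoc summary is human-readable yet can miss structure, here the quadratic effect of the second feature.

```python
# Illustrative sketch only: contrasting an opaque model with a simple
# post-hoc explanation. The "black box" is a made-up nonlinear function;
# a least-squares linear surrogate of its predictions yields rough,
# readable feature effects, but cannot capture the quadratic term.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # three input features

def black_box(X):
    """Stand-in for an opaque model such as a deep network."""
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

y_hat = black_box(X)

# Post-hoc step: global linear surrogate fitted to the model's outputs.
A = np.column_stack([X, np.ones(len(X))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, y_hat, rcond=None)
print("surrogate feature weights:", coef[:3])
print("surrogate intercept:", coef[3])
```

Whether such a surrogate actually confers interpretability is precisely the kind of assertion the paper encourages readers to question.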
Article
The objective of this study is twofold: to explore students' perceptions regarding the use of explanation as a teaching method in classroom teaching and to determine the impact of communication skills on teachers' explanation skills. A total of 120 participants from twelve high schools offering technology subjects completed the questionnaire, which assessed teachers' competences in the use of orientation, keys, summaries, and communication skills. The first and second objectives were addressed by calculating the frequencies and percentages of the questionnaire survey data. All data were analyzed using the Statistical Package for Social Science (SPSS v18 2010). The results revealed that while the majority (85%) of technology teachers have adequate technology subject content knowledge, not all of them have the necessary competence to use various explanation approaches effectively. In particular, the study revealed that nearly half of the technology teachers surveyed were found to have limited skills in the use of orientation, keys, summaries, and communication. These findings suggest that the ineffective use of various explanation strategies in teaching could be attributed to a number of factors, including lack of adequate preparation, lack of skills in designing explanations, and inadequate training and practice in explanation during initial teacher training. The implications of these findings demand that teacher trainers devote more time to training student teachers in explanation skills during their initial teacher training.
Conference Paper
Argumentation has the unique advantage of giving explanations to reasoning processes and results. Recent work studied how to give explanations for acceptable arguments, in terms of the arguments defending them. This paper studies the counterpart of this problem by formalising explanations for arguments that are not acceptable. We give two different views (an argument-view and an attack-view) on explaining the non-acceptability of an argument and show the computation of explanations with debate trees.
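To make the argument-view intuition concrete, here is a minimal, hedged Python sketch of a Dung-style abstract argumentation framework. It computes the grounded extension and, for an argument that is not accepted, returns the attackers that remain undefeated, as a crude stand-in for an explanation of non-acceptability. The helper names and the toy framework are hypothetical, and the sketch does not reproduce the paper's debate-tree construction.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# Hypothetical helper names; only the basic acceptability intuition is
# shown, not the cited paper's formalisation via debate trees.

def grounded_extension(arguments, attacks):
    """Iteratively add every argument whose attackers are all
    counter-attacked by the extension built so far."""
    extension = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in extension:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            defended = all(
                any((d, x) in attacks for d in extension) for x in attackers
            )
            if defended:
                extension.add(a)
                changed = True
    return extension

def explain_non_acceptability(target, arguments, attacks):
    """Argument-view explanation: the attackers of `target` that the
    grounded extension fails to defeat."""
    extension = grounded_extension(arguments, attacks)
    if target in extension:
        return None  # target is acceptable; nothing to explain
    attackers = {x for (x, y) in attacks if y == target}
    return {x for x in attackers
            if not any((d, x) in attacks for d in extension)}

# Toy framework: b attacks a, a attacks c; b itself is unattacked.
args = {"a", "b", "c"}
atts = {("b", "a"), ("a", "c")}
print(grounded_extension(args, atts))              # {'b', 'c'}
print(explain_non_acceptability("a", args, atts))  # {'b'}
```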