Preprint

Scorecard for Self-Explaining Capabilities of AI Systems

Authors:
  • ShadowBox LLC & MacroCognition LLC

Abstract

This report describes a Self-Explaining Scorecard for appraising the self-explanatory support capabilities of XAI systems. The Scorecard might be useful in conceptualizing the various ways in which XAI system developers support users, and might also help in comparing and contrasting those approaches.


... Two contributions that have not yet been recognized as relevant to XAI are explanations used within intelligent tutoring systems (ITS) and self-explanations (Hausmann and Chi 2002; Klein, Hoffman, and Mueller 2021; Mueller et al. 2021). Explanations within ITS are relevant because these systems require explanatory interactions, which became feasible with the incorporation of models for the user, instruction, and pedagogical process (Clancey and Hoffman 2021). Self-explanation places the explanatory process in the hands of the users, who might request information to bridge gaps in their own self-explanation process; see Klein, Hoffman, and Mueller (2021) for its benefits to XAI. ...
... Studies evaluating explanation interactions with models of trust are not easy to find. Hausmann and Chi (2002) describe an experiment adopting self-explaining (Hausmann and Chi 2002; Klein, Hoffman, and Mueller 2021; Mueller et al. 2021), which is aligned with the human-related concept of explanation interaction. The literature provides better resources on specific aspects of explanations, such as those that answer why-questions, which have been explored and discussed extensively (Myers et al. 2006; Ko 2008; Patel et al. 2008; Miller 2019). ...
Article
Researchers focusing on how artificial intelligence (AI) methods explain their decisions often discuss controversies and limitations. Some even assert that most publications offer little to no valuable contributions. In this article, we substantiate the claim that explainable AI (XAI) is in trouble by describing and illustrating four problems: disagreements on the scope of XAI; a lack of definitional cohesion, precision, and adoption; issues with the motivations for XAI research; and limited and inconsistent evaluations. As we delve into their potential underlying sources, our analysis finds that these problems seem to originate from AI researchers succumbing to the pitfalls of interdisciplinarity or from insufficient scientific rigor. In analyzing these potential factors, we discuss the literature and at times come across unexplored research questions. Hoping to alleviate existing problems, we make recommendations on precautions against the challenges of interdisciplinarity and propose directions in support of scientific rigor.
Article
Explainability is central to trust and accountability in artificial intelligence (AI) applications. The field of human-centered explainable AI (HCXAI) arose as a response to mainstream explainable AI (XAI), which was focused on algorithmic perspectives and technical challenges, and less on the needs and contexts of the non-expert, lay user. HCXAI is characterized by putting humans at the center of AI explainability. Taking a sociotechnical perspective, HCXAI prioritizes user and situational contexts, prefers reflection over acquiescence, and promotes the actionability of explanations. This review identifies the foundational ideas of HCXAI, how those concepts are operationalized in system design, how legislation and regulations might normalize its objectives, and the challenges that HCXAI must address as it matures as a field.
Article
The goals of this study are to evaluate a relatively novel learning environment and to seek a greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaborative observation environment to four other instructional methods (one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone), the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaborative observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are constructing knowledge, independently or jointly with the tutor, but not when the tutor independently conveys knowledge.
Article
Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference.
Article
Several earlier studies have found that the amount learned while studying worked-out examples is proportional to the number of self-explanations generated while studying the examples. A self-explanation is a comment about an example statement that contains domain-relevant information over and above what was stated in the example line itself. This article analyzes the specific content of self-explanations generated by students while studying physics examples. In particular, the content is analyzed into pieces of constituent knowledge that were used in the comments. These were further analyzed in order to trace the sources of knowledge from which the self-explanations were derived. The first is deduction from knowledge acquired earlier while reading the text part of the chapter, usually by simply instantiating a general principle, concept, or procedure with information in the current example statements. Such construction from the content of the example statements yields new general knowledge that helps complete the students' otherwise incomplete understanding of the domain principles and concepts. The relevance of this research for instruction and for models of explanation-based learning is discussed. Keywords: Learning from examples, Self-explanations, Problem solving, Physics, Examples, Explanations.
Article
Learning involves the integration of new information into existing knowledge. Generating explanations to oneself (self-explaining) facilitates that integration process. Previously, self-explanation has been shown to improve the acquisition of problem-solving skills when studying worked-out examples. This study extends that finding, showing that self-explanation can also be facilitative when it is explicitly promoted, in the context of learning declarative knowledge from an expository text. Without any extensive training, 14 eighth-grade students were merely asked to self-explain after reading each line of a passage on the human circulatory system. Ten students in the control group read the same text twice, but were not prompted to self-explain. All of the students were tested for their circulatory system knowledge before and after reading the text. The prompted group had a greater gain from the pretest to the posttest. Moreover, prompted students who generated a large number of self-explanations (the high explainers) learned with greater understanding than low explainers. Understanding was assessed by answering very complex questions and inducing the function of a component when it was only implicitly stated. Understanding was further captured by a mental model analysis of the self-explanation protocols. High explainers all achieved the correct mental model of the circulatory system, whereas many of the unprompted students as well as the low explainers did not. Three processing characteristics of self-explaining are considered as reasons for the gains in deeper understanding.
Article
The present paper analyzes the self-generated explanations (from talk-aloud protocols) that “Good” and “Poor” students produce while studying worked-out examples of mechanics problems, and their subsequent reliance on examples during problem solving. We find that “Good” students learn with understanding: They generate many explanations which refine and expand the conditions for the action parts of the example solutions, and relate these actions to principles in the text. These self-explanations are guided by accurate monitoring of their own understanding and misunderstanding. Such learning results in example-independent knowledge and in a better understanding of the principles presented in the text. “Poor” students do not generate sufficient self-explanations, monitor their learning inaccurately, and subsequently rely heavily on examples. We then discuss the role of self-explanations in facilitating problem solving, as well as the adequacy of current AI models of explanation-based learning to account for these psychological findings.
Article
Explaining new ideas to oneself can promote transfer, but how and when such self-explanation is effective is unclear. This study evaluated whether self-explanation leads to lasting improvements in transfer success and whether it is more effective in combination with direct instruction or invention. Third- through fifth-grade children (ages 8-11; n=85) learned about mathematical equivalence under one of four conditions varying in (a) instruction on versus invention of a procedure and (b) self-explanation versus no explanation. Both self-explanation and instruction helped children learn and remember a correct procedure, and self-explanation promoted transfer regardless of instructional condition. Neither manipulation promoted greater improvements on an independent measure of conceptual knowledge. Microgenetic analyses provided insights into potential mechanisms underlying these effects.
References

Hoffman, R.R., Klein, G., & Mueller, S.T. (2019). "TA-2 Suggestions for Experimental Design Based on the Gate-2 Reports of the TA-1 Performer Teams." Report on Award No. FA8650-17-2-7711, DARPA XAI Program.

Hoffman, R.R., Mueller, S.T., Klein, G., & Litman, J. (2019). "Metrics for Explainable AI: Challenges and Prospects." Technical Report on Award No. FA8650-17-2-7711, DARPA XAI Program.

Klein, G., Hoffman, R., & Mueller, S.T. (2019, April). "Naturalistic Psychological Model of Explanatory Reasoning: How People Explain Things to Others and to Themselves." Technical Report on Award No. FA8650-17-2-7711, DARPA XAI Program.

Mueller, S.T., Hoffman, R.R., Clancey, W., Emrey, A., & Klein, G. (2019). "Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI." Technical Report on Award No. FA8650-17-2-7711, DARPA XAI Program.