Presentation

Interactive Natural Language Technology for Human-centric Explainable Artificial Intelligence



The main goal of this talk is to provide the audience with a holistic view of the fundamentals and current research trends in the XAI field, paying special attention to interactive natural language technology for XAI (i.e., semantically grounded knowledge representation, natural language and argumentation technologies, as well as human-machine interaction). We will first introduce the general ideas behind XAI, its motivating principles and definitions, referring to real-world problems that would greatly benefit from XAI technologies. We will also highlight some of the most recent governmental and social initiatives that favor the introduction of XAI solutions into industry, professional activities and private life. Then, we will present the main state-of-the-art XAI methods. The idea of “opening the black box” by means of white- and gray-box models will be stressed, considering as “black boxes” those models designed through non-transparent Machine Learning techniques (e.g., random forests or deep neural networks). We will then pay special attention to the generation of interactive factual and counterfactual multi-modal explanations (i.e., explanations which combine graphics with sentences in natural language), with a focus on the outstanding role of fuzzy logic in human-centric computing and XAI. Indeed, fuzzy logic offers a mathematical framework for managing information granules (i.e., concepts which correspond to objects grouped together in terms of their indistinguishability, similarity, proximity or functionality). Information granules can be properly represented by fuzzy sets. Fuzzy rules relate fuzzy sets and make it feasible to infer meaningful information granules at a given level of abstraction. Fuzzy modeling favors fairness, accountability, transparency, trustworthiness and explainability.
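To make the idea of information granules concrete, the following is a minimal sketch of how fuzzy sets can partition a numeric variable into linguistic terms and how a simple rule fires with a membership degree. The variable name (temperature), the linguistic terms and the triangular ranges are illustrative assumptions, not taken from the talk.

```python
# Sketch: fuzzy sets as information granules over a numeric variable.
# Triangular membership functions are assumed for simplicity; the
# linguistic terms and ranges below are hypothetical examples.

def triangular(a, b, c):
    """Return a membership function mu(x) for the triangle (a, b, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)  # rising edge
        return (c - x) / (c - b)      # falling edge
    return mu

# Linguistic partition of "temperature" into three granules.
temperature = {
    "cold": triangular(-10.0, 0.0, 15.0),
    "warm": triangular(5.0, 20.0, 30.0),
    "hot":  triangular(25.0, 35.0, 45.0),
}

def describe(value, partition):
    """Evaluate one rule per term ('IF temperature IS <term>') and
    return the term with the highest firing strength, so the result
    reads directly as a natural-language statement."""
    degrees = {term: mu(value) for term, mu in partition.items()}
    best = max(degrees, key=degrees.get)
    return best, degrees[best]

term, degree = describe(22.0, temperature)
print(f"temperature IS {term} (degree {degree:.2f})")
# → temperature IS warm (degree 0.80)
```

Because each granule carries a linguistic label, the inferred output is readable without any knowledge of the underlying arithmetic, which is exactly the interpretability property the talk attributes to fuzzy modeling.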
Moreover, Interpretable Fuzzy Models represent knowledge in a form close to natural language that is easy to interpret and understand, even by non-expert users, because the models are endowed with linguistic interpretability and global semantics. Explainable Fuzzy Systems wrap interpretable fuzzy models with an interactive linguistic interface that makes them self-explanatory. Finally, the talk will end with an enumeration of open challenges in XAI.
The importance of Trustworthy and Explainable Artificial Intelligence (XAI) is recognized in academia, industry and society. This book introduces tools for dealing with imprecision and uncertainty in XAI applications where explanations are demanded, mainly in natural language. The design of Explainable Fuzzy Systems (EXFS) is rooted in Interpretable Fuzzy Systems, which are thoroughly covered in the book. The idea of interpretability in fuzzy systems, grounded on mathematical constraints and assessment functions, is first introduced. Then, design methodologies are described. Finally, the book shows with practical examples how to design EXFS from interpretable fuzzy systems and natural language generation. This approach is supported by open-source software. The book is intended for researchers, students and practitioners who wish to explore EXFS from theoretical and practical viewpoints. The breadth of coverage will inspire novel applications and scientific advancements.