Jose M Alonso

University of Santiago de Compostela | USC · Centro Singular de Investigación en Tecnoloxías Intelixentes CITIUS

PhD
Natural Language Technology for Paving the Way from Interpretable Fuzzy Systems to Explainable Artificial Intelligence

About

210
Publications
66,747
Reads
3,866
Citations
Introduction
I received the MSc (2003) and PhD (2007) degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain. I am with the Department of Computer Sciences of the University of Santiago de Compostela, Chair of the IEEE-CIS Task Force on Explainable Fuzzy Systems, and Associate Editor of IEEE CIM, IJAR and IJCIS. My main research interests are in explainable artificial intelligence, with a focus on interpretable fuzzy systems, NLG and open-source software.
Additional affiliations
June 2016 - January 2018
University of Santiago de Compostela
Position
  • PostDoc Position
November 2012 - May 2016
European Centre for Soft Computing
Position
  • Researcher
September 2012 - September 2012
University of Granada

Publications

Publications (210)
Poster
Full-text available
Lecture given at SFLA2024, Toledo, 2-6 September 2024, https://eventos.uclm.es/111541/detail/v-european-summer-school-on-fuzzy-logic-and-applications-sfla2024.html
Presentation
Full-text available
Lecture given at SFLA2024, Toledo, 2-6 September 2024, https://eventos.uclm.es/111541/detail/v-european-summer-school-on-fuzzy-logic-and-applications-sfla2024.html
Article
Full-text available
The European Union’s regulatory ecosystem presents challenges balancing legal and sociotechnical drivers for explainable AI systems. Core tensions emerge on dimensions of oversight, user needs and litigation. This paper maps provisions on algorithmic transparency and explainability across major EU data, AI, and platform policies using qualitative a...
Article
Full-text available
Machine learning models are widely used in real-world applications. However, their complexity makes it often challenging to interpret the rationale behind their decisions. Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in eXplainable Artificial Intelligence (XAI). CE provides actionabl...
Article
Fuzzy systems are known to provide not only accurate but also interpretable predictions. However, their explainability may be undermined if non-semantically grounded linguistic terms are used. Additional non-trivial challenges would arise if a prediction were to be explained counterfactually, i.e., in terms of hypothetical, non-predicted outputs. I...
Article
Full-text available
Among the existing eXplainable AI (XAI) approaches, Feature Attribution methods are a popular option due to their interpretable nature. However, each method leads to a different solution, thus introducing uncertainty regarding their reliability and coherence with respect to the underlying model. This work introduces TextFocus, a metric for evalu...
Conference Paper
Full-text available
The growing importance of Explainable Artificial Intelligence (XAI) has highlighted the need to understand the decision-making processes of black-box models. Surrogation, emulating a black-box model (BB) with a white-box model (WB), is crucial in applications where BBs are unavailable due to security or practical concerns. Traditional fidelity meas...
Chapter
In this work, we have introduced a new way of speculative reasoning for intelligent systems. In addition, we have illustrated the utility of this way of reasoning in the context of a use case on art genre classification, where explainability and trustworthiness are a matter of major concern. Speculative reasoning is natural for humans and it turns...
Article
Full-text available
Explainable artificial intelligence has become a vitally important research field aiming, among other tasks, to justify predictions made by intelligent classifiers automatically learned from data. Importantly, efficiency of automated explanations may be undermined if the end user does not have sufficient domain knowledge or lacks information about...
Article
Full-text available
This paper presents the art painting style explainable classifier named ANYXI. The classifier is based on art specialists’ knowledge of art styles and human-understandable color traits. ANYXI overcomes the principal flaws in the few art painting style classifiers in the literature. In this way, we first propose, using the art specialists’ studies,...
Preprint
Full-text available
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable...
Article
Full-text available
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model’s decision-making. Thus, the need for eXplainable AI (XAI) methods f...
Article
Full-text available
Medical applications of Artificial Intelligence (AI) have consistently shown remarkable performance in providing medical professionals and patients with support for complex tasks. Nevertheless, the use of these applications in sensitive clinical domains where high-stakes decisions are involved could be much more extensive if patients, medical profe...
Article
Full-text available
The assessment of explanations by humans presents a significant challenge within the context of Explainable and Trustworthy AI. This is attributed not only to the absence of universal metrics and standardized evaluation methods, but also to complexities tied to devising user studies that assess the perceived human comprehensibility of these explana...
Article
The explanatory capacity of interpretable fuzzy rule-based classifiers is usually limited to offering explanations for the predicted class only. A lack of potentially useful explanations for non-predicted alternatives can be overcome by designing methods for the so-called counterfactual reasoning. Nevertheless, state-of-the-art methods for counterf...
Presentation
Full-text available
In this colloquium, we will begin with a non-technical introduction to the field of TAI (i.e., revisiting definitions and fundamentals, reviewing the state of the art and enumerating open challenges). Then, we will briefly review the history of fuzzy systems from the pioneer works of L.A. Zadeh to the most recent developments on EXFS, with special...
Article
Predicting Alzheimer’s disease (AD) progression is crucial for improving the management of this chronic disease. Usually, data from AD patients are multimodal and time series in nature. This study proposes a novel ensemble learning framework for AD progression incorporating heterogeneous base learners into an integrated model using the stacking tec...
Conference Paper
Full-text available
Hallucinations and omissions need to be carefully handled when using neural models for performing Natural Language Generation tasks. In the particular case of data-to-text applications, neural models are usually trained on large-scale datasets and sometimes generate text with divergences with respect to the input data. In this paper, we show the impa...
Conference Paper
In this work we link the understandability of machine learning models to the complexity of their SHapley Additive exPlanations (SHAP). Thanks to this reframing we introduce two novel metrics for understandability: SHAP Length and SHAP Interaction Length. These are model-agnostic, efficient, intuitive and theoretically grounded metrics that are anch...
Presentation
Full-text available
This was a webinar for the University of Urbino (Italy): In the era of the Internet of Things and Big Data, data scientists aim to discover valuable knowledge from data. They first analyze, curate and pre-process data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from data. Explainable AI (XAI)...
Presentation
Full-text available
In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate and pre-process data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from data. Explainable AI (XAI) is an endeavor to evolve AI methodologies and te...
Article
Full-text available
Electronic health records provide rich, heterogeneous data about the evolution of the patients’ health status. However, such data need to be processed carefully, with the aim of extracting meaningful information for clinical decision support. In this paper, we leverage interpretable (deep) learning and signal processing tools to deal with multivari...
Article
Full-text available
This Special Issue is supported by the IEEE CIS Task Force on Explainable Fuzzy Systems, with the aim of providing readers with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and reasoning but also regarding ways to enhance human-machine interaction t...
Article
Full-text available
In this work, we present a complete system to produce automatic linguistic reports about customer activity patterns inside open malls, a mixed distribution of classical malls combined with shops on the street. These reports can assist in designing marketing campaigns by identifying the best places to catch the attention of customer...
Article
Full-text available
Artificial Intelligence provides accurate predictions for critical applications (e.g., healthcare, finance), but lacks the ability to explain its internal mechanism in most applications which require high interaction with humans. Even if many studies analyze machine learning models and their learning behaviour and eventually provide an interpretati...
Conference Paper
Full-text available
We introduce a novel framework to deal with fairness, accountability and explainability of intelligent systems. This framework puts together several tools to deal with bias at the level of data, algorithms and human cognition. The framework makes use of intelligent classifiers endowed with fuzzy-grounded linguistic explainability. As a result, it f...
Chapter
Full-text available
We have defined an interdisciplinary program for training a new generation of researchers who will be ready to leverage the use of Artificial Intelligence (AI)-based models and techniques even by non-expert users. The final goal is to make AI self-explaining and thus contribute to translating knowledge into products and services for economic and so...
Chapter
Since 2016, there has been increasing interest in research topics such as fairness, accountability and interpretability across the entire community of researchers in Artificial Intelligence. However, some researchers were working hard on these topics much earlier. For example, Michalski published his comprehensibility postulate in the 1980s whi...
Chapter
Fuzzy sets and fuzzy logic are powerful tools widely used to represent human knowledge and mimic human reasoning capabilities, being the main constituents of fuzzy systems. Among the different approaches to fuzzy systems, fuzzy rule-based systems represent the one offering a better framework for interpretability considerations. Their applications r...
Chapter
Interpretability is one of the most valuable properties of fuzzy systems. Despite the efforts made by the research community to characterize interpretability, there is no consensus yet on how to measure it. It is admitted that the analysis of interpretability is subjective because it depends on the background of the person who...
Chapter
Fuzzy systems have found widespread application in several contexts and proved their suitability in tackling a number of diverse real-world problems. However, the realization of such systems must be well grounded on some solid theoretical bases that scientists and developers should properly master. In this chapter we discuss the key elements of fuz...
Chapter
Explainable Artificial Intelligence is a novel paradigm conjugating the effectiveness of machine learning with the new requirements coming from the integration of intelligent systems in the human society. Explainable Artificial Intelligence can find successful application in a plethora of contexts, endowing classical intelligent systems with a cruc...
Chapter
We describe step by step how to design, implement and validate an interpretable fuzzy rule-based beer style classifier endowed with explanation capability. First, we revise some preliminary work regarding both interpretable fuzzy modeling methodologies and related software. Second, we introduce the use case on beer style classification. Third, we b...
Chapter
Fuzzy systems are commonly considered suitable tools to express knowledge in a human comprehensible fashion. This kind of characterization makes them eligible for being applied in several contexts where interpretability is a major issue and humans may profit from a self-explanatory form of automatic computation. However, fuzzy systems are not inter...
Article
Full-text available
Sorption of pesticides by soils has major consequences for their fate in the environment. As such, the sorption coefficient (Kd/Koc), which is derived from laboratory or field experiments, is a fundamental parameter used in almost all screening tools to evaluate the fate or mobility of these compounds. The value of this coefficient is controlled b...
Book
Full-text available
The importance of Trustworthy and Explainable Artificial Intelligence (XAI) is recognized in academia, industry and society. This book introduces tools for dealing with imprecision and uncertainty in XAI applications where explanations are demanded, mainly in natural language. Design of Explainable Fuzzy Systems (EXFS) is rooted in Interpretable F...
Article
Full-text available
Alzheimer’s disease (AD) is the most common type of dementia. Its diagnosis and progression detection have been intensively studied. Nevertheless, research studies often have little effect on clinical practice mainly due to the following reasons: (1) Most studies depend mainly on a single modality, especially neuroimaging; (2) diagnosis and progres...
Article
Full-text available
A number of algorithms in the field of artificial intelligence offer poorly interpretable decisions. To disclose the reasoning behind such algorithms, their output can be explained by means of so-called evidence-based (or factual) explanations. Alternatively, contrastive and counterfactual explanations justify why the output of the algorithms is no...
Conference Paper
Full-text available
The opaque nature of many machine learning techniques prevents the wide adoption of powerful information processing tools for high-stakes scenarios. The emerging field of eXplainable Artificial Intelligence (XAI) aims at providing justifications for automatic decision-making systems in order to ensure reliability and trustworthiness for users. For...
Conference Paper
Full-text available
The evaluation of Natural Language Generation (NLG) systems has recently aroused much interest in the research community, since it should address several challenging aspects, such as readability of the generated texts, adequacy to the user within a particular context and moment and linguistic quality-related issues (e.g., correctness, coherence, un...
Presentation
Full-text available
The main goal of this talk is to provide audience with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to Interactive Natural Language Technology for XAI (i.e., semantic-grounded knowledge representation, natural language and argumentation technologies as well as human-machine interaction). We...
Article
The prevalence of Alzheimer’s disease (AD) in the growing elderly population makes accurately predicting AD progression crucial. Due to AD’s complex etiology and pathogenesis, an effective and medically practical solution is a challenging task. In this paper, we developed and evaluated two novel hybrid deep learning architectures for AD progression...
Poster
Full-text available
All details about the session are at: https://sites.google.com/view/xai-fuzzieee2021 The aim of this session is to offer an opportunity for researchers and practitioners to identify new promising research directions on eXplainable Artificial Intelligence (XAI) and to provide a forum to disseminate and discuss XAI, with special attention to Interpr...
Chapter
In this work we present a method to estimate the activity patterns made by shoppers in open malls based on localization information and process mining techniques. We present our smart phone application for logging information from sensors and a process mining system to discover what kind of activity pattern is made by the shoppers based in the key...
Poster
Full-text available
This Special Issue is supported by the IEEE CIS Task Force on Explainable Fuzzy Systems (TF-EXFS). The mission of the TF-EXFS is to lead the development of a new generation of Explainable Fuzzy Systems, with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representat...
Chapter
In this chapter, we describe how to generate not only interpretable but also self-explaining fuzzy systems. Such systems are expected to manage information granules naturally as humans do. We take as starting point the Fuzzy Unordered Rule Induction Algorithm (FURIA for short) which produces a good interpretability-accuracy trade-off. FURIA rules h...
Conference Paper
We have defined an interdisciplinary program for training a new generation of researchers who will be ready to leverage the use of Artificial Intelligence (AI)-based models and techniques even by non-expert users. The final goal is to make AI self-explaining and thus contribute to translating knowledge into products and services for economic and so...
Book
Full-text available
This Doctoral Consortium (DC) is under the umbrella of the Mentoring and Communication for Starting Researchers (MC4SR) ECAI Program. In addition to the DC, MC4SR included other events such as Meeting with a EurAI Fellow, Job Fair, and the 9th European Starting AI Researchers’ Symposium (STAIRS). The DC-ECAI 2020 provides a unique opportunity for P...
Conference Paper
Full-text available
Data-driven classification algorithms have proven highly effective in a range of complex tasks. However, their output is sometimes questioned, as the reasoning behind it may remain unclear due to a high number of poorly interpretable parameters used when training. Evidence-based (factual) explanations for single classifications answer the question...
Conference Paper
Full-text available
Artificial Intelligence (AI) has become a first-class citizen in the cities of the 21st century. New applications include features based on the opportunities that AI brings, such as medical diagnostic support systems, recommendation systems or intelligent assistance systems that we use every day. Also, people are increasingly concerned regarding t...
Conference Paper
Full-text available
Fairness, Accountability, Transparency and Explainability have become strong requirements in most practical applications of Artificial Intelligence (AI). Fuzzy sets and systems are recognized world-wide because of their outstanding contribution to model AI systems with a good interpretability-accuracy tradeoff. Accordingly, fuzzy sets and systems a...
Article
Full-text available
Artificial Intelligence (AI) is part of our everyday life and has become one of the most outstanding and strategic technologies. Explainable AI (XAI) is expected to endow intelligent systems with fairness, accountability, transparency and explanation ability when interacting with humans. This paper describes how to teach fundamentals of XAI to high...
Data
This is a classification dataset made up of 400 instances perfectly balanced with 50 instances per class. The classification task consists of identifying one out of 8 beer styles (Blanche, Lager, Pilsner, IPA, Stout, Barleywine, Porter, and Belgian Strong Ale) in terms of 3 attributes (color, bitterness and strength). The original file is available...
Article
Fuzzy rule-based systems (FRBSs) have been successfully applied to a wide range of real-world problems. However, they suffer from some design issues related to the difficulty to implement them on different hardware platforms without additional efforts. To bridge this gap, recently, the IEEE Computational Intelligence Society has sponsored the publi...
Article
Full-text available
We describe an applied methodology to build fuzzy models of geographical expressions, which are meant to be used for natural language generation purposes. Our approach encompasses a language grounding task within the development of an actual data-to-text system for the generation of textual descriptions of live weather data. For this, we gathered d...
Chapter
Full-text available
The amount of data to analyze in virtual learning environments (VLEs) grows exponentially every day. The daily interaction of students with VLE platforms represents a digital footprint of the students’ engagement with the learning materials and activities. This large and valuable source of information needs to be managed and processed to be useful. Educ...
Conference Paper
Full-text available
JFML is an open source Java library aimed at facilitating interoperability of fuzzy systems by implementing the IEEE Std 1855-2016, the IEEE Standard for Fuzzy Markup Language (FML), which is sponsored by the IEEE Computational Intelligence Society. We developed a Python wrapper for JFML that enables the use of all the functionalities of JFML through a Py...
Article
Full-text available
Fuzzy rule-based systems (FRBSs) have been successfully applied to a wide range of real-world problems. However, they suffer from some design issues related to the difficulty to implement them on different hardware platforms without additional efforts. To bridge this gap, recently, the IEEE Computational Intelligence Society has sponsored the publi...
Cover Page
Full-text available
The Research Centre in Information Technologies of the University of Santiago de Compostela (https://citius.usc.es/) is seeking applications for a PhD position in the area of eXplainable Artificial Intelligence (XAI), Natural Language Generation (NLG), Argumentation and Human-Machine Interaction. Further details at https://citius.usc.es/n/2061 A...
Chapter
Full-text available
In recent years, there has been a huge effort to connect all kinds of devices to the Internet. From small devices (e.g., e-health monitoring sensors or mobile phones) that we carry daily in what is called the body-area network, to big devices (such as cars), through everyday devices (e.g., TVs or refrigerators) at home. In modern cities, everything (at wo...
Chapter
Full-text available
The European Commission has identified Artificial Intelligence (AI) as the “most strategic technology of the 21st century” [7].
Chapter
Full-text available
Explainable Artificial Intelligence (XAI) is a relatively new approach to AI with special emphasis to the ability of machines to give sound motivations about their decisions and behavior. Since XAI is human-centered, it has tight connections with Granular Computing (GrC) in general, and Fuzzy Modeling (FM) in particular. However, although FM has be...
Chapter
This paper deals with modeling e-service quality. It combines Marketing methods (qualitative and quantitative methods) and Computational Theory of Perceptions (Fuzzy Logic). We apply interpretable fuzzy modeling to human perceptions collected through fuzzy rating scale-based questionnaires. The proposal is validated with a case study regarding Busi...
Presentation
Retention of pesticides by soils is both spatially variable and one of the most sensitive factors determining losses to surface and groundwater. To date, little work has been done to explain this process, especially in tropical soils, and more generally to uncover the factors that govern the process in both temperate and tropical soils. The purp...
Presentation
Retention of pesticides by soils is both spatially variable and one of the most sensitive factors determining losses to surface and groundwater. To date, little work has been done to explain this process, especially in tropical soils, and more generally to uncover the factors that govern the process in both temperate and tropical soils. The purp...
Poster
Full-text available
This is the call for papers for a session proposed to be held at FUZZ-IEEE 2019.
Article
Full-text available
Fuzzy Logic Systems are useful for solving problems in many application fields. However, these systems are usually stored in specific formats, and researchers need to rewrite them to use them in new problems. Recently, the IEEE Computational Intelligence Society has sponsored the publication of the IEEE Standard 1855-2016 to provide a unified and well-d...
Conference Paper
Full-text available
We present a data resource which can be useful for research purposes on language grounding tasks in the context of geographical referring expression generation. The resource is composed of two data sets that encompass 25 different geographical descriptors and a set of associated graphical representations, drawn as polygons on a map by two groups of...
Preprint
Full-text available
We present a data resource which can be useful for research purposes on language grounding tasks in the context of geographical referring expression generation. The resource is composed of two data sets that encompass 25 different geographical descriptors and a set of associated graphical representations, drawn as polygons on a map by two groups of...