Performance comparison of visual explainability across the applied methods (average drop: lower is better; rate of increase in confidence: higher is better).

Source publication
Article
Full-text available
In some DL applications such as remote sensing, it is hard to obtain high task performance (e.g., accuracy) with a DL model for image analysis due to the low-resolution characteristics of the imagery. Accordingly, several studies have attempted to provide visual explanations or apply the attention mechanism to enhance the reliability on the imag...

Contexts in source publication

Context 1
... case of DropAtt + Discrim., three layer-wise episodic explanations are derived from the task model by applying our DropAtt, and they are integrated into a single explanation by mediating the three sampled explanations with the initial reflecting ratios inferred from the generator, which is trained with the proposed class-wise feature discriminator. The Whole Proposed Scheme applies our final Algorithm 1, which mediates explanations from multiple episodes and target layers; the results presented in Table 6 and Fig. 6 are derived under the "Sample 3" settings of Table 4 as an example. ...
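As a rough illustration of this mediation step, the sketch below fuses layer-wise episodic saliency maps with the generator's reflecting ratios as a simple convex mixture. The function and argument names (mediate_explanations, episode_maps, reflect_ratios) are hypothetical, and the normalization choices are assumptions; this is not the paper's exact Algorithm 1.

```python
import numpy as np

def mediate_explanations(episode_maps, reflect_ratios):
    """Combine several episodic explanation maps into one (illustrative sketch only).

    episode_maps   : list of H x W saliency maps, one per DropAtt episode / target layer
    reflect_ratios : weights inferred for each episode (assumed to be given as an array)
    """
    # Normalize each map to [0, 1] so no single episode dominates purely by scale.
    maps = np.stack([m / (m.max() + 1e-8) for m in episode_maps])

    # Normalize the reflecting ratios so the combination is a convex mixture.
    w = np.asarray(reflect_ratios, dtype=float)
    w = w / w.sum()

    # Weighted sum over episodes, then rescale the fused map back to [0, 1].
    combined = np.tensordot(w, maps, axes=1)
    return combined / (combined.max() + 1e-8)
```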
Context 2
... the compared methods, we evaluated the visual explainability of each method based on the two aforementioned main metrics: average drop (lower is better) and rate of increase in confidence (higher is better). As shown in Table 6, Grad-CAM and ABN by themselves show relatively lower explainability than our methods. Moreover, applying DropAtt alone cannot achieve a significant improvement over the two baselines, showing a rather higher average drop. ...
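For reference, these two metrics are commonly computed as in standard Grad-CAM-style evaluations: average drop measures how much target-class confidence is lost when only the explanation-highlighted region is kept, and rate of increase in confidence counts how often that masked input is classified more confidently than the original. The sketch below follows these common definitions; it is not the authors' evaluation code, and the names (orig_scores, masked_scores) are placeholders.

```python
import numpy as np

def explanation_metrics(orig_scores, masked_scores):
    """Average drop and rate of increase in confidence (common definitions, not the paper's code).

    orig_scores   : target-class confidences Y_c on the original images
    masked_scores : confidences O_c when only the explanation-highlighted region is kept
    """
    orig = np.asarray(orig_scores, dtype=float)
    masked = np.asarray(masked_scores, dtype=float)

    # Average drop (%): mean relative confidence loss, clipped at zero (lower is better).
    avg_drop = 100.0 * np.mean(np.maximum(0.0, orig - masked) / (orig + 1e-8))

    # Rate of increase in confidence (%): share of samples whose confidence rises (higher is better).
    rate_increase = 100.0 * np.mean(masked > orig)

    return avg_drop, rate_increase
```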

Citations

Article
Full-text available
In recent years, black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing. Despite the potential benefits of uncovering the inner workings of these models with explainable AI, a comprehensive overview summarizing the used explainable AI methods and their objectives, findings, and challenges in remote sensing applications is still missing. In this paper, we address these issues by performing a systematic review to identify the key trends of how explainable AI is used in remote sensing and shed light on novel explainable AI approaches and emerging directions that tackle specific remote sensing challenges. We also reveal the common patterns of explanation interpretation, discuss the extracted scientific insights in remote sensing, and reflect on the approaches used for explainable AI methods evaluation. As such, our review provides a complete summary of the state-of-the-art of explainable AI in remote sensing. Further, we give a detailed outlook on the challenges and promising research directions, representing a basis for novel methodological development and a useful starting point for new researchers in the field.
Article
Full-text available
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
Chapter
An emerging direction in research on explainable artificial intelligence (XAI) is the involvement of stakeholders to achieve human-centered explanations. This work conducts a structured literature review to assess the current state of stakeholder involvement when applying XAI methods to remotely sensed image data. Additionally, it assesses which goals are pursued when integrating explainability. The results show that there is no intentional stakeholder involvement. The majority of work focuses on improving model performance and gaining insights into the models' internal properties, which mostly benefits developers. In closing, future research directions that emerged from the results of this work are highlighted.