Explainable AI in Healthcare
Urja Pawar
Cork Institute of Technology,
Ireland
urja.pawar@mycit.ie
Donna O’Shea
Cork Institute of Technology,
Ireland
donna.oshea@cit.ie
Susan Rea
Cork Institute of Technology,
Ireland
susan.rea@cit.ie
Ruairi O’Reilly
Cork Institute of Technology,
Ireland
ruairi.oreilly@cit.ie
Abstract—Artificial Intelligence (AI) is an enabling technology that, when integrated into healthcare applications and smart wearable devices such as Fitbit trackers, can predict the occurrence of health conditions in users by capturing and analysing their health data. The integration of AI and smart wearable devices has a range of potential applications in the area of smart healthcare, but the black-box operation of AI models poses a challenge, resulting in a lack of accountability and trust in the decisions made. Explainable AI (XAI) is a domain in which techniques are developed to explain the predictions made by AI systems. In this paper, XAI is discussed as a technique that can be used in the analysis and diagnosis of health data by AI-based systems, and a proposed approach is presented with the aim of achieving accountability, transparency, result tracing, and model improvement in the domain of healthcare.
Keywords—Explainable AI, Smart healthcare, Personalised
Connected Healthcare
I. INTRODUCTION
Smart healthcare refers to the use of technologies such as cloud computing, the Internet of Things (IoT), and AI to enable an efficient, convenient, and personalized healthcare system [1]. Such technologies facilitate real-time health monitoring using healthcare applications on smartphones or wearable devices, encouraging individuals to take control of their well-being. Health information collected at the user level can also be shared with clinicians for further diagnosis [1] and, together with AI, can be used in health screening, early diagnosis of diseases, and treatment plan selection [2]. In the healthcare domain, the ethical issue of transparency associated with AI and the lack of trust in the black-box operation of AI systems create the need for AI models that can be explained [3]. The techniques used for explaining AI models and their predictions are known as explainable AI (XAI) methods [2].
This paper proposes employing XAI techniques to present the rationale behind predictions made by AI-based systems to stakeholders in healthcare, with the following benefits:
• Increased transparency: As XAI methods explain why an AI system arrived at a specific decision, they increase transparency in the way AI systems operate and can lead to increased levels of trust [3].
• Result tracing: The explanations generated by XAI methods can be used to trace the factors that led the AI system to predict an outcome [4].
• Model improvement: AI systems learn rules from data in order to make predictions. Sometimes the learned rules are erroneous and lead to incorrect predictions. Explanations generated by XAI methods can assist in understanding the learned rules so that errors in them can be identified and the models improved [3].
Given these objectives, Section II presents an overview of related work in XAI, Section III presents a proposed approach to leveraging XAI in the smart healthcare domain, and Section IV presents conclusions.
II. RELATED WORK
Over the past number of years, various solutions in the domain of XAI have been proposed, many of which have been applied to the healthcare domain. Some AI models are self-explainable simply by their design, such as decision sets, which map an instance of data to an outcome using IF-THEN rules [4]. Researchers in [4] leveraged decision sets to explain the prediction of diseases (asthma, diabetes, lung cancer) from a patient's health record. For example, a decision set may learn to predict lung cancer using a condition of the form: IF the person is a smoker AND already has a respiratory illness THEN predict lung cancer. However, the challenge with self-explainable AI models is that they restrict the choice of other AI models that could achieve greater accuracy. To address explainability across a wider range of AI models, there has been a surge of interest in XAI methods that can explain any AI model [3]. XAI methods that are independent of the AI model being explained are known as model-agnostic XAI methods [3].
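To make the decision-set idea concrete, the following is a minimal, hypothetical sketch of how independent IF-THEN rules can classify a patient record while the matched rule doubles as the explanation. The rules, thresholds, and feature names are illustrative placeholders, not rules learned in [4].

```python
# Hypothetical decision-set style classifier: each IF-THEN rule is independent,
# and the rule that fires is itself the explanation for the prediction.
from typing import Callable, List, Tuple

Rule = Tuple[Callable[[dict], bool], str]

RULES: List[Rule] = [
    (lambda p: p["smoker"] and p["respiratory_illness"], "predict lung cancer risk"),
    (lambda p: p["fasting_glucose"] > 125, "predict diabetes risk"),
]

def predict_with_explanation(patient: dict) -> Tuple[str, str]:
    """Return (prediction, explanation), where the explanation is the rule that fired."""
    for index, (condition, outcome) in enumerate(RULES):
        if condition(patient):
            return outcome, f"rule {index + 1} matched this patient record"
    return "no condition predicted", "no rule matched"

print(predict_with_explanation(
    {"smoker": True, "respiratory_illness": True, "fasting_glucose": 90}
))
```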
Researchers in [5] proposed one of the most commonly used model-agnostic methods, Local Interpretable Model-agnostic Explanations (LIME): a framework that explains a prediction by quantifying the contribution of each input feature to it. Researchers in [2] used LIME to explain the prediction of heart failure by Recurrent Neural Networks (RNNs), where the explanations helped to identify common health conditions, such as kidney failure, anemia, and diabetes, that increase the risk of heart failure in an individual. Various other model-agnostic XAI methods, such as Anchors and Shapley values [6], [7], have been developed and used in the healthcare domain.
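As an illustration of how a model-agnostic explanation can be obtained in practice, the sketch below uses the open-source `lime` package to explain a single prediction of a classifier trained on invented wearable-style features; the data, feature names, and class labels are assumptions for illustration and are not the setup used in [2].

```python
# Minimal sketch: explain one prediction of a classifier with LIME (tabular data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "body_temperature", "calorie_intake", "blood_sugar"]

# Invented training data standing in for wearable-derived health records.
X_train = rng.random((200, len(feature_names)))
y_train = rng.integers(0, 2, 200)  # 0 = normal, 1 = at risk

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["normal", "at_risk"],
    mode="classification",
)

# Each feature receives a signed weight quantifying its contribution to this prediction.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature_condition, weight in explanation.as_list():
    print(f"{feature_condition}: {weight:+.3f}")
```

Shapley-value attributions [7] can be obtained in a similar per-feature form via the `shap` package.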
In [8], a framework was proposed that uses knowledge of human reasoning in the design of XAI methods, the idea being to develop better explanations by involving the user's reasoning goals. The framework can be extended to specific domains, such as smart healthcare, to generate human-friendly insights that explain the operation of AI-based systems at different stages and assist in clinical decision-making [8].
There are certain challenges in the adoption of XAI techniques. The explanations generated by XAI methods should be useful to end-users, who may be clinicians with expertise in the medical domain or lay individuals [9]. Appropriate user interfaces are also needed to display explanations effectively [8]. Challenges related to the increased computational cost and the assumption-based operation of model-agnostic XAI methods remain open areas for research [3].
III. PROPOSED APPROACH
In this paper, we propose using existing XAI methods in conjunction with clinical knowledge to increase the benefits obtained from AI-based systems. As illustrated in Figure 1, the proposed approach operates as follows:
1) Smart healthcare applications capture the health information (1) of individuals and use trained AI models (2) to predict the probability of certain abnormalities or diseases.
2) The predictions (3), along with the health data (1), are used by XAI methods (4) to generate explanations (5).
3) These explanations (5) can be analysed with the help of a clinician's knowledge (6). This analysis enables clinicians to validate the predictions made by the AI model, providing transparency.
4) If the predictions are correct, the explanations, together with clinical knowledge, can be used to generate valuable insights and recommendations (7).
5) If the predictions are incorrect, the contradiction between the explanations and the clinician's knowledge can be used to trace the factors behind the inaccurate predictions and enable improvement (8) of the deployed AI model (2).
Fig. 1. Generating insights using XAI and Clinical expertise
An example application of this concept would be an increase in a user's blood sugar level: clinicians would be sent a report along with the recorded heart rate, body temperature, and calorie intake data. An XAI method would explain that the feature primarily responsible for the prediction is calorie intake. Clinicians can then review these features and recommend appropriate medication or activities; a minimal sketch of this review loop is given below.
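The following sketch illustrates the review loop described above in plain Python. The function names, feature weights, and the set of clinically expected features are hypothetical placeholders, not part of an implemented system.

```python
# Hypothetical sketch of the proposed loop: prediction -> XAI explanation -> clinician check.

def top_feature(explanation):
    """Return the feature with the largest absolute contribution in an XAI explanation."""
    return max(explanation, key=lambda item: abs(item[1]))[0]

def review_prediction(prediction, explanation, clinically_expected_features):
    """Route a case either to a recommendation or to model improvement (steps 4 and 5)."""
    driver = top_feature(explanation)
    if driver in clinically_expected_features:
        return f"Prediction '{prediction}' validated: recommend intervention targeting '{driver}'."
    return f"Prediction '{prediction}' contradicts clinical expectation: flag '{driver}' for model review."

# Blood-sugar example: the explanation attributes the alert mainly to calorie intake.
explanation = [("calorie_intake", 0.62), ("heart_rate", 0.11), ("body_temperature", 0.03)]
print(review_prediction("elevated blood sugar", explanation, {"calorie_intake"}))
```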
To maximise the benefit of XAI, the generated explanations should be useful and presented appropriately, e.g. via a graphical user interface, for end-users who may be clinicians with expertise in the medical domain or lay individuals [8].
IV. CONCLUSION
The growing body of research in explainable AI (XAI) is addressing the development of frameworks and models that help in interpreting and understanding the decisions made by AI systems. As regulations and standards such as the European General Data Protection Regulation (GDPR) and ISO/IEC 27001 call for results generated by autonomous systems used in businesses to be traceable, XAI techniques can be utilised to make the results of AI-based autonomous systems explainable and traceable. The incorporation of XAI techniques faces the challenges discussed in Section II. The domain of XAI needs continued development and application in AI-based healthcare systems to drive improvements in its adoption and usage.
REFERENCES
[1] “Smart healthcare: making medical care more intelligent,” Global Health Journal, vol. 3, no. 3, pp. 62–65, 2019.
[2] S. Khedkar, V. Subramanian, G. Shinde, and P. Gandhi, “Explainable AI in Healthcare,” SSRN Electronic Journal, 2019.
[3] F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” arXiv preprint arXiv:1702.08608, 2017.
[4] H. Lakkaraju, S. H. Bach, and J. Leskovec, “Interpretable decision sets: A joint framework for description and prediction,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1675–1684, 2016.
[5] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016.
[6] M. T. Ribeiro, S. Singh, and C. Guestrin, “Anchors: High-precision model-agnostic explanations,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
[7] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems, pp. 4765–4774, 2017.
[8] D. Wang, Q. Yang, A. Abdul, and B. Y. Lim, “Designing theory-driven user-centric explainable AI,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), pp. 1–15, 2019.
[9] A. Holzinger, C. Biemann, C. S. Pattichis, and D. B. Kell, “What do we need to build explainable AI systems for the medical domain?,” arXiv preprint, 2017.