Explainable AI in Healthcare
Urja Pawar
Cork Institute of Technology,
Ireland
urja.pawar@mycit.ie
Donna O’Shea
Cork Institute of Technology,
Ireland
donna.oshea@cit.ie
Susan Rea
Cork Institute of Technology,
Ireland
susan.rea@cit.ie
Ruairi O’Reilly
Cork Institute of Technology,
Ireland,
ruairi.oreilly@cit.ie
Abstract—Artificial Intelligence (AI) is an enabling technology
that, when integrated into healthcare applications and smart
wearable devices such as Fitbits, can predict the occurrence of
health conditions in users by capturing and analysing their health
data. The integration of AI and smart wearable devices has a range
of potential applications in smart healthcare, but the black-box
operation of AI models poses a challenge, resulting in a lack of
accountability and trust in the decisions made. Explainable AI
(XAI) is a domain in which techniques are developed to explain
the predictions made by AI systems. In this paper, XAI is discussed
as a technique that can be used in the analysis and diagnosis of
health data by AI-based systems, and an approach is proposed with
the aim of achieving accountability, transparency, result tracing,
and model improvement in the domain of healthcare.
Keywords—Explainable AI, Smart healthcare, Personalised
Connected Healthcare
I. INTRODUCTION
Smart healthcare refers to the use of technologies such as
Cloud computing, Internet of Things (IoT) and AI to enable an
efficient, convenient, and personalized healthcare system [1].
Such technologies facilitate real-time health monitoring via
healthcare applications on smartphones or wearable devices,
encouraging individuals to take control of their well-being.
Health information collected at a user level can also be shared
with clinicians for further diagnosis [1] and together with AI
can be used in health screening, early diagnosis of diseases,
and treatment plan selection [2]. In the healthcare domain, the
ethical issue of transparency associated with AI and the lack of
trust in the black-box operation of AI systems create the need
for AI models that can be explained [3]. The techniques
used for explaining AI models and their predictions are known
as explainable AI (XAI) methods [2].
This paper proposes employing XAI techniques to present
the rationale behind predictions made by AI-based systems
to stakeholders in healthcare, in order to gain the following
benefits:
•Increased transparency: As XAI methods explain why
an AI system arrived at a specific decision, they increase
transparency in the way AI systems operate and can lead
to increased levels of trust [3].
•Result tracing: The explanations generated by XAI
methods can be used to trace the factors that led the
AI system to predict a particular outcome [4].
•Model improvement: AI systems learn rules from data in
order to make predictions. Sometimes the learned rules are
erroneous and lead to erroneous predictions. Explanations
generated by XAI methods assist in understanding the
learned rules so that such errors can be identified
and the models improved [3].
Given these objectives, Section II presents an overview
of XAI, Section III presents a proposed approach to
leveraging XAI in the smart healthcare domain, and conclusions
are presented in Section IV.
II. RELATED WORK
Over the past number of years, various solutions in the
domain of XAI have been proposed, many of which have been
applied to the healthcare domain. Some AI models are
self-explainable simply by their design. Decision sets, for
example, map an instance of data to an outcome using IF-THEN
rules, and the researchers in [4] leveraged them to explain the
prediction of diseases (asthma, diabetes, lung cancer) from a
patient’s health record. A decision set might, for instance, learn
to predict lung cancer using the rule: IF the person is a smoker
and already has a respiratory illness THEN predict lung cancer
(a minimal sketch of such rules is given below). However,
the challenge with self-explainable AI models is that they
restrict the choice of other AI models that might achieve greater
accuracy. To address explainability across a wider range of AI
models, there has been a surge of interest in XAI methods that
can explain any AI model [3]. XAI methods that are independent
of the AI model being explained are known as model-agnostic
XAI methods [3].
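To make the contrast concrete, the following is a minimal sketch of a
decision-set style classifier built from human-readable IF-THEN rules
of the kind described above; the features, thresholds, and rules are
illustrative assumptions, not those learned in [4].

# Minimal sketch of a decision-set style classifier built from
# human-readable IF-THEN rules. Features, thresholds, and rules
# are illustrative assumptions, not those learned in [4].

def predict_with_rules(patient):
    """Return (prediction, rule): the rule that fired is itself
    the explanation of the prediction."""
    rules = [
        (lambda p: p["smoker"] and p["respiratory_illness"],
         "lung cancer risk",
         "IF smoker AND respiratory illness THEN predict lung cancer"),
        (lambda p: p["bmi"] > 30 and p["fasting_glucose_mg_dl"] > 125,
         "diabetes risk",
         "IF BMI > 30 AND fasting glucose > 125 mg/dL THEN predict diabetes"),
    ]
    for condition, outcome, rule_text in rules:
        if condition(patient):
            return outcome, rule_text
    return "no condition flagged", "no rule fired"

prediction, explanation = predict_with_rules(
    {"smoker": True, "respiratory_illness": True,
     "bmi": 27.0, "fasting_glucose_mg_dl": 98})
print(prediction)   # -> lung cancer risk
print(explanation)  # -> the IF-THEN rule that produced the prediction

Because the triggering rule is returned alongside the prediction, no
separate explanation step is needed, which is what makes such models
self-explainable by design.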
Researchers in [5] proposed one of the most commonly used
model-agnostic methods, Local Interpretable Model-agnostic
Explanations (LIME): a framework that explains a prediction
by quantifying the contribution of each of the factors involved
in calculating it. Researchers in [2] used LIME to explain the
prediction of heart failure by Recurrent Neural Networks (RNNs),
where the explanations helped to identify common health
conditions, such as kidney failure, anemia, and diabetes, that
increase the risk of heart failure in an individual. Various other
model-agnostic XAI methods, such as Anchors and Shapley
values [6; 7], have been developed and used in the healthcare
domain.
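As an illustration of how a model-agnostic method such as LIME can be
applied in this setting, the following sketch explains a single
prediction of a tabular classifier; the synthetic data, feature names,
and model are placeholder assumptions, and the calls follow the
scikit-learn and lime libraries.

# Sketch: explaining one prediction of an arbitrary classifier with
# LIME [5]. The data, feature names, and model are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["age", "systolic_bp", "kidney_function", "hemoglobin"]
X_train = np.random.rand(500, 4)           # placeholder health records
y_train = np.random.randint(0, 2, 500)     # 1 = heart failure, 0 = healthy

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["healthy", "heart failure"],
    mode="classification")

# LIME perturbs the instance, queries the black-box model, fits a
# local surrogate, and returns per-feature contribution weights.
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())

A Shapley-value explainer such as SHAP's model-agnostic
KernelExplainer [7] could be substituted in the same way to obtain
per-feature contributions.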
In [8], a framework was proposed that applies knowledge of
human reasoning to the design of XAI methods, the idea being
to develop better explanations by taking the user’s reasoning
goals into account. This framework can be extended to specific
domains, such as smart healthcare, to generate human-friendly
insights that explain the operation of AI-based systems at
different stages and assist in clinical decision-making [8].
There are certain challenges in the adoption of XAI
techniques. The explanations generated by XAI methods should
be useful to the end-users, who may be clinicians with expertise
in the medical domain or lay individuals [9]. Appropriate user
interfaces also need to be developed to display explanations
effectively [8]. Challenges related to the increased computational
cost and assumption-based operation of model-agnostic XAI
methods remain an open area of research [3].
III. PROPOSED APPROACH
In this paper, we propose using existing XAI methods in
conjunction with clinical knowledge to obtain greater benefit
from AI-based systems. As illustrated in Figure 1, the proposed
approach proceeds as follows:
1) Smart healthcare applications capture the health information
(1) of individuals and use trained AI models (2) to predict
the probability of certain abnormalities or diseases.
2) The predictions (3), along with the health data (1), are used
by XAI methods (4) to generate explanations (5).
3) These explanations (5) can be analysed with the help of
a clinician’s knowledge (6). This analysis enables clinicians
to validate the predictions made by the AI model, providing
transparency.
4) If the predictions are correct, then the explanations, together
with clinical knowledge, can be used to generate valuable
insights and recommendations (7).
5) If the predictions are incorrect, then the contradiction
between the explanations and the clinician’s knowledge can be
used to trace the factors behind the inaccurate predictions and
enable improvement (8) of the deployed AI model (2).
Fig. 1. Generating insights using XAI and Clinical expertise
As an example application of this concept, if an increase in
blood sugar level is predicted, clinicians are sent a report along
with the underlying heart rate, body temperature, and calorie
intake data. An XAI method then explains which feature, for
instance calorie intake, is primarily responsible for the
prediction. Clinicians can then examine these features and
recommend appropriate medicines or activities.
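As a rough sketch of steps (5)–(8) in Fig. 1 for this blood sugar
example, the following compares an XAI explanation with clinical
knowledge; the contribution values, the clinically_plausible set, and
the decision logic are hypothetical assumptions used only to illustrate
the workflow.

# Sketch of steps (5)-(8) in Fig. 1: comparing an XAI explanation
# with clinical knowledge. All values and the knowledge set are
# hypothetical, for illustration only.

# (5) Explanation from an XAI method (e.g. LIME): per-feature
# contributions to the prediction "elevated blood sugar".
explanation = {"calorie_intake": 0.52,
               "heart_rate": 0.11,
               "body_temperature": -0.03}

# (6) A hypothetical encoding of clinician knowledge: features a
# clinician would accept as plausible drivers of elevated blood sugar.
clinically_plausible = {"calorie_intake", "heart_rate"}

top_feature = max(explanation, key=lambda f: abs(explanation[f]))

if top_feature in clinically_plausible:
    # (7) Explanation is consistent with clinical knowledge: generate
    # an insight/recommendation for the individual.
    print(f"Recommend reviewing {top_feature} with the patient.")
else:
    # (8) Explanation contradicts clinical knowledge: trace the
    # feature and flag the deployed AI model (2) for improvement.
    print(f"Flag model for review: '{top_feature}' is not a plausible driver.")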
To maximise the benefit of XAI, the explanations generated
should be useful and presented appropriately, e.g. via a graphical
user interface, to end-users who may be clinicians with expertise
in the medical domain or lay individuals [8].
IV. CONCLUSION
Growing research in explainable AI (XAI) is addressing the
development of frameworks and models that help in interpreting
and understanding the decisions made by AI systems. As the
European General Data Protection Regulation (GDPR) and standards
such as ISO/IEC 27001 call for making the results generated by
autonomous systems used in businesses traceable, XAI techniques
can be utilised to make the results of AI-based autonomous
systems explainable and traceable. The incorporation of XAI
techniques faces challenges, as discussed in Section II. The
domain of XAI needs continued development and application in
AI-based healthcare systems in order to improve its adoption
and usage.
REFERENCES
[1] “Smart healthcare: making medical care more intelligent,”
Global Health Journal, vol. 3, no. 3, pp. 62–65, 2019.
[2] S. Khedkar, V. Subramanian, G. Shinde, and P. Gandhi,
“Explainable AI in Healthcare,” SSRN Electronic Journal,
2019.
[3] F. Doshi-Velez and B. Kim, “Towards a rigorous science
of interpretable machine learning,” arXiv preprint arXiv:1702.08608, 2017.
[4] H. Lakkaraju, S. H. Bach, and J. Leskovec, “Interpretable
decision sets: A joint framework for description and prediction,”
in Proceedings of the ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pp. 1675–1684, 2016.
[5] M. T. Ribeiro, S. Singh, and C. Guestrin, ““Why Should I Trust
You?”: Explaining the predictions of any classifier,” in Proceedings
of the ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, pp. 1135–1144, 2016.
[6] M. T. Ribeiro, S. Singh, and C. Guestrin, “Anchors: High-precision
model-agnostic explanations,” in Proceedings of the AAAI Conference
on Artificial Intelligence, 2018.
[7] S. M. Lundberg and S.-I. Lee, “A unified approach to
interpreting model predictions,” in Advances in neural
information processing systems, pp. 4765–4774, 2017.
[8] D. Wang, Q. Yang, A. Abdul, and B. Y. Lim, “Designing
Theory-Driven User-Centric Explainable AI,” Proceedings
of the 2019 CHI Conference on Human Factors in Com-
puting Systems - CHI ’19, pp. 1–15, 2019.
[9] A. Holzinger, C. Biemann, C. S. Pattichis, and D. B. Kell,
“What do we need to build explainable AI systems for the
medical domain?,” arXiv preprint, 2017.