An AI-Empowered Home-Infrastructure to Minimize
Medication Errors
Muddasar Naeem * and Antonio Coronato
Institute of High Performance Computing and Networking, National Research Council of Italy,
80131 Napoli, Italy; antonio.coronato@icar.cnr.it
* Correspondence: muddasar.naeem@icar.cnr.it
Abstract:
This article presents an Artificial Intelligence (AI)-based infrastructure to reduce medication
errors while following a treatment plan at home. The system, in particular, assists patients who
have some cognitive disability. The AI-based system first learns the skills of a patient using the
Actor–Critic method. After assessing patients’ disabilities, the system adopts an appropriate method
for the monitoring process. Available methods for monitoring the medication process are a Deep
Learning (DL)-based classifier, Optical Character Recognition, and the barcode technique. The DL
model is a Convolutional Neural Network (CNN) classifier that is able to detect a drug even when
shown in different orientations. The second technique is an OCR based on Tesseract library that
reads the name of the drug from the box. The third method is a barcode based on Zbar library that
identifies the drug from the barcode available on the box. The GUI demonstrates that the system can
assist patients in taking the correct drug and prevent medication errors. This integration of three
different tools to monitor the medication process shows advantages as it decreases the chance of
medication errors and increases the chance of correct detection. This methodology is more useful
when a patient has mild cognitive impairment.
Keywords: artificial intelligence; reinforcement learning; deep learning; medical treatment; medication error; optical character recognition; barcode detection
1. Introduction
A trend of shifting more and more patients (those without severe symptoms) from hospitals to homes for treatment has emerged recently [1]. This trend strengthened further during the COVID-19 crisis due to the effects of hospitalization on humans' emotional statuses [2]. As a result, loads on hospitals and costs on healthcare infrastructure have been reduced [3]. Moreover, in many countries, including Italy, Japan, the USA, and many European countries, the number of senior people is increasing at a fast rate. Elderly people need more healthcare services compared to the young. Secondly, adherence to therapy is another issue during the treatment process. The World Health Organization (WHO) has defined adherence to therapy as "the extent to which the patient follows medical instructions". A recent report of the WHO [4] indicates that 50–80% of patients worldwide follow medical instructions and the treatment plan.
Furthermore, it is challenging for patients to continue a treatment process by themselves at home if they have some cognitive disability. In such a scenario, the chance of medication errors increases [5] and may sometimes result in severe implications [6]. For example, the United States Institute of Medicine has estimated that, in the USA, medication errors affect 150,000 people and cause the death of 7000 patients every year. A similar situation regarding medication errors has been reported in Europe [7].
In addition to the severe complications and deaths caused by medication errors, there are also economic impacts [8]. According to an estimate, the cost of hospitalization due to failure in adherence to medication therapy is around USD 13.35 billion annually in the USA alone [9]. Similarly, in Europe, the expense of medication errors is in the range of EUR 4.5 billion to EUR 21.8 billion annually, according to an estimate of the European Medicines Agency [7].
There are various forms of medication errors, such as wrong frequency, omission, wrong dosage, or wrong medication, as classified by the WHO [6]. The WHO has emphasized that "senior people are more prone to special issues related to medication errors". The risks and consequent impacts of medication errors have been reviewed in different surveys [5,10]. These studies emphasize the need for systems that are able to assist the elderly and patients during medical treatment at home.
Other factors that cause medication errors are insufficient knowledge of the pill and physical and/or cognitive impairments, which make it difficult to follow the medication process correctly. Designing an improved solution to monitor a patient's actions and, in particular, the medicine that a patient is going to take will improve the degree of adherence to the medication plan and minimize adverse events that can occur due to medication errors. Two hundred and fifty-six residents recruited from 55 care homes were monitored in [5], considering a mean of 8.0 medicines per resident. It was observed that about 69.5% (178) of them had one or more medication errors. The mean number, according to the study, was 1.9 errors per resident. The mean potential harm from prescribing, monitoring, administration, and dispensing errors was estimated as 2.6, 3.7, 2.1, and 2.0, respectively, on a scale from 0 (no harm) to 10 (death). The authors highlighted that the fact that the majority of residents are at risk of medication errors is of concern. We can address this problem by taking advantage of computing technology [11], which has brought a revolution in many areas. Machine learning (ML) tools such as Reinforcement Learning [12] have introduced many useful solutions to healthcare problems [13–16], including risk management in different environments [17–20]. We propose an Artificial Intelligence (AI)-based system that assists patients and the elderly during the medication process at home in order to minimize medication errors. The AI-based system employs a combination of Reinforcement Learning (RL), Deep Learning (DL), Optical Character Recognition (OCR), and barcode technologies [21]. The designed intelligent agent can monitor the drug-taking process.
The major component of the proposed work is the RL agent, which integrates multiple AI methods (DL, OCR, and barcode) and can provide assistance not only to the elderly but also to patients with cognitive disabilities in their medical treatment at home. The proposed architecture considers patients with good cognitive skills as well as patients that have some cognitive impairment. Feedback in audio and video form is produced when the person finishes the medication process. Such an AI-based system is an intelligent multi-agent infrastructure that assists patients in taking the correct medicines. The RL agent is based on the Actor–Critic method, which further integrates three different methods for monitoring a patient's medicine-handling process. The first technique is an OCR that tries to read the name of the drug from the box. The second method is a barcode reader that identifies the drug from the barcode available on the box. The last method is a Convolutional Neural Network (CNN) classifier that is able to detect a drug even when shown in different orientations [22]. The advantage of integrating three different methods to monitor the medication process is that it decreases the chance of medication errors and increases the chance of correct detection. This methodology is more useful when a patient has mild cognitive impairment.
Section 2 presents the literature and Section 3 discusses background on RL and DL. Section 4 introduces the proposed architecture, and results are reported in Section 5. Finally, in Section 6, we present our concluding remarks.
2. Related Work
This section recalls relevant literature and highlights existing limitations. A few AI-based intelligent systems have been proposed to assist the older population. The Assisted Cognition Project developed by [23] uses AI methods to support and amplify the quality of life and independence of patients with Alzheimer's disease. Another project (Aware Home) is proposed in [24], which aims to develop situation-aware environments to help senior people maintain their independence. Similarly, the Nursebot Project developed in [25] targets mobile robotic assistants to aid physical and mild cognitive decline. However, none of these solutions is suitable for monitoring the medication process, and in some cases they are unable to assist patients with cognitive problems.
The Autominder System [26] applies partially observable Markov decision processes to plan and schedule the Nursebot system to provide assistance for home therapy. However, the proposed architecture is not capable of preventing medication errors and is mainly designed only for the reminding process. The work of [27] uses smartphones to identify drugs by quantifying properties such as color, size, and shape. However, for accurate estimation, such a methodology requires a marker with known dimensions to be used. The authors of [28] have presented a technique for the detection of some key points on the medicine box, followed by mapping against a database. The approach showed good results, but it was tested only on a few boxes. In the work of [29], an intelligent pill reminder system is presented that consists of a pill reminder component and a verification component; however, only one tool is used for the recognition of the medicine boxes.
The other two methods that could be useful for monitoring the medication process are OCR and barcode tools. These two tools are not used in many solutions that focus on assisting patients in their medication process. OCR is largely used for detecting and reading text [30], and could be used for the identification of the drug. Similarly, a method to identify the medicine is to use the drug box for real-time detection, identification, and information retrieval. A barcode scanner is developed in [31] to recognize the drug correctly. However, it needs the medicine box to be presented to the camera in a specific position. A working method is developed in [32] on the usefulness of a smart home to assist patients with a treatment process. The system initiates when a new drug prescription is advised by the doctor. An electronic system produces a QR code that is delivered with the prescription, indicating the time period, visit details, and medication workflow information. This set of information is utilized by an expert system that manages all data produced by the prescription. The methodology assists subjects with no cognitive disability; there is no customization of the solution depending on the patient's skills. Implementations of different AI- and IoT-based proposals for remote healthcare monitoring have been reviewed in [33].
A system based on ambient intelligence and IoT devices for student health monitoring is proposed in [34]. The authors also employed wireless sensor networks to collect the data required by ambient environments. Similarly, AI-empowered sensors for health monitoring are studied in [35]. However, neither study considers the cognitive disability of patients. We have observed, in the reviewed papers, the absence of a verification mechanism that can validate the ingestion of the correct drug by the subject. Most of the papers limit their target to the medicine reminder, but do not have a verification mechanism. In a few frameworks, it is the patient himself/herself who reports having taken the medicine.
3. Technical Background
Reinforcement Learning (RL) is a subfield of ML where an agent tries to learn the dynamics of an unknown environment. To learn the characteristics of the given environment, the agent chooses a certain action $a_t$ from a set of actions in a certain state $s_t$ (there is a set of states for every given environment) at time slot $t$ and, based on the transition model of the environment, the agent reaches a new state $s_{t+1}$ and receives a numerical reward $r_t$. After a lot of trial and error, the RL agent can learn the optimal policy for a given environment. The optimal policy tells an agent which action to choose in a given state to maximize the long-term aggregated reward. An RL problem is first modeled as a Markov Decision Process (MDP), as shown in Figure 1, and then an appropriate RL algorithm is employed based on the dynamics of the underlying environment. A brief introduction to the MDP is given next.
An MDP is a tuple $\langle S, A, R, P, \gamma \rangle$, where:
- $S$ denotes the set of states;
- $A$ denotes the set of actions;
- $R$ denotes the reward function;
- $P$ indicates the transition probability;
- $\gamma$ is a discount factor, $\gamma \in [0, 1]$.
Figure 1. The Reinforcement Learning problem.
The Markov property is assumed, i.e., the next state depends only on the current state. A finite MDP is described by its actions, states, and the environment's dynamics. For any state–action pair $(s, a)$, the probability of the resulting state and the corresponding reward $(s', r)$ is given in Equation (1):

$$p(s', r \mid s, a) \doteq \Pr\{S_{t+1} = s', R_{t+1} = r \mid S_t = s, A_t = a\} \quad (1)$$
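For illustration only, the following minimal Python sketch shows one way to represent such a transition model $p(s', r \mid s, a)$ as a table for a toy version of this problem; the state names, action names, and probabilities are hypothetical and not taken from the paper.

```python
# Minimal sketch: a tabular transition model p(s', r | s, a) for a toy MDP.
# States, actions, rewards, and probabilities below are illustrative only.
transition_model = {
    # (state, action): list of (next_state, reward, probability)
    ("patient_ready", "use_dl"):      [("right_box", +1.0, 0.9), ("wrong_box", -1.0, 0.1)],
    ("patient_ready", "use_ocr"):     [("right_box", +1.0, 0.8), ("wrong_box", -1.0, 0.2)],
    ("patient_ready", "use_barcode"): [("right_box", +1.0, 0.7), ("wrong_box", -1.0, 0.3)],
}

def p(next_state, reward, state, action):
    """Return p(s', r | s, a) as in Equation (1)."""
    return sum(prob for s2, r, prob in transition_model[(state, action)]
               if s2 == next_state and r == reward)

print(p("right_box", +1.0, "patient_ready", "use_dl"))  # 0.9
```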
Informally, the target of the RL agent is to maximize the reward. That is to say, given the list of rewards $R_{t+1}, R_{t+2}, \dots$ received after time $t$, the goal is to maximize the return as given in Equation (2):

$$G_t = R_{t+1} + R_{t+2} + \cdots + R_T \quad (2)$$

where $T$ is the last time interval.
The return $G_t$ is the sum of discounted rewards obtained after time $t$, as given in Equation (3):

$$G_t = \sum_{k=0}^{T} \gamma^{k} R_{t+k+1} \quad (3)$$
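As a small worked example of Equations (2) and (3), the sketch below computes the discounted return $G_t$ from a list of rewards; the reward values are made up for illustration.

```python
# Compute the discounted return G_t = sum_k gamma^k * R_{t+k+1} (Equation (3)).
def discounted_return(rewards, gamma=0.9):
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Illustrative reward sequence received after time t.
rewards_after_t = [1.0, 0.0, -1.0, 1.0]
print(discounted_return(rewards_after_t))  # 1.0 + 0.0 - 0.81 + 0.729 = 0.919
```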
A policy $\pi$, defined in Equation (4), tells an agent which action to take in a given state:

$$\pi(a \mid s) \doteq P[A_t = a \mid S_t = s] \quad (4)$$
Given the policy $\pi$ and the return $G_t$, two value functions can be defined, i.e., the state–value and the action–value functions. The state–value function $v_\pi(s)$ is the expected return starting from a state $s$ and following the policy $\pi$, as given in Equation (5):

$$v_\pi(s) \doteq \mathbb{E}_\pi[G_t \mid S_t = s] = \mathbb{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\Big|\, S_t = s\right] \quad (5)$$
The action–value function $q_\pi(s, a)$ is the expected return starting from a state $s$, taking action $a$, and thereafter following the policy $\pi$.
The optimal value function is the one that obtains the best gains in terms of returns, as given in Equation (6):

$$v_*(s) = \max_{\pi} v_\pi(s), \quad \forall s \in S \quad (6)$$
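To make Equation (6) concrete, here is a small, self-contained value-iteration sketch on a toy two-state model; the state names, actions, rewards, and probabilities are invented for illustration and are not the paper's environment.

```python
# Toy value iteration: v*(s) = max_a sum_{s', r} p(s', r | s, a) * [r + gamma * v*(s')]
gamma = 0.9
# (state, action) -> list of (next_state, reward, probability); "done" is terminal.
P = {("ready", "dl"):      [("done", +1.0, 0.9), ("done", -1.0, 0.1)],
     ("ready", "ocr"):     [("done", +1.0, 0.8), ("done", -1.0, 0.2)],
     ("ready", "barcode"): [("done", +1.0, 0.7), ("done", -1.0, 0.3)]}
V = {"ready": 0.0, "done": 0.0}

for _ in range(50):  # iterate the Bellman optimality update until (approximate) convergence
    V["ready"] = max(sum(prob * (r + gamma * V[s2]) for s2, r, prob in P[("ready", a)])
                     for a in ("dl", "ocr", "barcode"))

print(V["ready"])  # 0.8: the "dl" action is optimal in this toy model
```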
After defining the MDP and selecting an RL technique for a given problem, the next issue is to maintain a delicate balance between exploration and exploitation. At each time step, the RL agent can select the best rewarding action based on its current knowledge of the environment. On the other hand, an RL agent can explore other available actions that may turn out to be even more rewarding. Pure exploration or pure exploitation alone may therefore not be a good strategy, and an RL agent should learn a trade-off between exploration and exploitation for a certain problem.
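One common, simple way to realize such a trade-off is an ε-greedy strategy, sketched below; the ε value, Q-table, and names are illustrative and not part of the authors' implementation (the paper itself uses the softmax selection described in Section 4.1).

```python
import random

def epsilon_greedy(q_values, state, actions, epsilon=0.1):
    """With probability epsilon explore a random action, otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.choice(actions)                                     # exploration
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))      # exploitation

# Illustrative usage with a toy Q-table.
q = {("s0", "a0"): 0.2, ("s0", "a1"): 0.5}
print(epsilon_greedy(q, "s0", ["a0", "a1"]))  # usually "a1", sometimes a random action
```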
For further details on RL algorithms in general and the application of RL in healthcare
in particular, the reader may refer to [12,13], respectively.
Deep learning, in particular the CNN, has brought a significant contribution and revolution to computer vision and object detection. Recently, several new networks have been designed and implemented to attain greater accuracy in the ImageNet Large-Scale Visual Recognition Challenge. A few famous CNN-based models have achieved significant enhancements in object detection as well as in classification. For example, the AlexNet model was able to reduce the error rate to 16% in 2012, from 25% in 2011 [36]. Moreover, the GoogLeNet [37] and VGGNet [38] models won the top two positions, respectively, in 2014.
However, these models require a large amount of data for training. Transfer Learning [39] can be adopted as a solution to this problem in the case of custom and small data-sets. Transfer learning employs the optimized parameters of a pre-trained model and performs training only on a few extra layers according to the needs of the underlying model. The availability of the huge ImageNet database (http://imagenet.org/ImageNet, accessed on 20 December 2021) is useful for different studies to train the feature extraction layers. The identification of medicine can be categorized as a classification task. However, we have a small data-set, and thus we used a DL classifier built with Transfer Learning. More details on DL can be found in [40].
4. System Model
A major component of the system model is the RL Actor–Critic-based agent. It is an intelligent agent that first learns the cognitive skills of the patient by trial and error. After emulating the patient's skills, i.e., a patient with Cognitive Impairment (CI) or a patient with Normal Cognition (NC), the AI agent has to select one technique or a combination of techniques (DL classifier, OCR, and barcode) for monitoring the medication process, as shown in Figure 2. The block diagram of the proposed work is shown in Figure 3, which presents the methodological workflow of the different AI agents. All agents have the medication plan of a patient. The RL agent controls the other three AI agents (DL, OCR, barcode) and selects them as its actions according to the skills of a patient. The chosen AI agents monitor the medication process and generate alerts if the patient is going to take the wrong drug, thus helping the patient avoid taking the wrong medicine. The technical details of each method are presented below.
Figure 2. Working of RL algorithm.
Figure 3. Block diagram of the System.
4.1. Actor–Critic Algorithm
As described in Section 3, after modeling the given problem as an MDP, one needs to choose a suitable RL algorithm to solve the modeled MDP. In our problem, as explained in Figure 3, the RL agent has to choose a suitable monitoring method based on the emulated skills of the patient. Therefore, we need a method that can continuously receive feedback on the actions taken and update the policy. Actor–Critic (AC) is a hybrid RL method that employs both value- and policy-based schemes. The critic part of the AC algorithm estimates the value function, and the actor part updates the policy distribution based on the critic's feedback. The pseudocode of the Actor–Critic scheme is given in Algorithm 1. A step-wise explanation of the methodology is given next.
Algorithm 1 Actor–Critic algorithm
Emulate patient condition (CI or NC)
Initialize rewards for all state–action pairs R_{s,a}
Initialize Q to zero
Initialize tuning parameters
Initialize state s
repeat
1. Select the OCR, BC, or DL method (a_t) based on the patient condition s_t.
2. Get the next state s_{t+1} (right or wrong drug box).
3. Get the reward (positive in the case of the right drug box, negative in the case of the wrong drug box).
4. Update the utility function of state s_t (critic):
   U(s_t) ← U(s_t) + α[r_{t+1} + γU(s_{t+1}) − U(s_t)]
5. Update the probability of the action using the error δ (actor):
   δ = r_{t+1} + γU(s_{t+1}) − U(s_t)
until terminal state
Initially, the RL agent selects an action under the current policy. We use the softmax function to select a particular action, as given in Equation (7):

$$P\{a_t = a \mid s_t = s\} = \frac{e^{p(s,a)}}{\sum_{b} e^{p(s,b)}} \quad (7)$$
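A minimal sketch of this softmax selection over the actor's preferences $p(s, a)$ is given below; the preference values, state labels, and method names are illustrative.

```python
import math
import random

def softmax_action(preferences, state, actions):
    """Sample an action according to Equation (7): P(a|s) = e^{p(s,a)} / sum_b e^{p(s,b)}."""
    exps = [math.exp(preferences.get((state, a), 0.0)) for a in actions]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(actions, weights=probs, k=1)[0]

# Illustrative preferences of the actor for the three monitoring methods.
p = {("CI", "DL"): 1.5, ("CI", "OCR"): 0.2, ("CI", "BARCODE"): 0.1}
print(softmax_action(p, "CI", ["DL", "OCR", "BARCODE"]))
```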
In the next two steps, the resulting state and reward are observed, as given in Algorithm 1. Next, the utility of the current state $s_t$, the next state $s_{t+1}$, and the reward are plugged into the update rule used in Temporal Difference zero, TD(0), as given below in Equation (8):

$$U(s_t) \leftarrow U(s_t) + \alpha\,[r_{t+1} + \gamma U(s_{t+1}) - U(s_t)] \quad (8)$$
In step 5 of Algorithm 1, the error estimate $\delta$ is used to update the policy. Practically, this step weakens or strengthens the probability of a certain action based on $\delta$ and a non-negative step size $\beta$, as can be seen in Equation (9):

$$p(s_t, a_t) \leftarrow p(s_t, a_t) + \beta\,\delta_t \quad (9)$$
For the Actor–Critic algorithm, we need a set of eligibility traces for both the actor and the critic. For the critic, a trace is stored for every state and updated as given below in Equation (10):

$$e_t(s) = \gamma\, e_{t-1}(s) \quad \text{if } s \neq s_t; \qquad e_t(s) = \gamma\, e_{t-1}(s) + 1 \quad \text{if } s = s_t \quad (10)$$
After estimating the trace, the state value can be updated as follows in Equation (11):

$$U(s_t) \leftarrow U(s_t) + \alpha\,\delta_t\, e_t(s) \quad (11)$$
Similarly, for the actor, a trace is stored for every state–action pair and updated as given in Equation (12):

$$e_t(s, a) = \gamma\, e_{t-1}(s, a) + 1 \quad \text{if } s = s_t \text{ and } a = a_t; \qquad e_t(s, a) = \gamma\, e_{t-1}(s, a) \quad \text{otherwise} \quad (12)$$
Finally, the probability of selecting a certain action is updated as given below in Equation (13):

$$p(s_t, a_t) \leftarrow p(s_t, a_t) + \alpha\,\delta_t\, e_t(s_t, a_t) \quad (13)$$
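Putting Equations (8)–(13) together, the following is a compact, illustrative Python sketch of one Actor–Critic update with eligibility traces; the state/action names, step sizes, and the way traces are stored are assumptions for illustration and not the authors' implementation.

```python
# Illustrative Actor-Critic update with eligibility traces (cf. Equations (8)-(13)).
alpha, beta, gamma = 0.1, 0.05, 0.9   # critic/actor step sizes and discount factor (assumed values)

U = {}        # critic: state utilities U(s)
p = {}        # actor:  action preferences p(s, a)
e_s = {}      # eligibility traces for states
e_sa = {}     # eligibility traces for state-action pairs

def actor_critic_step(s, a, r, s_next, states, actions):
    # TD error: delta = r + gamma * U(s') - U(s)
    delta = r + gamma * U.get(s_next, 0.0) - U.get(s, 0.0)

    # Decay all traces and bump the visited state / state-action pair (Equations (10) and (12)).
    for st in states:
        e_s[st] = gamma * e_s.get(st, 0.0) + (1.0 if st == s else 0.0)
        for ac in actions:
            e_sa[(st, ac)] = gamma * e_sa.get((st, ac), 0.0) + (1.0 if (st, ac) == (s, a) else 0.0)

    # Critic update (Equation (11)) and actor update (Equation (13), with an assumed actor step size).
    for st in states:
        U[st] = U.get(st, 0.0) + alpha * delta * e_s[st]
        for ac in actions:
            p[(st, ac)] = p.get((st, ac), 0.0) + beta * delta * e_sa[(st, ac)]

# Illustrative call: in state "CI" the agent chose the DL method and the right box was detected.
actor_critic_step("CI", "DL", +1.0, "right_box", ["CI", "right_box"], ["DL", "OCR", "BARCODE"])
print(U, p)
```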
4.2. DL Classifier
Training a Convolutional Neural Network (CNN) model on a small data-set is difficult [41]. To mitigate this problem, we took advantage of transfer learning and chose VGG16 [38] as our pre-trained CNN model. In addition, using a pre-trained network that has been trained on millions of images also helps to compensate for the data-set bias that may occur when applying a DL model to small data [16].
The data-set for our CNN model was created in the following steps. Firstly, we captured images of 12 drugs that are available in Italy. We captured images in different orientations and light conditions, as shown in Figure 4. Next, we applied preprocessing techniques such as black background, rescaling, gray scaling, sample-wise centering, standard normalization, and feature-wise centering to remove inconsistencies and incompleteness in the raw data and clean it up for model consumption. Finally, we employed methods such as rotation, horizontal and vertical shift, flip, zooming, and shearing to improve the quality and quantity of the data-set.
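A sketch of how such preprocessing and augmentation could be configured with the Keras ImageDataGenerator is shown below; the specific parameter values and the folder name are illustrative assumptions, not the authors' exact configuration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation/preprocessing pipeline for the drug-box images.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,          # rescaling
    samplewise_center=True,     # sample-wise centering
    featurewise_center=False,   # feature-wise centering would additionally require datagen.fit(...)
    rotation_range=20,          # rotation
    width_shift_range=0.1,      # horizontal shift
    height_shift_range=0.1,     # vertical shift
    shear_range=0.1,            # shearing
    zoom_range=0.2,             # zooming
    horizontal_flip=True,       # flip
    validation_split=0.2,       # 80/20 train/test split
)

train_gen = datagen.flow_from_directory(
    "drug_boxes/",              # hypothetical folder with one sub-folder per drug
    target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="training",
)
```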
We arranged 700 images for each drug (a total of 8400 for all drugs) and used 80%, i.e., 6720 images, for training and 20%, i.e., 1680 images, for testing. Next, we fine-tuned the last four convolution layers of the original VGG-16 network [42]. A dropout rate of 0.5 was used between the fully connected layers to avoid over-fitting, and we replaced the original 1000 classes with 12 classes. Categorical cross-entropy was utilized as the loss function. For optimization, stochastic gradient descent with a momentum of 0.9 and a learning rate of 0.0001 was used.
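The following Keras sketch reproduces this transfer-learning setup (a VGG16 base with the last four convolutional layers trainable, dropout of 0.5 between fully connected layers, a 12-class softmax head, categorical cross-entropy, and SGD with momentum 0.9 and learning rate 0.0001); the size of the added dense layer and the choice of which layers count as the "last four" are our assumptions.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# VGG16 pre-trained on ImageNet, without its original 1000-class head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze everything except the last four convolutional layers (assumed to be these).
last_four = {"block4_conv3", "block5_conv1", "block5_conv2", "block5_conv3"}
for layer in base.layers:
    layer.trainable = layer.name in last_four

# New classification head: 12 drug classes instead of the original 1000.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),      # size of this layer is an assumption
    layers.Dropout(0.5),                       # dropout between fully connected layers
    layers.Dense(12, activation="softmax"),
])

model.compile(
    optimizer=optimizers.SGD(learning_rate=0.0001, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_gen, epochs=30)  # train_gen as sketched in the augmentation example above
```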
Figure 4. Some manually captured images of the drug ‘Medrol’.
4.3. Optical Character Recognition
One useful feature of any medicine box is that the name of the drug serves as a distinctive identifier. Some medicine boxes may have the same name but differ in dosage, number of pills, and manufacturer. All this information is available on the drug box and can be decoded. The whole OCR method is summarized next.
1. From the video, extract an image and apply gray-scale thresholding.
2. Apply Otsu's method to separate dark and light regions.
3. Look for sets of connected pixels to recognize characters, ignoring logos, stripes, and barcodes.
4. Compute the overlap between the bounding boxes produced by the identification method.
5. Apply Tesseract for character recognition.
6. Finally, apply the Levenshtein distance to compare the obtained string against the expected drug name.
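A simplified sketch of these steps using OpenCV, pytesseract (a Python wrapper for Tesseract), and the python-Levenshtein package is given below; the connected-component and bounding-box steps are collapsed into Tesseract's own layout analysis, and the expected drug name and distance threshold are illustrative.

```python
import cv2
import pytesseract
import Levenshtein

def read_drug_name(frame, expected_name, max_distance=3):
    """Simplified OCR pipeline: gray scale, Otsu thresholding, Tesseract, Levenshtein check."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu's method separates dark and light regions automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).upper()
    # Compare every recognized word against the expected drug name.
    best = min((Levenshtein.distance(word, expected_name.upper()), word)
               for word in text.split() or [""])
    return best[0] <= max_distance, best[1]

# Illustrative usage on a single video frame captured with OpenCV.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    match, word = read_drug_name(frame, "MEDROL")
    print("correct drug box" if match else f"mismatch: read '{word}'")
cap.release()
```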
4.4. Barcode Method
A barcode is a technique of representing data in a visual, machine-readable form and
is used widely around the globe in various contexts. At the start, barcodes represented
data by varying the spacings and widths of parallel lines. Barcode identification is broadly
applied in the healthcare sector, ranging from (1) For identifying patient; (2) To create the
subjective, objective, assessment, and plan with barcodes; (3) For medication management.
Many medicines are available on the market with a variable number of pills and different dosages. In Italy, an unequivocal identifier is given to each medicine box. The availability of a barcode on each drug box makes the identification process easy and fast. Zbar (http://zbar.sourceforge.net/, accessed on 20 December 2021) and ZXing (https://github.com/zxing/zxing, accessed on 20 December 2021) are two publicly available libraries used for barcode decoding. We use the Zbar library due to its superior performance with respect to orientation and integrate it with the OpenCV (https://opencv.org/, accessed on 20 December 2021) library, which can read any image.
The general procedure for barcode reading is as follows:
1. White and black bars are used in the structure of a barcode. Data retrieval is performed by shining a light from the scanner at the barcode, capturing the reflected light, and translating the white and black bars into binary digital signals.
2. Reflections are weak in black areas and strong in white areas. A sensor receives the reflections to obtain analog waveforms.
3. The analog waveforms are then converted into a digital signal by an analog-to-digital converter; this step is called binarization.
4. Data retrieval is completed when a code system is identified from the digital signal during the decoding process.
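A minimal sketch of barcode-based identification with pyzbar (Python bindings for the ZBar library) and OpenCV is given below; the barcode values and the drug-database mapping are invented placeholders.

```python
import cv2
from pyzbar import pyzbar

# Illustrative mapping from barcode value to drug name (stand-in for a real drug database).
DRUG_DB = {"8012345678901": "Medrol", "8098765432109": "Muscoril"}

def identify_drug(frame):
    """Decode any barcode visible in the frame and look it up in the drug database."""
    for code in pyzbar.decode(frame):
        value = code.data.decode("utf-8")
        if value in DRUG_DB:
            return DRUG_DB[value]
    return None

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    drug = identify_drug(frame)
    print(f"detected drug: {drug}" if drug else "no known barcode detected")
cap.release()
```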
5. Results
Figure 5 shows the first step of the GUI demonstration of the proposed system. Figure 5 refers to the stage in which an image of the drug that the patient has to take, in accordance with the advised medication plan, is presented to the patient. The next stage is the selection of an appropriate monitoring tool (DL, OCR, barcode) while the patient is taking the drug. We can see in Figure 6 that the choice (taken action) of the RL agent is to use the DL classifier for drug identification.
Figure 5. GUI demonstration-1.
Figure 6. GUI demonstration-2.
Similarly, Figure 7 shows the case in which the RL agent selects the barcode technique to monitor which drug the patient is going to take. The ultimate goal of the RL agent is to learn the most suitable tool out of the three available techniques in order to perform correct identification of the drug being handled by the patient. When the patient is going to take the correct medicine, positive feedback is returned to the RL agent. Otherwise, an alert is generated for the patient to prevent him/her from taking the wrong medicine. As can be seen in Figure 8, a confirmation message is communicated when he/she is going to take the correct medicine.
Figure 7. GUI demonstration-3.
It is important to observe that, in the case of the DL identification method, the system is able to recognize the drug box from the video in any orientation and light condition. However, for the OCR and barcode methods, it is important that the patient presents the drug box to the camera in a specific position and orientation.
Figure 9 shows the confusion matrix computed for the twelve medicines used. We can see that the trained model performs well for most of the drugs. The difference in performance across drugs is due to their sizes and color combinations. For example, the drug Omperazen has a larger drug box and a fair color combination, while the drug Muscoril is small in size, so they have comparatively high and low identification accuracy, respectively.
The performance of the DL component in terms of classification accuracy and loss is shown in Figure 10. The top image in Figure 10 shows the accuracy curves for both the training and testing data, while the bottom image presents the loss performance. The model obtained 98.00% accuracy and a loss of 0.0583 on the testing data-set.
Figure 11 presents the performance of our chosen Actor–Critic algorithm and three other RL algorithms in terms of learning curves against the number of iterations. It is evident that the choice of the Actor–Critic algorithm to solve our problem is appropriate.
Figure 8. GUI demonstration-4.
Figure 9. Confusion Matrix for 12 drugs.
Figure 10. Accuracy and loss performance of the DL classifier.
Figure 11. Learning curves of the RL algorithms.
6. Conclusions
We have demonstrated an AI-based infrastructure that assists patients and the elderly during the medication process at home. The system applies modern AI techniques such as an RL algorithm, DL-based classification, OCR, and barcode reading to monitor a patient taking a specific drug. The GUI implementation of the infrastructure has shown that it is able to assist patients and minimize medication errors, which nowadays cause harm and the death of many patients every year.
Author Contributions:
Conceptualization, M.N. and A.C.; methodology, M.N. and A.C.; investigation, M.N.; writing—original draft preparation, M.N.; writing—review and editing, A.C.; supervision, A.C.; funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.
Funding:
This work is partially supported by the AMICO project, which has received funding from
the National Programs (PON) of the Italian Ministry of Education, Universities and Research (MIUR):
code ARS0100900 (Decree n.1989, 26 July 2018).
Data Availability Statement:
The data presented in this study are available on request from the
corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Rennke, S.; Ranji, S.R. Transitional care strategies from hospital to home: A review for the neurohospitalist. Neurohospitalist 2015, 5, 35–42. [CrossRef] [PubMed]
2. Alzahrani, N. The effect of hospitalization on patients' emotional and psychological well-being among adult patients: An integrative review. Appl. Nurs. Res. 2021, 61, 151488. [CrossRef] [PubMed]
3. Stadhouders, N.; Kruse, F.; Tanke, M.; Koolman, X.; Jeurissen, P. Effective healthcare cost-containment policies: A systematic review. Health Policy 2019, 123, 71–79. [CrossRef] [PubMed]
4. World Health Organization. Adherence to Long-Term Therapies: Evidence for Action. Available online: http://www.who.int/chp/knowledge/adherence_full_report.pdf (accessed on 31 October 2020).
5. Barber, N.; Alldred, D.; Raynor, D.; Dickinson, R.; Garfield, S.; Jesson, B.; Lim, R.; Savage, I.; Standage, C.; Buckle, P.; et al. Care homes' use of medicines study: Prevalence, causes and potential harm of medication errors in care homes for older people. Qual. Saf. Health Care 2009, 18, 341–346. [CrossRef]
6. World Health Organization. Medication Errors. Available online: http://apps.who.int/iris/bitstream/handle/10665/252274/9789241511643-eng.pdf (accessed on 10 November 2020).
7. European Medicines Agency. Streamlining EMA Public Communication on Medication Errors. Available online: https://www.ema.europa.eu/documents/other/streamlining-ema-public-communication-medication-errors_en.pdf (accessed on 20 November 2020).
8. DiMatteo, M.R. Evidence-based strategies to foster adherence and improve patient outcomes: The author's recent meta-analysis indicates that patients do not follow treatment recommendations unless they know what to do, are committed to doing it, and have the resources to be able to adhere. JAAPA-J. Am. Acad. Physicians Assist. 2004, 17, 18–22.
9. Sullivan, S.D. Noncompliance with medication regimens and subsequent hospitalization: A literature analysis and cost of hospitalization estimate. J. Res. Pharm. Econ. 1990, 2, 19–33.
10. Perri, M., III; Menon, A.M.; Deshpande, A.D.; Shinde, S.B.; Jiang, R.; Cooper, J.W.; Cook, C.L.; Griffin, S.C.; Lorys, R.A. Adverse outcomes associated with inappropriate drug use in nursing homes. Ann. Pharmacother. 2005, 39, 405–411. [CrossRef] [PubMed]
11. Bakhouya, M.; Campbell, R.; Coronato, A.; Pietro, G.D.; Ranganathan, A. Introduction to Special Section on Formal Methods in Pervasive Computing. ACM Trans. Auton. Adapt. Syst. 2012, 7, 6. [CrossRef]
12. Naeem, M.; Rizvi, S.T.H.; Coronato, A. A Gentle Introduction to Reinforcement Learning and its Application in Different Fields. IEEE Access 2020, 8, 209320–209344. [CrossRef]
13. Coronato, A.; Naeem, M.; De Pietro, G.; Paragliola, G. Reinforcement learning for intelligent healthcare applications: A survey. Artif. Intell. Med. 2020, 109, 101964. [CrossRef]
14. Coronato, A.; Naeem, M. Ambient Intelligence for Home Medical Treatment Error Prevention. In Proceedings of the 2021 17th International Conference on Intelligent Environments (IE), Dubai, United Arab Emirates, 21–24 June 2021; pp. 1–8.
15. Coronato, A.; Paragliola, G. A structured approach for the designing of safe AAL applications. Expert Syst. Appl. 2017, 85, 1–13. [CrossRef]
16. Ciampi, M.; Coronato, A.; Naeem, M.; Silvestri, S. An intelligent environment for preventing medication errors in home treatment. Expert Syst. Appl. 2022, 193, 116434. [CrossRef]
17. Paragliola, G.; Naeem, M. Risk management for nuclear medical department using reinforcement learning algorithms. J. Reliab. Intell. Environ. 2019, 5, 105–113. [CrossRef]
18. Coronato, A.; De Pietro, G. Tools for the Rapid Prototyping of Provably Correct Ambient Intelligence Applications. IEEE Trans. Softw. Eng. 2012, 38, 975–991. [CrossRef]
19. Cinque, M.; Coronato, A.; Testa, A. A failure modes and effects analysis of mobile health monitoring systems. In Innovations and Advances in Computer, Information, Systems Sciences, and Engineering; Springer: New York, NY, USA, 2013; pp. 569–582.
20. Testa, A.; Cinque, M.; Coronato, A.; De Pietro, G.; Augusto, J.C. Heuristic strategies for assessing wireless sensor network resiliency: An event-based formal approach. J. Heuristics 2015, 21, 145–175. [CrossRef]
21. Coronato, A.; Pietro, G.D. Formal Design of Ambient Intelligence Applications. Computer 2010, 43, 60–68. [CrossRef]
22. Naeem, M.; Paragiola, G.; Coronato, A.; De Pietro, G. A CNN-based monitoring system to minimize medication errors during treatment process at home. In Proceedings of the 3rd International Conference on Applications of Intelligent Systems, Las Palmas de Gran Canaria, Spain, 7–9 January 2020; pp. 1–5.
23. Kautz, H.; Arnstein, L.; Borriello, G.; Etzioni, O.; Fox, D. An overview of the assisted cognition project. In Proceedings of the AAAI—2002 Workshop on Automation as Caregiver: The Role of Intelligent Technology in Elder Care, Edmonton, AB, Canada, 29 July 2002.
24. Mynatt, E.D.; Essa, I.; Rogers, W. Increasing the Opportunities for Aging in Place. In Proceedings of the CUU'00: 2000 Conference on Universal Usability, Arlington, VA, USA, 16–17 November 2000; Association for Computing Machinery: New York, NY, USA, 2000; pp. 65–71. [CrossRef]
25. Pineau, J.; Montemerlo, M.; Pollack, M.; Roy, N.; Thrun, S. Towards robotic assistants in nursing homes: Challenges and results. Robot. Auton. Syst. 2003, 42, 271–281. [CrossRef]
26. Pollack, M.E. Planning Technology for Intelligent Cognitive Orthotics; AIPS, 2002; pp. 322–332. Available online: https://www.aaai.org/Papers/AIPS/2002/AIPS02-033.pdf (accessed on 20 December 2021).
27. Hartl, A. Computer-vision based pharmaceutical pill recognition on mobile phones. In Proceedings of the 14th Central European Seminar on Computer Graphics, Budmerice, Slovakia, 10–12 May 2010; p. 5.
28. Benjamim, X.C.; Gomes, R.B.; Burlamaqui, A.F.; Gonçalves, L.M.G. Visual identification of medicine boxes using features matching. In Proceedings of the 2012 IEEE International Conference on Virtual Environments Human-Computer Interfaces and Measurement Systems (VECIMS) Proceedings, Tianjin, China, 2–4 July 2012; pp. 43–47.
29. Naeem, M.; Paragliola, G.; Coronato, A. A reinforcement learning and deep learning based intelligent system for the support of impaired patients in home treatment. Expert Syst. Appl. 2020, 168, 114285. [CrossRef]
30. Neumann, L.; Matas, J. Real-time scene text localization and recognition. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3538–3545.
31. Al-Quwayfili, N.I.; Al-Khalifa, H.S. AraMedReader: An arabic medicine identifier using barcodes. In Proceedings of the International Conference on Human-Computer Interaction, Heraklion, Crete, Greece, 22–27 June 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 383–388.
32. Ramljak, M. Smart home medication reminder system. In Proceedings of the 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 21–23 September 2017; pp. 1–5. [CrossRef]
33. Alshamrani, M. IoT and artificial intelligence implementations for remote healthcare monitoring systems: A survey. J. King Saud Univ.-Comput. Inf. Sci. 2021. [CrossRef]
34. Hong-tan, L.; Cui-hua, K.; Muthu, B.; Sivaparthipan, C. Big data and ambient intelligence in IoT-based wireless student health monitoring system. Aggress. Violent Behav. 2021, 101601. [CrossRef]
35. Mirmomeni, M.; Fazio, T.; von Cavallar, S.; Harrer, S. From wearables to THINKables: Artificial intelligence-enabled sensors for health monitoring. In Wearable Sensors; Elsevier: Amsterdam, The Netherlands, 2021; pp. 339–356.
36. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [CrossRef]
37. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
38. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
39. Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. A Deep Convolutional Activation Feature for Generic Visual Recognition; UC Berkeley & ICSI: Berkeley, CA, USA, 2013.
40. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016.
41. Aytar, Y.; Zisserman, A. Tabula rasa: Model transfer for object category detection. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2252–2259.
42. Chui, K.T.; Fung, D.C.L.; Lytras, M.D.; Lam, T.M. Predicting at-risk university students in a virtual learning environment via a machine learning algorithm. Comput. Hum. Behav. 2020, 107, 105584. [CrossRef]
Article
Students' health, fitness, and wellbeing depend on various factors, and a better understanding of these factors ensures that students have effective health and wellbeing interventions. Recently, Ambient Intelligence (AmI) and internet of things (IoT) are promising solutions to provide healthcare monitoring and personalized health care to provide efficient, significantly lower medical services. The amount of data created by sensors can pose data inaccessibility and computational challenge in the IoT environment. Hence, in this study, Ambient Intelligence assisted Health Monitoring System (AmIHMS) with IoT devices has been proposed for student health monitoring. Wireless sensor networks (WSNs) are utilized for collecting the data needed by Ami environments. The cloud will handle the increased amount of health data, exchange information in resourceful ways across health care networks, and make Big Data Analytics sustainable. Real-time alerting of student health information with large data is an important exercise that is crucial in the proposed work. The simulation results show that the proposed AmIHMS method enhances reliability, data accessibility, and accuracy compared to popular methods.
Chapter
Wearable sensors are being used in clinical settings to monitor the condition of patients as well as in recreational environments for routine health monitoring. Some of the most advanced clinical applications include monitoring patients with Parkinson's disease through wearable inertial measuring units (IMUs) and patients with diabetes by means of wearable glucose sensors. Prominent examples of wearable sensors in routine use are fitness trackers, step-and calorie counters. Recently, wearables have evolved to being capable of running artificial intelligence algorithms in real-time at the point of sensing which allows to gain analytical insights directly from measurement data. We call such intelligent wearables with AI-at-the-edge functionality THINKables. First use cases for THINKables have emerged in both clinical and nonclinical applications: real-time seizure prediction or detection systems for epilepsy patients, or digital coaches providing real-time feedback to athletes on performance and injury risks. Technological and regulatory challenges of developing and deploying THINKables are multifold: data privacy and security of monitoring data needs to be ensured at all times, analytical AI models need to be transparent, explainable and fair, and all these features need to be implemented taking the limited computing power of point-of-sensing processors into account. In order for THINKables to become integrated into clinical workflows, all stakeholders in the Health AI ecosystem (regulators, clinicians, biomedical device technologists, pharma and biotech sectors, data scientists, and patients) need to work together to create frameworks for responsible and meaningful use.
Article
A clinical treatment process typically carries out in two stages; i.e., hospital stay and treatment at home after hospitalization. The correct completion of the treatment process is essential, but it becomes challenging for elders and patients with any physical or cognitive disability since they need assistance in the execution of the treatment itself. This work presents an intelligent system able to provide automatic assistance to those patients that have to follow a planned treatment at home. The system can support the patient with both customized reminders whenever it is the time to take medication and alerts to avoid possible medication errors when the patient is going to assume an incorrect drug by mistake. The core of the proposed solution consists of a multi-agent system that relies on algorithms of both Reinforcement Learning and Deep Learning. Experimental results show that the system improves the quality of home assistance services reducing medication errors.
Article
Discovering new treatments and personalizing existing ones is one of the major goals of modern clinical research. In the last decade, Artificial Intelligence (AI) has enabled the realization of advanced intelligent systems able to learn about clinical treatments and discover new medical knowledge from the huge amount of data collected. Reinforcement Learning (RL), which is a branch of Machine Learning (ML), has received significant attention in the medical community since it has the potentiality to support the development of personalized treatments in accordance with the more general precision medicine vision. This report presents a review of the role of RL in healthcare by investigating past work, and highlighting any limitations and possible future contributions.