OPEN ACCESS
Fractals, Vol. 29, No. 6 (2021) 2150227 (10 pages)
© The Author(s)
DOI: 10.1142/S0218348X21502273
This is an Open Access article published by World Scientific Publishing Company. It is distributed under the terms of the Creative Commons Attribution 4.0 (CC BY) License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
BLOCKCHAIN CNN DEEP LEARNING EXPERT
SYSTEM FOR HEALTHCARE EMERGENCY
RICARDO CARREÑO AGUILERA,*,‡ MIGUEL PATIÑO ORTIZ,†
ADAN ACOSTA BANDA† and LUIS ENRIQUE CARREÑO AGUILERA†
*Universidad del Istmo
Ciudad Universitaria S/N, Barrio Santa Cruz
4a. Sección Sto. Domingo Tehuantepec
C. P. 70760, Oaxaca, México
†Instituto Politécnico Nacional, SEPI ESIMEZ
Av. Luis Enrique Erro S/N, Unidad Profesional Adolfo López Mateos
Zacatenco, Alcaldía Gustavo A. Madero
C. P. 07738 Ciudad de México, México
‡Corresponding author: ricardo.carreno.a@hotmail.com
Received May 26, 2021
Accepted July 2, 2021
Published July 28, 2021
Abstract
This paper relates to the field of Artificial Intelligence, specifically to image recognition, and
provides an innovative method to take advantage of Blockchain Convolutional Neural Networks
(BCNNs) in Emotion Recognition (ER) using audio–visual emotion patterns to determine a
healthcare emergency to be attended. BCNN architectures were used to identify emergency
patterns. The results obtained indicate that the proposed method is adequate for the classi-
fication and identification of audio–visual patterns using deep learning (DL) with Restricted
Boltzmann Machines (RBMs). It is concluded that it is sufficient to consider the audio–visual key features obtained from the patient's face and voice by the proposed model to recognize a
healthcare emergency for immediate action. “Sense of urgency” and “with urgency but with
self-control” are the emotion profiles considered for a healthcare emergency, and user personal
emotion profiles are stored in the Blockchain ecosystem since they are deemed sensitive data.
Keywords: Blockchain Convolutional Neural Network (BCNN); Deep Learning (DL); Emotion
Recognition (ER); Healthcare Emergency.
1. INTRODUCTION
Deep learning (DL) is making important advances in Blockchain Convolutional Neural Networks (BCNNs) and is evolving without disruption in several areas: computational intelligence,1 associative memories,1 fuzzy computing,2–4 expert systems,5–8 and the Internet of Things.9–11 Neural networks (NNs) with deep learning stand out due to their versatility and functionality. Deep learning has allowed the use of neural networks with large numbers of layers and neurons,12–14 thus handling great amounts of information and leaving behind the most significant limitations that existed in the past, and Blockchain technology has become usable to a greater extent in many fields to protect sensitive data. This advance in science and technology has impacted the image recognition field, including audio–visual recognition,15–17 focusing on face search applications within a database that correlate a face with a person and identify its compatibility with specific attributes. However, this work has not yet focused on the recognition of combinations of basic emotions, and specifically on detecting emergency patterns in healthcare, which can be very important for determining the urgency of providing an immediate medical service. This is important for decision making since it may save lives and reduce expenses. Also, using smart contracts with Blockchains provides security and reliability to the patient and the medical services.
The use of this application avoids wasting valuable time through indecisiveness: not giving urgent attention when it is needed, or giving urgent attention when it is not required. In other words, this expert system can serve as a reference for decision making, with each particular case still requiring a detailed analysis of the patient's situation. The expert system can aid both in detecting real-time healthcare emergencies when the patient is already receiving medical attention, and in selecting a patient for a medical service before medical attention is received, at a first check. Therefore, this expert system may be helpful in determining the proper medical attention when accurate decisions must be taken in urgent medical services. It is important to emphasize that this development does not propose to replace existing online medical services; rather, it can complement them.
The importance of the design and contribution of this method lies in the fact that it would be impossible for a human being to interpret, automatically and immediately, the amount of information that has to be analyzed to properly determine the kind of emotions that the patient is really feeling, considering that healthcare/medical care is sometimes under colossal stress. In such stress situations, the most urgent attention demanded of the medical assistance is to stabilize the patients, not to detect emotional profiles. Furthermore, this application acts as a predictive interpreter that enables nonevident inferences based on Artificial Intelligence, thus making the identification of the patient's profile possible. An expert system detecting healthcare emergencies in real time can save lives, since such emergencies require immediate attention, and using this expert system in a medical service may have a significant impact on saving lives. The proposed development and use of audio–visual recognition with deep learning neural networks promotes the technological advance of audio–visual recognition and patient identification for medical services.
2. EXPERT SYSTEM DEPLOYMENT
Our contribution is a computational method for the recognition of a patient's audio–visual patterns in order to predict the type of patient profile using Artificial Intelligence tools such as Blockchain, Convolutional Neural Networks (CNNs), and DL with NNs. This application includes in its structure the modules discussed in the following.
2.1. Human–Machine Interface
Access to the application, called the "emotional medical system administrator", is through a Web application with possible terminal licenses. Since access is via the Internet, it can be used in hospitals, home healthcare, private settings, research institutions, industry, etc., under the Internet of Things concept. The user–machine interface is established with the computational talking bot, which guides the user in an intelligent way. This interface is smart enough to communicate the emotion pattern recognition results to either the patient or the medical/care service (person/team). The human–machine interface contains visual and audio devices: webcam, microphones, and oscilloscopes.
Figure 1 shows the information flow that takes place in the process of interaction between the patient and the system in general. In this system, the database plays an important role, being the central axis of both the process and the flow of information. The system receives the necessary information from the patient to compare it with the pre-established patterns and perform the required recognition process in order to determine the audio–visual emotion profiles that can be applied for the patient's diagnosis.
Figure 1 also shows an image acquisition device for capturing images and voice as part of the patient's audio–visual information to recognize the emotion patterns. This high-fidelity camera is fast enough to capture various combinations of audio–visual figures for the emotions at multiple moments and to recognize the patient's emotion with sufficient clarity for the analysis of key characteristics. A microphone is embedded in the system for sending the signal to the oscilloscope and the data acquisition equipment, which samples the voice until a basic emotion profile is found. Finally, the images acquired from the camera and the oscilloscope are used to find the key characteristics to be sent to the hierarchical CNN + pooling process18,19 for obtaining the basic and combined emotion profiles using a fully connected neural network classifier (the hierarchical CNN + pooling process maps key characteristics with a k-sized convolutional filter). The benefit of the hierarchical CNN is its flexibility: the number of layers can be changed as required, depending on the key characteristics needed for each combined emotion profile.

Fig. 1 Patient–machine interface module. (Figure: the patient (system user) is sampled in real time by audio–visual devices (camera, microphone, and oscilloscope) connected to data acquisition equipment; images are conditioned and fitted (crop, conversion to gray scale, resize); hierarchical convolution + pooling layers (layer l, layer l+1) with ReLU and Kalman filters extract key characteristics; fully connected layers classify basic emotion profiles and combined emotion profiles, yielding the emergency healthcare profile.)
Considering $(W, b)$ as the parameters corresponding to weights and biases, $\oplus$ as the concatenation operation done on $n$ key characteristic embeddings $e_{1:n}$, each of dimension $d$, and $k$ as the size of the convolution, the output $\mathrm{cnn}_{1:m}$ is given as follows18,19:
$$\mathrm{cnn}_{1:m} = C^{k}_{(W,b)}(e_{1:n}), \quad (1)$$
$$\mathrm{cnn}_{i} = f\bigl(\oplus(e_{i:i+k-1})\,W + b\bigr), \quad i = 1, \ldots, m, \quad (2)$$
where $m = n - k + 1$ since this case is a narrow convolution. If we consider $p$ layers, where the convolution of one layer feeds into the next, then
$$\begin{aligned}
\mathrm{cnn}^{1}_{1:m_1} &= C^{k_1}_{(W^1,b^1)}(e_{1:n}),\\
\mathrm{cnn}^{2}_{1:m_2} &= C^{k_2}_{(W^2,b^2)}\bigl(\mathrm{cnn}^{1}_{1:m_1}\bigr),\\
&\;\;\vdots\\
\mathrm{cnn}^{p}_{1:m_p} &= C^{k_p}_{(W^p,b^p)}\bigl(\mathrm{cnn}^{p-1}_{1:m_{p-1}}\bigr). \quad (3)
\end{aligned}$$
2.2. Centralized Database
This is a database containing the audio–visual patterns used to teach the BCNN deep learning expert system to recognize the audio–visual emotion patterns. This database has two basic emotion sets: the "regular" set of basic emotions, including sadness, anger, fearfulness, disgust, surprise, and calm; and the "very" set of intense emotions, including very sad, very angry, very fearful, very disgusting, very surprising, and very calm. Any emotion outside these basic emotion sets is neutral.
2.3. Decentralized Database
This database is not part of the expert system's learning or recognition process; it is only a safe storage for sensitive user data, namely the user name and his/her emotional profile results, so these data can be retrieved by the expert system at any time, mainly by the human–machine interface for queries. This database stores the user's basic emotion results and whether the user matched one of the combinations of emotions for a healthcare emergency profile. The two valid combinations for the healthcare emergency profile are as follows: "sense of urgency" (very sad + very angry + very fearful) and "with urgency but with self-control" (sad + angry + fearful + disgusting). The decentralized database is protected by cryptography and Blockchain technology in the Ethereum ecosystem.
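As a rough illustration of how a sensitive emotion-profile record could be tied to the Blockchain, the following Python sketch computes a salted hash of the record and hands it to a hypothetical `store_on_chain` wrapper; the paper does not publish its smart-contract interface, so the wrapper, the record layout, and the values are assumptions, and only the fingerprint (not the sensitive data itself) would be anchored on-chain.

```python
import hashlib
import json
import os

def profile_fingerprint(user_name, emotion_profile, salt):
    """Salted SHA-256 fingerprint of one sensitive emotion-profile record."""
    record = json.dumps({"user": user_name, "profile": emotion_profile},
                        sort_keys=True).encode("utf-8")
    return hashlib.sha256(salt + record).hexdigest()

def store_on_chain(fingerprint):
    """Hypothetical wrapper around the deployed Ethereum smart contract;
    in a real deployment this would submit a transaction (e.g. via web3)."""
    print(f"anchoring fingerprint {fingerprint} on-chain (placeholder)")

# Illustrative record: combined emotions detected for one user.
salt = os.urandom(16)
profile = {"very sad": 1, "very angry": 1, "very fearful": 1,
           "combined": "sense of urgency"}
store_on_chain(profile_fingerprint("patient-001", profile, salt))
```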
2.4. Inference Module
The inference module performs the analysis of the information to determine the profile of the patient's emotions for the prediction resulting from the study of the audio–visual patterns. It consists of an intelligent module using Convolutional Neural Networks and Kalman filters for deep learning, and smart contracts in the Blockchain to trigger the usage of the expert system by the authorized user, avoiding fraudulent use, corruption of the system, or hacking of the system for ransomware. The function of this module is to evaluate the prediction, which consists of the search for emotion patterns within a database generated over the whole time the expert system has been working. This module also uses the existing previous patient analyses to make the expert system smarter every time. Basically, the main job of this module is to permit access and authorize the user through a safe login to the system (smart contract), recognize the audio–visual emotion patterns, and keep the patient data safely in the Blockchain,20,21 with the smart contract allowing the system to retrieve and store patients' and users' data from and to the databases, respectively.
In this sense, the database is a structure based on an inference logic comprising two different templates: one with the "emergency" or "not really an emergency" notation, and another that details the attributes or characteristics of each of these connotations. In a first comparison process with the pre-established patterns within the same database, it is determined in the first instance whether the patient has an emergency emotion profile, an average profile (indecisiveness), or a not really grave profile (a patient pretending to be in an emergency but not really); once this is completed, it is concatenated with a tuple or a list of attributes corresponding to the determined emotion profile. The emotion profile is a logical AND of a combination of basic emotions, as described in the following, except for the average profile, which is not a combination:

Healthcare emergency profile:
sense of urgency (very sad + very angry + very fearful),
with urgency but with self-control (sad + angry + fearful + disgusting);

Average profile:
neutral;

Nonhealthcare emergency profile:
insistent in getting a quick response (sad + fearful + surprising),
no sense of urgency (calm + sad + fearful + disgusting),
tangled person (angry + fearful + disgusting),
not insistent in getting the medical service (calm + sad + fearful + disgusting).
From Fig. 2, to determine a patient's profile, an audio–visual image must go through key characteristic pooling and then be compared with the patterns contained in the centralized database. The user's valid detected emotions are protected in a BCNN environment and can be retrieved at any time when necessary.

Fig. 2 BCNN diagram of the inference machine. (Figure: the user's audio and video are pooled for key characteristics and matched by the CNN recognition process against the reference image patterns ("very" and "regular" emotions) held in the centralized database; the deep learning process uses Kalman filter neural networks; the valid combined emotions detected for the user are stored, as sensitive data, in the decentralized database of the Ethereum Blockchain ecosystem.)

Table 1 Truth Table to Determine the Emergency Profile.

Item | Input I | Input II | Input III | Input IV | Input V | Input VI | Input VII | ... | Input XIII | Output
1    | 1       | 1        | 1         | 0        | 0       | 0        | 0         | 0   | 0          | 1
2    | 0       | 0        | 0         | 1        | 1       | 1        | 1         | 0   | 0          | 1
...  | ...     | ...      | ...       | ...      | ...     | ...      | ...       | ... | ...        | 0
13   | 0       | 0        | 0         | 0        | 0       | 0        | 0         | 0   | 1          | 0
In Table 1, we can see the truth table for the emergency profile. The emergency profile is a combination of basic emotions with two possibilities:

Sense of urgency [very sad (input I) + very angry (input II) + very fearful (input III)], which is case 1.

With urgency but with self-control [sad (input IV) + angry (input V) + fearful (input VI) + disgusting (input VII)], which is case 2.

Item 13 shows the neutral case for input XIII (neutral); the rest of the cases are not significant for this study, since only the combinations for the health emergency profile matter. A minimal sketch of this mapping is given below.
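The mapping of Table 1 from the binary basic-emotion inputs to the emergency output is a logical AND over the two valid combinations. The sketch below follows that rule; the paper only names inputs I–VII and XIII, so the ordering assumed for inputs VIII–XII is a hypothetical placeholder.

```python
# Basic-emotion inputs I-XIII from Table 1, in order.
INPUTS = [
    "very sad", "very angry", "very fearful",        # I-III
    "sad", "angry", "fearful", "disgusting",         # IV-VII
    "surprising", "calm", "very disgusting",         # VIII-X (assumed order)
    "very surprising", "very calm", "neutral",       # XI-XIII (XIII per Table 1)
]

SENSE_OF_URGENCY = {"very sad", "very angry", "very fearful"}
URGENCY_WITH_SELF_CONTROL = {"sad", "angry", "fearful", "disgusting"}

def emergency_output(bits):
    """Output column of Table 1: 1 only if exactly one of the two valid
    combinations is active (logical AND of its inputs, all other inputs 0)."""
    active = {name for name, bit in zip(INPUTS, bits) if bit}
    return int(active in (SENSE_OF_URGENCY, URGENCY_WITH_SELF_CONTROL))

# Items 1, 2 and 13 of Table 1.
item1 = [1, 1, 1] + [0] * 10
item2 = [0, 0, 0, 1, 1, 1, 1] + [0] * 6
item13 = [0] * 12 + [1]
print(emergency_output(item1), emergency_output(item2), emergency_output(item13))  # 1 1 0
```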
2.5. Deep Learning Deployment
In order to test the neural identification task using deep architectures subject to pre-training processes,22,23 a class of systems represented by difference equations is considered as follows:
$$y_s(k) = f\bigl(y_s(k-1), \ldots, y_s(k-n), u(k-1)\bigr), \quad (4)$$
where $y_s(k)$ represents the evolution of the output of the system, which depends on the $n$ previous outputs, and $u(k-1)$ is the value of a signal that is actively interacting with the system. In this paper, a series–parallel-type model is used to carry out the identification process. This configuration uses the output of the actual system for the calculation and updating of the synaptic weights.
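A minimal sketch of this series–parallel identification scheme is given below: the regressors are built from the measured past outputs of the real system (not from the model's own predictions), exactly as Eq. (4) prescribes. To keep the sketch short, an illustrative second-order plant and a least-squares regressor stand in for the paper's deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative second-order plant: y(k) = f(y(k-1), y(k-2), u(k-1)).
def plant(y1, y2, u1):
    return 0.6 * y1 - 0.2 * y2 + 0.5 * np.tanh(u1)

# Simulate the "real system" with a small amount of measurement noise.
N = 500
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = plant(y[k - 1], y[k - 2], u[k - 1]) + 0.01 * rng.standard_normal()

# Series-parallel regressors: measured outputs y(k-1), y(k-2) and input u(k-1).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
t = y[2:]                                      # target y(k)

# Identification step (least squares standing in for the deep net update).
theta, *_ = np.linalg.lstsq(X, t, rcond=None)
print("identification MSE:", np.mean((t - X @ theta) ** 2))
```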
In Fig. 3, the identification process is represented considering a second-order plant, which depends directly on the values of its outputs at two previous instants. Here, the output of the real system is denoted by $y_s$, and the identification value of the model is represented by $y$.
Fig. 3 Parallel–serial topology for system identification.
In a multilayer perceptron (MLP) neural network,24–27 layer $k$ delivers an output vector $h^k$ using the output $h^{k-1}$ obtained from the previous layer, starting this process with the input $x = h^0$. A classic example of an activation function for a neural network is the hyperbolic tangent, whose output layer is given by the following equation:
$$h^k = \tanh\bigl(b^k + W^k h^{k-1}\bigr), \quad (5)$$
where $b^k$ is the displacement (bias) vector and $W^k$ is the weight matrix; the activation function can be changed depending on the problem to be solved. Another very prevalent function is the sigmoid, given by the following equation:
$$\mathrm{sigm}(u) = \frac{1}{1+e^{-u}} = \frac{1}{2}\left(\tanh\left(\frac{u}{2}\right)+1\right). \quad (6)$$
For this study, the hyperbolic tangent function was used for the hidden layers, due to the better results obtained in learning, and a sigmoid-type (softmax) function is used as the activation function for the output layer. The total network output $h^l$ is coupled to the optimization (cost) function $L(h^l, y)$ because this cost is generally convex in $b^l + W^l h^{l-1}$. Therefore, the output layer can be represented by the following equation:
$$h^l_i = \frac{\exp\bigl(b^l_i + W^l_i h^{l-1}\bigr)}{\sum_j \exp\bigl(b^l_j + W^l_j h^{l-1}\bigr)}, \quad (7)$$
where $\sum_i h^l_i = 1$ and $h^l_i$ is positive.
The output $h^l_i$ can be used as an estimator of $P(Y = i \mid x)$. In this case, the negative of the likelihood logarithm is used; therefore, $L(h^l, y) = -\log P(Y = y \mid x) = -\log h^l_y$, whose expected value over $(x, y)$ should be minimized by the cost function. The $\mathrm{sigm}(u)$ transfer function is used to design the neural network with the machine learning architecture (DL) where, by definition, the hidden layers and the output layer are trained differently. This DL neural network is the architecture of the neural network used once the Belief Learning28 statistical methods find a low-dimensional representation for deep learning.
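The forward pass defined by Eqs. (5)–(7) and the negative log-likelihood cost can be written compactly as below; the layer sizes, random weights, and input vector are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(x, params):
    """Eq. (5) for the hidden layers, Eq. (7) for the softmax output layer."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(b + W @ h)         # h^k = tanh(b^k + W^k h^{k-1})
    W, b = params[-1]
    return softmax(b + W @ h)          # h^l, with sum_i h^l_i = 1

def nll(output, y):
    """Cost L(h^l, y) = -log h^l_y (negative log-likelihood)."""
    return -np.log(output[y])

# Illustrative network: 13 binary emotion inputs, two hidden layers, 7 profiles.
sizes = [13, 10, 10, 7]
params = [(0.1 * rng.standard_normal((n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

x = (rng.random(13) < 0.3).astype(float)
out = forward(x, params)
print(out.sum(), nll(out, y=0))
```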
Boltzmann Machines29,30 are a particular class of the so-called Markov random fields in log-linear form, i.e. models for which the energy function is linear in its parameters. To make this model powerful enough to represent complicated distributions (i.e. to go from a limited parametric configuration to a nonparametric one), we consider that some of the variables describing the representation are never observed (they are called hidden). By having more hidden variables (or hidden units), the modeling capacity of the Boltzmann Machine is increased (although its computational cost also increases). Restricted Boltzmann Machines (RBMs) only consider those models in which there are no visible–visible or hidden–hidden connections; see Fig. 4.
The energy function $E(v, h)$ of an RBM is defined as
$$E(v, h) = -b^T v - c^T h - h^T W v, \quad (8)$$
where $W$ represents the weights that connect the hidden layer with the visible layer, and $b, c$ are the offsets of the visible and hidden units, respectively. This translates directly into the following formula for the free energy:
$$F(v) = -b^T v - \sum_i \log \sum_{h_i} e^{h_i (c_i + W_i v)}. \quad (9)$$

Fig. 4 Fully connected Restricted Boltzmann Machine with visible–hidden (v–h) dependency.
Due to the specific structure of RBMs, the hidden and visible units are conditionally independent given one another. Using this property, the conditional distributions can be written as follows:
$$p(h \mid v) = \prod_i p(h_i \mid v), \quad (10)$$
$$p(v \mid h) = \prod_j p(v_j \mid h). \quad (11)$$
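A minimal NumPy sketch of a binary RBM implementing Eqs. (8)–(11), together with one step of contrastive divergence (CD-1, the single Gibbs step used later in Sec. 3), is given below; the layer sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigm = lambda u: 1.0 / (1.0 + np.exp(-u))

class RBM:
    """Binary RBM with energy E(v, h) = -b'v - c'h - h'Wv, Eq. (8)."""
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_hidden, n_visible))
        self.b = np.zeros(n_visible)   # visible offsets
        self.c = np.zeros(n_hidden)    # hidden offsets

    def free_energy(self, v):
        # Eq. (9) for binary h_i: F(v) = -b'v - sum_i log(1 + exp(c_i + W_i v)).
        return -v @ self.b - np.sum(np.log1p(np.exp(self.c + self.W @ v)))

    def sample_h(self, v):
        p = sigm(self.c + self.W @ v)          # p(h_i = 1 | v), Eq. (10)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigm(self.b + self.W.T @ h)        # p(v_j = 1 | h), Eq. (11)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0, lr=0.1):
        """One contrastive-divergence update (a single Gibbs step)."""
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))
        self.b += lr * (v0 - v1)
        self.c += lr * (ph0 - ph1)

rbm = RBM(n_visible=13, n_hidden=8)            # e.g. the 13 binary emotion inputs
v = (rng.random(13) < 0.3).astype(float)
rbm.cd1(v)
print(rbm.free_energy(v))
```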
Deep belief networks (DBNs) are graphical models that learn to extract a deep hierarchical representation from the training data. These models characterize the joint distribution between the observed vector $x$ and the $l$ hidden layers $h^k$ as follows:
$$P(x, h^1, \ldots, h^l) = \left(\prod_{k=0}^{l-2} P(h^k \mid h^{k+1})\right) P(h^{l-1}, h^l), \quad (12)$$
where $x = h^0$, $P(h^k \mid h^{k+1})$ is the conditional distribution of the visible units conditioned on the hidden units of the RBM at level $k$, and $P(h^{l-1}, h^l)$ is the visible–hidden joint distribution of the top-level RBM. This is illustrated in Fig. 5: the deep belief network is formed by RBMs. The principle of greedy layer-wise unsupervised training can be applied to DBNs with RBMs as building blocks, following Bengio et al.31 and Hinton et al.32 The process is carried out as follows (see Fig. 5); a schematic sketch of the procedure is given after the list:
(1) Train the first layer as an RBM that models the original input $x = h^{(0)}$ as its visible layer.
(2) Use this first layer to obtain a representation of the input that will be used as the input data for the second layer. Since an RBM does not contain an inherent output in its structure, there are two options for this representation: it is generally chosen as the mean activations $p(h^{(1)} = 1 \mid h^{(0)})$ or as samples of $p(h^{(1)} \mid h^{(0)})$.
(3) Train the second layer as an independent RBM, taking the transformed data (samples or mean activations) as training examples (for the visible layer of that RBM).
(4) Repeat steps (2) and (3) for the desired number of layers, propagating either the samples or the mean activations upwards.
(5) Carry out a fine-tuning procedure on all the parameters of this deep architecture with respect to an approximation (proxy) of the log-likelihood of the DBN, or with respect to a supervised learning criterion; for the latter, it is necessary to add extra machinery to the network architecture that converts the learned representation into supervised predictions, e.g. a classifier or a linear function.

Fig. 5 Deep belief network formed by RBMs.
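A minimal sketch of the greedy layer-wise procedure (steps (1)–(4)) is given below: each layer is trained as an independent RBM with CD-1, and its mean activations are propagated upwards as the input of the next RBM. The toy data, layer sizes, and learning rate are illustrative assumptions, and the fine-tuning of step (5) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
sigm = lambda u: 1.0 / (1.0 + np.exp(-u))

def train_rbm(data, n_hidden, epochs=1, lr=0.1):
    """Train one binary RBM with CD-1 and return its parameters (W, b, c)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_hidden, n_visible))
    b, c = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigm(c + W @ v0)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            pv1 = sigm(b + W.T @ h0)
            ph1 = sigm(c + W @ pv1)
            W += lr * (np.outer(ph0, v0) - np.outer(ph1, pv1))
            b += lr * (v0 - pv1)
            c += lr * (ph0 - ph1)
    return W, b, c

def pretrain_dbn(data, layer_sizes):
    """Steps (1)-(4): stack RBMs, feeding mean activations to the next level."""
    params, reps = [], data
    for n_hidden in layer_sizes:
        W, b, c = train_rbm(reps, n_hidden)
        params.append((W, b, c))
        reps = sigm(reps @ W.T + c)        # mean activations p(h = 1 | input)
    return params

# Hypothetical toy data: 200 samples of the 13 binary emotion inputs.
data = (rng.random((200, 13)) < 0.3).astype(float)
dbn = pretrain_dbn(data, layer_sizes=[10, 10])
print([W.shape for W, _, _ in dbn])
```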
3. DEEP LEARNING ON BLOCKCHAIN

The developed expert system has a precision of 84.1%, obtained in the semantic search engine. The accuracy can be improved using a more extensive database, but this will reduce the learning speed; therefore, precision is not the only neural network issue. However, the obtained precision is acceptable for this study if we consider that the core of this paper is the semantic neural network performance and not the database performance. Table 2 shows the comparative results using three common secure hash algorithms used in Blockchains: SHA256, used in the Bitcoin protocol; SCRYPT, used in the Litecoin protocol; and X11, used in Dash coin (a small timing sketch for SHA256 and SCRYPT is given after Table 2).
The choice of the architecture to use has some significance. For this, an algorithm for random selection of hyperparameters is implemented (a minimal sketch of such a search is given after Table 3). The pre-training algorithm chosen was the one that gave the best results during the identification of the system, specifically a DBN architecture using 1 Gibbs step, with fine-tuning by means of gradient descent and early stopping with a 5% improvement threshold. Only one pre-training epoch and one fine-tuning epoch of the model were used, considering visible units of continuous type in the interval [0, 1].
Table 2 Blockchain Algorithms Comparative Performance.

Algorithm | Parameters                       | Training (s) | Testing (s)
SHA256    | L = 843, N = 10, M = 1 TB, D = 7 | 35.8         | 29.33
SCRYPT    | L = 843, N = 10, M = 1 TB, D = 7 | 27.2         | 24.5
X11       | L = 843, N = 10, M = 1 TB, D = 7 | 42.5         | 39.1

Notes: Table 2 shows the performance in time of semantic searches, where L represents the number of entries, N the number of hidden nodes per layer, M the size of the database, and D the number of depth layers (deep learning dimension).
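A timing comparison of this kind can be reproduced for the two hash functions available in Python's standard hashlib module (SHA-256 and scrypt; X11 is not in the standard library). The sketch below uses synthetic payloads and arbitrary scrypt cost parameters, so its timings stand in for, and do not reproduce, the semantic-search workload of Table 2.

```python
import hashlib
import os
import time

def time_hash(fn, payloads):
    """Total wall-clock time to hash all payloads with fn."""
    start = time.perf_counter()
    for data in payloads:
        fn(data)
    return time.perf_counter() - start

# Synthetic 1 KB payloads standing in for block contents (assumption).
payloads = [os.urandom(1024) for _ in range(1000)]
salt = os.urandom(16)

sha_t = time_hash(lambda data: hashlib.sha256(data).digest(), payloads)
# scrypt is deliberately memory/CPU hard; these cost parameters are illustrative.
scrypt_t = time_hash(
    lambda data: hashlib.scrypt(data, salt=salt, n=2**10, r=8, p=1), payloads)

print(f"SHA-256: {sha_t:.3f} s, scrypt: {scrypt_t:.3f} s")
```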
Parameters used in the deep learning net for this
study are in Table 3.
Table 3 Hyperparameters of a Deep Network (Ranges).

        | Units per Layer | Layers | Pre-Training Learning Rate | Fine Adjustment Learning Rate
Minimum | 3               | 1      | 0.01                       | 0.1
Maximum | 10              | 7      | 0.1                        | 0.3
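The random hyperparameter selection mentioned above can be sketched as a simple random search over the ranges of Table 3. The selection criterion shown here (a validation error returned by a hypothetical `evaluate` callback that pretrains and fine-tunes the DBN) is an assumption, since the paper does not detail it.

```python
import random

# Ranges taken from Table 3.
RANGES = {
    "units_per_layer": (3, 10),
    "layers": (1, 7),
    "pretrain_lr": (0.01, 0.1),
    "finetune_lr": (0.1, 0.3),
}

def sample_hyperparameters(rng):
    """Draw one random architecture configuration from the Table 3 ranges."""
    return {
        "units_per_layer": rng.randint(*RANGES["units_per_layer"]),
        "layers": rng.randint(*RANGES["layers"]),
        "pretrain_lr": rng.uniform(*RANGES["pretrain_lr"]),
        "finetune_lr": rng.uniform(*RANGES["finetune_lr"]),
    }

def random_search(evaluate, trials=20, seed=0):
    """Keep the configuration with the lowest reported identification error."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(trials):
        cfg = sample_hyperparameters(rng)
        err = evaluate(cfg)                # hypothetical train-and-validate call
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

# Example with a dummy evaluator standing in for the real training loop.
cfg, err = random_search(lambda c: abs(c["layers"] - 4) + c["pretrain_lr"])
print(cfg, err)
```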
The deep belief network helps to minimize the dimensionality effect problem in deep architecture models. It is observed that by using only two identification layers, we can obtain a minimal testing error. Nevertheless, in the medium region of the surface, it is noticed that by increasing the numbers of layers and units, we can reach a new local minimum of the studied error (ideal deep dimension).

Table 3 shows the configurations with different neural network architectures, and the dynamic error performance of the deep learning training is shown in Fig. 6 with a testing database. In this case, the training database and the testing database are the same. In Fig. 6, it can be observed that the minimum error is obtained with 41 layers. Therefore, in this study, for this application, 41 layers are recommended for deep learning.
Table 4 shows a comparative performance between three different neural network structures: extreme learning machine (ELM), MLP, and DL. In this table, we can see that DL has the minimum cost; therefore, it is the optimal solution by a large margin.
Fig. 6 Deep learning dynamic error.
Table 4 Comparative Table for ELM, MLP, and DL.

                    | ELM    | MLP    | DL
Cost/efficiency (s) | 11.012 | 19.175 | 7.965
Fig. 7 Confusion matrix for combined profiles.
The confusion matrix shown in Fig. 7 has a green color visual aid: the darker the green color, the higher the match score, from 0% to 100%. A 100% score represents an accuracy of 100%, i.e. all considered Emotion Recognition (ER) images (actual profiles) match the correct predicted ER image profile. The matching accuracies between the actual profiles and their predicted profiles are in the range of 71.31–87.84%.
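The per-profile accuracies reported in Fig. 7 and Table 5 are the row-normalized diagonal of the confusion matrix. The sketch below shows that computation; the matrix values are illustrative placeholders chosen only to roughly match the reported diagonal, not the actual counts behind Fig. 7.

```python
import numpy as np

# Rows = actual profile, columns = predicted profile (illustrative counts).
profiles = [
    "sense of urgency", "with urgency but with self-control", "neutral",
    "no sense of urgency", "insistent in getting a quick response",
    "not insistent in getting the medical service", "tangled person",
]
cm = np.array([
    [82,  6,  2,  3,  3,  2,  2],
    [ 7, 80,  3,  3,  3,  2,  2],
    [ 2,  2, 88,  3,  2,  2,  1],
    [ 3,  2,  2, 84,  4,  3,  2],
    [ 3,  3,  2,  4, 81,  4,  3],
    [ 4,  3,  3,  5,  5, 76,  4],
    [ 5,  4,  4,  5,  5,  6, 71],
])

# Per-class accuracy: diagonal count divided by the row total (actual class size).
per_class = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(profiles, per_class):
    print(f"{name}: {acc:.2%}")
print(f"overall accuracy: {cm.diagonal().sum() / cm.sum():.2%}")
```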
The scores obtained are acceptable for this experimental study, since there is a clear difference with the confusion range of 11.99–55.79% obtained when actual profiles are matched incorrectly with predicted profiles.

One can observe that the only two valid combined profiles for a healthcare emergency are "sense of urgency" and "with urgency but with self-control".

Table 5 Experimental Results.

Combined Profiles                                   | Accuracy Rate
Healthcare emergency profile (average score = 80.955%)
  Sense of urgency                                  | 82.41%
  With urgency but with self-control                | 79.50%
Average profile
  Neutral                                           | 87.84%
Nonhealthcare emergency profile (average score = 78.26%)
  No sense of urgency                               | 84.42%
  Insistent in getting a quick response             | 81.16%
  Not insistent in getting the medical service      | 76.15%
  Tangled person                                    | 71.31%

In Table 5, we can see that a lower accuracy is observed for the "nonhealthcare emergency profile" (78.26% on average) and a greater accuracy for the "healthcare emergency profile" (80.955%). Since only the combined profiles for a healthcare emergency matter in this study, this is a good result.
This study only tested combined emotion profiles for healthcare emergencies. However, as future work, it could be interesting to try new combined emotion recognitions for healthcare or other applications.

The confusion matrix accuracies shown in Fig. 7 can be improved using more powerful CPU hardware. In this case, the following hardware was used: an Intel Core i7-8700K turbo boost processor, 16 GB of DDR4 2400-MHz RAM, and two GeForce GTX 1070 Ti GPU boards.
4. CONCLUSIONS
The main purpose of this paper is emotion pattern recognition. However, it may be interesting to try regressive elements, such as recurrent and nonrecurrent neural network combinations,33 to predict new health emergencies for patients who have already had one but are still in a critical condition.
In the case of combined emotion classification, it was found that it is sufficient to consider binary inputs to the model (see Table 1) and to identify systems with real-time inputs and outputs. It was also observed that the domain workspace considered for the profiles must be customized for each client; in this case, the results were obtained with a standard hospital expert system shell to achieve better performance through the pre-training stage.
In this study, Artificial Intelligence in a Blockchain ecosystem was incorporated as an efficient alternative for handling healthcare emergencies in different health areas such as hospitals, doctors' clinics, research, etc. Thus, another Blockchain study case is proposed because of Blockchain's capability to make secure transactions.
The RBM Kalman filters proved to have a good performance. However, in this case, the process was performed with 13 basic emotion profiles; when working with hundreds of classifications (profiles), the efficiency (cost in time) may be affected, and therefore an in-depth study must be performed in each case.
Better accuracies can be obtained using more powerful hardware. However, performing a higher-precision process is not really the aim of this study; this will depend on the requirements of each application. In this case, we are just proving the efficiency of the proposed technology.
REFERENCES
1. R. Carreño et al., Computational intelligence
for shoeprint recognition, Fractals 27(4) (2019)
1950080.
2. P. Agrawal, V. Madaan and V. Kumar, Fuzzy rule-
based medical expert system to identify the disor-
ders of eyes, ENT and liver, Int. J. Adv. Intell.
Paradig. 7 (2015) 352–367.
3. C. A. Magni et al., An alternative approach to firms
evaluation: Expert systems and fuzzy logic, Int. J.
Inf. Technol. Decis. Mak. 5(1) (2016) 195–225.
4. W. Yu and X. Li, Automated nonlinear system mod-
eling with multiple fuzzy neural networks and kernel
smoothing, Int. J. Neural Syst. 20(5) (2010) 429–
435.
5. W. Gu et al., Expert system for ice hockey game
prediction: Data mining with human judgment, Int.
J. Inf. Technol. Decis. Mak. 15(4) (2016) 763–789.
6. N. J. Pizzi et al., Expert system approach to
assessments of bleeding predispositions in tonsillec-
tomy/adenoidectomy patients, Adv. Artif. Intell. 27
(1990) 67–83.
7. P. Som, R. Chitturi and A. J. G. Babu, Expert sys-
tems application in manufacturing, Proc. SPIE 786
(1987) 474–479.
8. R. Carreño et al., An IoT expert system shell
in block-chain technology with ELM as inference
engine, Int. J. Inf. Technol. Decis. Mak. 18(1)
(2019) 87–104.
9. F. K. Flores, Internet of Things: Managing wireless
sensor network with rest API for smart homes, in
Theory and Practice of Computation: Proceedings of Workshop on Computation: Theory and Practice WCTP2014 (World Scientific, 2016), pp. 132–142.
10. R. Carreño et al., Parameter estimation space for
unknown internal evolution on IoT domotic systems,
Fractals 28(3) (2020) 2050066.
11. R. J. de Jesus, A. Hernández-Alberto José, J. Avila-Camacho Francisco, M. Stein-Carrillo Juan and A. Meléndez-Ramírez, Sistema sensor para el monitoreo ambiental basado en redes neuronales (Sensor system based on neural networks for environmental monitoring), Ing. Investig. Tecnol. 17 (2016) 211–222.
12. H. Larochelle, Y. Bengio, J. Louradour and P. Lam-
blin, Exploring strategies for training deep neural
networks, J. Mach. Learn. Res. 10 (2009) 1–40.
13. F. Hao et al., Deep learning, Int. J. Semant. Com-
put. 10(3) (2016) 417–439.
14. J. Baxter, A bayesian/information theoretic model
of learning via multiple task sampling, Mach. Learn.
28 (1997) 7–40.
15. V. E. Dahiphale, R. Sathyanarayana and M. M.
Mukhedkar, Computer vision system for driver
fatigue detection, Int. J. Adv. Res. Electron. Com-
mun. Eng. 4(9) (2015) 2331–2334.
16. H. Nguyen et al., Facial emotion recognition using
an ensemble of multi-level convolutional neural net-
works, Int. J. Pattern Recognit. Artif. Intell. 33(11)
(2019) 1940015.
17. R. Carreño et al., Robotic arm with BIoT machine learning system, Fractals 28(4) (2020) 2050088.
18. U. Kamath et al., Convolutional neural networks,
in Deep Learning for NLP and Speech Recognition
(Springer, Cham, 2019), pp. 263–314.
19. H. Khalajzadeh et al., Hierarchical structure based
convolutional neural network for face recognition,
Int. J. Comput. Intell. Appl. 12(3) (2013) 1350018.
20. M. Samaniego and R. Deters, Internet of Smart
Things — IoST: Using blockchain and CLIPS to
make things autonomous, in Proceedings of the
IEEE International Conference on Cognitive Com-
puting (ICCC ) (2017), pp. 9–16.
21. R. Carreño et al., A nonlinear model for a smart
semantic browser bot for a text attribute recogni-
tion, Fractals 28(2) (2020) 2050045.
22. D. Erhan, P.-A. Manzagol, Y. Bengio, S. Bengio
and P. Vincent, The difficulty of training deep
architectures and the effect of unsupervised pre-
training, in Proceedings of the Twelfth International
Conference on Artificial Intelligence and Statistics
(AISTATS’09) (2009), pp. 153–160.
23. G. B. Huang, What are extreme learning machines?
Filling the gap between Frank Rosenblatt’s dream
and John von Neumann’s puzzle, Cogn. Comput.
7(3) (2015) 263–278.
24. J. J. S. Diaz, J. J. D. Fernandez and E. G. Guerrero, Diagnóstico automático del síndrome coronario agudo mediante el uso de un sistema multiagente basado en redes neuronales (Automatic diagnosis of acute coronary syndrome using a multi-agent system based on neural networks), Rev. Colomb. Cardiol. 24 (2017) 255–260.
25. R. Carreño et al., Comparative analysis on nonlinear models for RON gasoline blending using neural networks, Fractals 25(6) (2017) 1750064.
26. Q. Wang and P. Lu, Research on application of arti-
ficial intelligence in computer network technology,
Int. J. Pattern Recognit. Artif. Intell. 33(5) (2019)
1959015.
27. J. J. Sprockel, J. J. Diaztagle, W. Alzate and E. González, Redes neuronales en el diagnóstico del infarto agudo de miocardio (Neural networks for the diagnosis of acute myocardial infarction), Rev. Colomb. Cardiol. 21 (2014) 215–223.
28. S. Kamada et al., Knowledge extraction of adaptive
structural learning of deep belief network for medical
examination data, Int. J. Semant. Comput. 13(1)
(2019) 67–87.
29. D. H. Ackley, G. E. Hinton and T. J. Sejnowski, A
learning algorithm for Boltzmann machines, Cogn.
Sci. 9 (1985) 147–169.
30. G. E. Hinton and T. J. Sejnowski, Learning and
relearning in Boltzmann machines, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations (The MIT Press, Cambridge, 1986), pp. 282–317.
31. Y. Bengio, P. Simard and P. Frasconi, Learning long-
term dependencies with gradient descent is difficult,
IEEE Trans. Neural Netw. 5(2) (1994) 157–166.
32. G. E. Hinton, S. Osindero and Y. Teh, A fast learn-
ing algorithm for deep belief nets, Neural Comput.
18 (2006) 1527–1554.
33. T. Rashid et al., Auto-regressive recurrent neural
network approach for electricity load forecasting,
Int. J. Comput. Intell. 3(1) (2006) 36–44.