Research Article
Medical Image Classification Utilizing Ensemble Learning and
Levy Flight-Based Honey Badger Algorithm on 6G-Enabled
Internet of Things
Mohamed Abd Elaziz,1,2,3 Alhassan Mabrouk,4 Abdelghani Dahou,5 and Samia Allaoua Chelloug6

1 Faculty of Computer Science Engineering, Galala University, Suez 435611, Egypt
2 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, UAE
3 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
4 Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni Suef 62511, Egypt
5 Mathematics and Computer Science Department, University of Ahmed DRAIA, Adrar 01000, Algeria
6 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
Correspondence should be addressed to Mohamed Abd Elaziz; abd_el_aziz_m@yahoo.com and Samia Allaoua Chelloug;
sachelloug@pnu.edu.sa
Received 14 February 2022; Revised 20 March 2022; Accepted 30 April 2022; Published 29 May 2022
Academic Editor: Dalin Zhang
Copyright © 2022 Mohamed Abd Elaziz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Recently, the 6G-enabled Internet of Medical Things (IoMT) has played a key role in the development of functional health systems due to the massive data generated daily by hospitals. Therefore, the automatic detection and prediction of future risks such as pneumonia and retinal diseases are still under research and study. However, traditional approaches have not yielded accurate diagnoses. In this paper, a robust 6G-enabled IoMT framework is proposed for medical image classification with an ensemble learning (EL)-based model. EL is achieved using the MobileNet and DenseNet architectures as a feature extraction backbone. In addition, the developed framework uses a modified honey badger algorithm (HBA) based on Levy flight (LFHBA) as a feature selection method that removes irrelevant features from those extracted by the EL model. To evaluate the performance of the proposed framework, the chest X-ray (CXR) dataset and the optical coherence tomography (OCT) dataset were employed. The accuracy of our technique was 87.10% on the CXR dataset and 94.32% on the OCT dataset. Compared with other well-known and popular algorithms, the proposed method is more accurate and efficient.
1. Introduction
Providing medical diagnoses in real time using modern communication technologies such as the sixth-generation wireless communications standard (6G) is a major topic. Fortunately, early diagnosis of infections that affect sensitive human body sections may assist in restricting disease transmission and safeguarding the afflicted body parts (e.g., cancers spread rapidly throughout the body). Without a proper and timely diagnosis, illnesses may spread rapidly, resulting in a high risk of death [1, 2]. For instance, pneumonia diagnosis and prediction remain difficult in medical imaging and healthcare and are still under research. The fast growth of medical devices, communication technology, cloud computing, and the Internet of Medical Things (IoMT) may improve healthcare. The IoMT is a series of Internet-connected devices that aid in medical operations and activities [3]. The IoMT and 6G technology recently gave the medical field new tools and approaches to enhance illness detection and provide a rapid medical diagnosis.
Furthermore, a vast quantity of data, such as computed tomography (CT) scans, is created daily, which highlights the challenge of analyzing these images in real time to aid in the early detection of disease. Medical images may also be restricted to researchers in the medical field alone due to privacy issues. As a result, the medical field continues to face hurdles in categorizing medical images. Furthermore, CT scans are low-resolution, noisy, and complex to analyze, posing difficulties in terms of disease detection accuracy.
Compared to earlier approaches for early diagnosis, the 6G-enabled IoMT offers a framework for analyzing multiple slices of CT scans in real time, allowing for speedy and reliable responses. Furthermore, various deep learning (DL) models and architectures were developed for integration into the IoMT infrastructure [4, 5]. For instance, the MobileNet architecture is a widely used model that can be easily integrated into embedded systems with low resources. In addition, MobileNet [6] models showed remarkable performance in medical image analysis and disease diagnosis assistance [7, 8]. Recently developed DL models such as DenseNet [9], ShuffleNet [10], NASNet [11], and EfficientNet [12] rely on transfer learning, which boosts their performance by using models pretrained on a large amount of domain-specific data rather than training the same model from scratch on new data [13, 14]. DL has become one of the most popular and widely applied approaches for computer vision tasks, such as improving loss functions [15], feature selection optimization [16], and information interaction perception networks [17].
Furthermore, most DL models applied to image analysis are based on convolutional neural networks (CNNs), which are widely used in medical imaging as robust feature extractors. However, the usage of these models can be costly in terms of model size, the number of extracted and learned features (representation space), and computational complexity (time). In addition, such models are not easily integrated into IoT devices, which is a major challenge for the medical field. Thus, developing an optimized framework for 6G IoT-enabled medical images is promising for the medical field, especially when optimization algorithms are applied to select only the most relevant features extracted from a DL model, reducing the representation space and boosting system performance. In addition, single DL models are less challenging to develop than ensemble-based DL models, whereas the latter can learn more meaningful features from image data, which can be used to improve system performance [18].
Recently, feature selection (FS) methods have been developed to improve the performance of the classification task [16, 19]. For diagnosing medical images, the authors of [20] proposed a modified crow search algorithm as an FS technique to improve Parkinson's disease diagnosis; this algorithm has established its performance against other models. In [21], an opposition-based crow search (OCS) algorithm was introduced as an FS method to determine the relevant features extracted using a DL model. This model has been used to classify brain images, lung cancer, and Alzheimer's disease in an IoMT environment. Although these FS methods have improved the classification performance in the IoMT, they still have some limitations, such as getting stuck in local optima and slow convergence towards the optimal solution. In addition, according to the no-free-lunch theorem, no optimization algorithm can solve all optimization problems (such as FS) with the same efficiency. This motivated us to develop an alternative FS model for the IoMT environment using a modification of a new meta-heuristic technique named the honey badger algorithm (HBA) [22].
In general, the HBA emulates the behaviour of the honey badger in nature when catching its food. Based on these behaviours, HBA has been applied to solve different optimization and engineering problems, as illustrated in [22]. It has also been applied to identify the parameters of proton-exchange membrane fuel cells [23] and to improve sidelobe suppression in antenna radiation patterns [24].
Therefore, this paper proposes a system for improving diagnostic imaging recognition effectiveness in terms of classification accuracy, which will be incorporated into the 6G-enabled IoMT. The system is composed of two parts: feature extraction and feature selection. In the first part, a DL-based ensemble architecture was implemented incorporating two well-known DL models, MobileNet and DenseNet. At this stage, the ensemble model is trained to learn and extract a more complex and meaningful image representation, which is fed to the FS part. In the second part, a novel FS algorithm is proposed by improving HBA with the Levy flight (LF) operator to reduce the representation space of medical images and boost the model performance. Using two real-world datasets, a complete evaluation of the proposed methodology is provided and compared to several state-of-the-art works.
The main contributions of this work are summarized as follows:
(1) We propose an ensemble DL-based model to extract features from medical images collected using IoT devices.
(2) We propose an FS algorithm named the Levy flight-based honey badger algorithm (LFHBA), in which the operators of the Levy flight are used to improve the honey badger algorithm's capabilities.
(3) The proposed FS algorithm selects the most relevant features from the extracted image representations, which maximizes the classification precision and accuracy.
(4) The proposed framework can be integrated into a 6G-enabled IoMT system to reduce human intervention in medical facilities and provide quick diagnostic results.
According to the paper’s structure, Section 2 reviews
recent works on medical imaging and its applications. In
Section 3, we provide the background of honey badger al-
gorithm (HBA) and Levy flight (LF). Section 4 offers a
detailed description of our proposed 6G-enabled IoMT
framework. Section 5 lists the outcomes of image
recognition experiments conducted to validate the proposed framework. Lastly, the conclusion and future directions are presented in Section 6.
2. Related Works
The transfer learning (TL) approach has become increasingly popular in recent years; it improves the efficiency of models, reduces financial costs, and does not require more input data [25, 26]. Recently, TL was used for extracting features to solve the problems of traditional deep learning methods (for more information, see [27]). Extracting features from VGG and ResNet networks, combining bilinear and classification algorithms, and learning them with SVM classifiers produced the best outcomes [28]. Esteva et al. [29] employed a mixture of data-driven technologies and InceptionV3 to train on dermatological images, achieving results on the test set that were comparable to professional dermatologists. Yu et al. [30] proposed a deep residual network-based phased classification technique in which segmentation was used to classify disease. However, it is not a complete solution because the ultimate classification must be done step by step [31]. Guo et al. [32] developed a multi-CNN using an adaptive sample learning technique to address intra-class disagreement and related noise interference.
Instead of developing a CNN from scratch with randomly initialized parameters, researchers used a pretrained CNN and transfer learning to classify medical images over the whole dataset [33]. As a result of this pretraining, the CNN's training time was greatly reduced, yielding an accuracy of 84.8% over five categories. In this case, transfer learning allows models trained on one task or a large dataset to transfer the acquired knowledge to different but related tasks. Lopez et al. [34] used a deep learning-based technique to detect disease early. To address the medical image classification task, they used a modified VGGNet architecture and a transfer learning methodology. On the ISIC Archive dataset, the proposed technique had a sensitivity of 78.66%. In a study by Ayan and Ünver [35], the effectiveness of a CNN model for identifying medical images was evaluated on augmented and non-augmented datasets. They stated that deep learning methods could still be useful when there is not enough data [15]. Using the augmented dataset, the network achieved a greater classification accuracy than the model trained without augmentation.
CNNs have been widely used in medical image analysis in recent years due to their robust feature representation abilities and have shown significant improvements. Yu et al. [30] suggested a multi-stage system for automatic disease recognition in medical images based on an extremely deep residual network. When SVM classifiers were used to collect high-level features from VGG and ResNet networks for bilinear merging, Ge et al. [28] achieved some of the best recognition results on a range of test sets. Following [36], they designed an aggregation of multi-level fully convolutional networks. A multi-CNN collaborative class label lesion recognition framework was designed by Zhang et al. [37]. Their method was more robust in lesion identification, and its usefulness was tested using relevant data. A robust ensemble architecture, constructed using dynamic classifier selection techniques, was employed to detect cancer [38], so that the model can learn more powerful and distinguishing features. A crossnet-based combination of various convolutional networks was suggested as a solution for medical image identification [39] and proven by extensive testing. MobileNet and DenseNet were combined to create a lightweight and efficient classification model [40]. Different from prior methods, they used a well-known classification approach in the lightweight classification algorithm to increase feature discrimination, decrease computational cost, and keep the number of parameters to a minimum.
Recently, Internet of Medical Things (IoMT) technology has proven to be ideal for constructing smart systems capable of accurately diagnosing illness in the same way that specialists do. According to [7], IoMT technology has aided in the creation of critical medical systems. Doctors can now access it in a range of locations, with greater patient diagnostic capacity and without being influenced by subjective aspects. The issue of unbalanced data between unusual and widespread diseases, on the other hand, remains an unresolved challenge for any framework and results in poor performance. In the medical profession, however, the classifier must be highly confident in its accuracy when detecting the type of cancer. According to prior research, accurate detection is crucial for providing patients with the appropriate treatment. As a result, we are attempting to improve medical diagnostics.
A wide variety of real-world complicated optimization problems have been successfully solved using meta-heuristic algorithms. Due to their ability to use a list of candidate solutions rather than a single solution, they are able to traverse the solution space efficiently. As a result, meta-heuristic algorithms outperform other optimization methods. Many meta-heuristic strategies have been developed to help schedule tasks in the IoMT [41]. Some of the existing FS methods suffer from premature convergence and local minima, especially when faced with a large solution space [42]. Often, this limitation results in inefficient task scheduling solutions, which has a negative impact on system performance. A global optimal solution to the IoMT task scheduling problem is therefore urgently needed. Hence, this paper aims to find the best solutions that lead to an increase in the rate of convergence, as shown in the next sections.
3. Preliminaries
This section briefly describes the honey badger algorithm (HBA) and Levy flight (LF) algorithms used for selecting the most relevant features.
3.1. Honey Badger Algorithm. The honey badger algorithm (HBA) is a meta-heuristic optimization approach created by Hashim et al. [22]. HBA may be regarded as a universal optimization method since it includes both exploration and
exploitation stages. The stages of the HBA are outlined mathematically as follows. The HBA starts by forming the initial population of N solutions (i.e., honey badgers) using the following equation [22]:

$$x_i = lb_i + r_1 \times (ub_i - lb_i), \tag{1}$$

where x_i denotes the ith honey badger location, lb_i and ub_i refer to the lower and upper bounds of the search domain, and r_1 is a random value between 0 and 1.
The next step is to compute the intensity, which depends on the prey's attention force and the distance between the prey and the ith badger. I_i represents the prey's scent intensity; if the scent is strong, the prey will move quickly, and vice versa, according to the inverse square law, as stated by the following equation [22]:

$$I_i = r_2 \times \frac{S}{4\pi d_i^2}, \quad S = (x_i - x_{i+1})^2, \quad d_i = x_{prey} - x_i, \tag{2}$$

where S is the attention (source) force and d_i indicates the distance between the prey and the ith badger.
Thereafter, HBA is split into two phases: digging and honey. During the digging phase, a badger adopts a cardioid form. The cardioid movement is formulated using the following equation [22]:

$$x_{new} = x_{prey} + F \times \beta \times I \times x_{prey} + F \times r_3 \times \alpha \times d_i \times \cos(2\pi r_4) \times [1 - \cos(2\pi r_5)], \tag{3}$$
where the density parameter α governs time-varying randomness according to the following equation [22]:

$$\alpha = C \times \exp\left(\frac{-t}{t_{max}}\right), \tag{4}$$
where C is a constant greater than or equal to 1 and t_max denotes the highest number of cycles. In equation (3), x_prey represents the location of the prey that has been discovered to be the optimum so far, i.e., the global optimum location. β ≥ 1 (default = 6) is the honey badger's food-finding ability. r_3, r_4, and r_5 are three distinct randomly initialized parameters ranging from 0 to 1. F is a flag that changes the direction of the search; it is calculated by the following equation [22]:

$$F = \begin{cases} 1, & \text{if } r_6 \le 0.5, \\ -1, & \text{otherwise.} \end{cases} \tag{5}$$
Finally, equation (6) is used when a badger follows a honey bee to arrive at a beehive [22]:

$$x_{new} = x_{prey} + F \times r_7 \times \alpha \times d_i, \tag{6}$$

where x_new represents the badger's latest position, x_prey represents the prey position, and F and α are calculated using equations (5) and (4), respectively. Based on spatial knowledge of d_i, equation (6) shows that a badger searches close to the prey position x_prey discovered so far. At this step, the search is influenced by the time-varying search behaviour α and may be disturbed by the direction flag F. The algorithm's pseudocode is described in Algorithm 1.
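To make the update rules concrete, the following is a minimal NumPy sketch of the HBA loop built from equations (1)-(6); the objective function, bounds, and default parameter values (population size 30, C = 2, β = 6) are illustrative choices rather than the reference implementation of [22].

```python
import numpy as np

def hba(objective, dim, lb, ub, n=30, t_max=100, beta=6.0, C=2.0, seed=0):
    """Minimal honey badger algorithm sketch following equations (1)-(6)."""
    rng = np.random.default_rng(seed)
    x = lb + rng.random((n, dim)) * (ub - lb)            # equation (1): random initial population
    fit = np.array([objective(xi) for xi in x])
    best, f_best = x[fit.argmin()].copy(), fit.min()

    for t in range(1, t_max + 1):
        alpha = C * np.exp(-t / t_max)                    # equation (4): decreasing density factor
        for i in range(n):
            d = best - x[i]                               # d_i: distance to the prey (best solution)
            S = np.sum((x[i] - x[(i + 1) % n]) ** 2)      # attention (source) force
            I = rng.random() * S / (4 * np.pi * np.sum(d ** 2) + 1e-12)   # equation (2)
            F = 1 if rng.random() <= 0.5 else -1          # equation (5): search-direction flag
            r3, r4, r5, r7 = rng.random(4)
            if rng.random() < 0.5:                        # digging phase, equation (3)
                x_new = (best + F * beta * I * best
                         + F * r3 * alpha * d * np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5)))
            else:                                         # honey phase, equation (6)
                x_new = best + F * r7 * alpha * d
            x_new = np.clip(x_new, lb, ub)
            f_new = objective(x_new)
            if f_new <= fit[i]:                           # greedy replacement of the ith badger
                x[i], fit[i] = x_new, f_new
            if f_new <= f_best:                           # keep the global best (x_prey)
                best, f_best = x_new.copy(), f_new
    return best, f_best

# Example: minimize the sphere function in 10 dimensions.
# best, value = hba(lambda z: float(np.sum(z ** 2)), dim=10, lb=-5.0, ub=5.0)
```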
3.2. Levy Flight. Levy flight is a kind of random walk in which the magnitude of the jump is determined by a probability function. We employ the Levy flight as in [43] in our work. When the Aquila identifies a prey region from a high flight, it descends toward the ground and then strikes; this is known as contour flight with a quick glide attack. In this case, the Aquila optimizer closely investigates the selected region of the target prey in preparation for the attack. This behaviour is expressed formally as follows [43]:
$$x_{new} = x_{prey} \times Levy(D) + X_R(t) + (y - x) \times rand, \tag{7}$$

where x_new is the new position produced by the search technique, D denotes the dimensionality of the space, and Levy(D) is the Levy flight distribution derived using equation (8). X_R(t) is a solution selected at random from the range [1, N] at the tth cycle [43].
$$Levy(D) = s \times \frac{u \times \sigma}{|r|^{1/\beta}}, \tag{8}$$

where s is a constant set to 0.01, and u and r are random values between 0 and 1. Equation (9) is used to compute σ [43]:
$$\sigma = \left(\frac{\Gamma(1 + \beta) \times \sin(\pi\beta/2)}{\Gamma((1 + \beta)/2) \times \beta \times 2^{(\beta - 1)/2}}\right), \tag{9}$$

where β is a constant set to 1.5. y and x describe the circular form of the search in equation (7), and they are given by the following equations [43]:
$$y = r \times \cos(\theta), \quad x = r \times \sin(\theta), \quad r = r_1 + U \times D_1, \quad \theta = -w \times D_1 + \theta_1, \quad \theta_1 = \frac{3\pi}{2}. \tag{10}$$
For a given number of search iterations, r_1 takes values from 1 to 20, U is a small value set to 0.00565, D_1 contains integer values ranging from 1 to Dim (i.e., the length of the search space), and w is a small value set to 0.005.
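As a quick illustration, the Levy step of equations (8) and (9) can be computed as in the sketch below; we assume the exponent in equation (8) is 1/β (with β = 1.5), as the surrounding text only defines β.

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, beta=1.5, s=0.01, rng=None):
    """Levy flight step of equations (8) and (9) with beta = 1.5 and s = 0.01."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)))    # equation (9)
    u = rng.random(dim)
    r = rng.random(dim)
    return s * u * sigma / (np.abs(r) ** (1 / beta))                      # equation (8)
```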
4. Proposed Approach
To achieve our objective, we developed a 6G-enabled IoMT
framework for medical image classification. Based on the
principles of the 6G network and the DL architecture, we
developed a technique that consists of three stages, as illustrated in Figure 1: (1) the first stage extracts the representation (features) of the input image using an ensemble DL model; (2) the second stage reduces the dimensionality of the extracted features by selecting the important features using a novel feature selection algorithm based on an improved honey badger algorithm and Levy flight (LFHBA); and (3) the selected features are fed into an ML classifier for the classification task.
4.1. Feature Extraction Using Ensemble Deep Learning.
This section describes the implemented DL architecture based on ensemble learning and transfer learning techniques. The objective of the developed model is to learn and extract medical image representations using two well-known DL models, MobileNet and DenseNet. As shown in Figure 2, the input image to the ensemble model is fed to two functional layers simultaneously. At this stage, each functional layer represents a pretrained model relying on MobileNetV2 and DenseNet169, respectively. Each functional layer's output (learned representations) is fed to a global average pooling layer for dimensionality reduction. After applying the pooling operation on each parallel flow, the output is flattened and concatenated to generate a single feature vector for each input image. To fine-tune the overall network, overcome overfitting, and boost the classification accuracy, a sequential set of layers was stacked on top, including batch normalization (BN), a fully connected (dense) layer, and a dropout layer, as shown in Figure 2. The final output of the ensemble model is generated using a fully connected layer with a single output node that produces the classification probability. Meanwhile, the dense layer before the final output is used to extract the learned image representations and feed them to the FS phase.
Using different image datasets, the ensemble model was fine-tuned to learn and extract feature vectors from input images of size 224 × 224. The DL models MobileNetV2 and DenseNet169 were pretrained on the ImageNet dataset [44]. In our experiments, the ensemble pretrained model was employed and fine-tuned on the datasets containing chest and optical images. As an output, MobileNetV2 and DenseNet169 generate feature vectors of size 1280 and 1664 after flattening, respectively. Thus, the concatenated feature vector is of size 2944. During the fine-tuning of the ensemble model, the MobileNetV2 and DenseNet169 weights were frozen to accelerate the training process.
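The following Keras sketch illustrates one plausible way to assemble the ensemble described above (frozen MobileNetV2 and DenseNet169 backbones, global average pooling, concatenation into a 2944-dimensional vector, and a 128-unit dense layer used as the feature extractor); the exact layer ordering, the layer sizes beyond those stated in the text, and the binary output head are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, DenseNet169

def build_ensemble(input_shape=(224, 224, 3), dropout=0.38, feat_dim=128):
    """Ensemble of frozen MobileNetV2 and DenseNet169 backbones; inputs are assumed
    to be already preprocessed/scaled."""
    inp = layers.Input(shape=input_shape)
    mobile = MobileNetV2(include_top=False, weights="imagenet", input_shape=input_shape)
    dense = DenseNet169(include_top=False, weights="imagenet", input_shape=input_shape)
    mobile.trainable = False          # backbone weights frozen during fine-tuning
    dense.trainable = False

    a = layers.GlobalAveragePooling2D()(mobile(inp))      # 1280-d MobileNetV2 vector
    b = layers.GlobalAveragePooling2D()(dense(inp))       # 1664-d DenseNet169 vector
    x = layers.Concatenate()([layers.Flatten()(a), layers.Flatten()(b)])  # 2944-d vector

    x = layers.BatchNormalization()(x)
    x = layers.Dense(feat_dim, activation="relu", name="feature_layer")(x)  # 128-d representation
    x = layers.Dropout(dropout)(x)
    out = layers.Dense(1, activation="sigmoid")(x)        # single output node (binary CXR case)
    return Model(inp, out)

model = build_ensemble()
# Sub-model returning the 128-d representation that is passed to the FS stage.
extractor = Model(model.input, model.get_layer("feature_layer").output)
```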
Meanwhile, the MobileNetV2 building block consists of an inverted residual block as its core component, which is inspired by bottleneck blocks. The inverted residual block contains two important parts: the depthwise separable convolution block and skip connections used to link the input and output features on the same channels, thus improving the feature representations with low memory usage. The depthwise separable convolution block consists of a 3 × 3 depthwise convolution, BN, an activation function, and a 1 × 1 pointwise convolution, where the order of execution of the layers is as follows: (3 × 3 Conv) → (BN) → (ReLU) → (1 × 1 Conv) → (BN) → (ReLU). Each building block can integrate a depthwise separable convolutional layer with different nonlinearity functions such as ReLU/ReLU6. Meanwhile, the DenseNet169 model has fewer parameters to be optimized and reduces the vanishing gradient problem in large models. DenseNet169 consists of 169 layers with few parameters, where each layer L is connected to every other layer through short connections (L(L + 1)/2 connections in total).
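The layer sequence just described can be sketched as follows; note that this mirrors only the (3 × 3 Conv) → BN → ReLU → (1 × 1 Conv) → BN → ReLU chain given in the text and omits the expansion layer and linear bottleneck of the full MobileNetV2 inverted residual block.

```python
from tensorflow.keras import layers

def depthwise_separable_block(x, filters):
    """(3x3 depthwise Conv) -> BN -> ReLU -> (1x1 pointwise Conv) -> BN -> ReLU,
    with a skip connection when the channel counts match."""
    y = layers.DepthwiseConv2D(kernel_size=3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)                                  # MobileNetV2 uses ReLU6 in practice
    y = layers.Conv2D(filters, kernel_size=1, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    if x.shape[-1] == filters:                            # skip connection on matching channels
        y = layers.Add()([x, y])
    return y
```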
To extract the feature vector from each input image, we used the fine-tuned model with the best classification accuracy on each dataset. The extracted feature vector of size 128 for each image is then fed into the FS process of the proposed framework. The model was fine-tuned for 100 epochs with a batch size of 32 on each dataset to produce the best classification performance. Meanwhile, to update the model's weight and bias parameters, we used the RMSprop optimizer with a learning rate of 1e−4. To overcome overfitting, we used a dropout layer with a probability of 0.38 and data augmentation with the following transformations: random horizontal flip, random zoom, random width shift, random height shift, and random brightness.
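A hedged sketch of this training configuration is shown below, reusing the model from the previous sketch; the augmentation strengths (0.1), the loss function, and the dataset objects train_ds/val_ds are assumptions, while the RMSprop learning rate of 1e−4, 100 epochs, and batch size of 32 follow the values stated above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline mirroring the listed transformations (factors of 0.1 are assumed).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(height_factor=0.1, width_factor=0.1),  # height/width shift
    layers.RandomBrightness(0.1),
])

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
    loss="binary_crossentropy",       # assumed loss for the binary CXR case
    metrics=["accuracy"],
)
# model.fit(train_ds.map(lambda im, y: (augment(im), y)), validation_data=val_ds,
#           epochs=100)               # datasets assumed to be batched with size 32
```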
4.2. The Enhanced HBA as an FS Algorithm. The features extracted by the ensemble model form a high-dimensional set of 128 features that demands high computational complexity and may decrease the effectiveness of a classifier. As a result, these features are input into the feature selection (FS) step, which filters out duplicate and unnecessary features. Accordingly, a novel approach for improving the honey badger algorithm's (HBA) efficiency is presented in this study using Levy flight, which is employed to obtain more sustainable results. When the HBA is unable to obtain a better solution at the current epoch, a more effective search relying on Levy flight is performed to avoid being trapped in a locally optimal solution. The Levy flight search enhances the capacity to perform both global and local searches at the same time. Figure 3 illustrates the various phases of the developed FS approach.
To eliminate unnecessary and duplicated features, the developed FS approach employs a novel Levy flight-based honey badger algorithm (LFHBA). Initially, in the developed LFHBA, the starting locations of the honey badgers in the population are randomly assigned within [0, 1]. Each individual has a dimension equal to the total number of features collected from the ensemble model. If there are n retrieved features and N badgers, each element of x_i is assigned a random value that is thresholded to either 1 or 0 for the fitness calculation. The support vector machine (SVM), which is the most commonly used classifier for such tasks [45], is used to compute the fitness value for a variety of reasons: the SVM solves binary problems by maximizing the margin between both classes around a hyperplane. Hence, the ideal
hyperplane is obtained with the greatest distance to the nearest training sample of any class, resulting in an acceptable class distinction. Following the computation of the fitness value, the best solution is determined and the process of updating the solutions is conducted using the operators of LFHBA.
The developed LFHBA begins with the creation of a collection of N agents X that represent the solutions of the FS problem. To apply this process, the following equation is used:

$$X_{ij} = rand \times (U - L) + L, \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, Dim, \tag{11}$$

where Dim refers to the number of features. As a consequence, the accessible dimensionality is limited to values ranging between L and U. Each X_i is then converted to a binary decision using equation (12), which determines which features are selected:

$$BX_{ij} = \begin{cases} 1, & \text{if } X_{ij} > 0.5, \\ 0, & \text{otherwise.} \end{cases} \tag{12}$$
We next compute the fitness value of each X_i depending on its binary decision BX_i; the objective function is defined as

$$Fit_i = \lambda \times c_i + (1 - \lambda) \times \frac{|BX_i|}{Dim}, \tag{13}$$

where the fraction of selected features is given by |BX_i|/Dim and c_i is the SVM validation loss. Since the SVM is more trustworthy and has lower complexity than other classification methods, it is often utilized here. The parameter λ adjusts the proportion between the efficiency of the classifier's forecasts and the number of selected features.
The LF or HBA operators are employed in the proposed technique to modify solution X_i. This is achieved using the probability P_i associated with each X_i. The LF operator will be employed if P_i is greater than 0.5, as stated by the following equation:

$$X_{i} = \begin{cases} \text{updated using equation (3)}, & \text{if } P_i < 0.5, \\ \text{updated using equation (7)}, & \text{otherwise,} \end{cases} \tag{14}$$

where P_i ∈ [0, 1] is a probability sampling variable employed to balance the operators of LF and HBA while amending the solutions.
The following step is to assess whether or not the stop conditions have been satisfied; if so, the best solution is returned. Otherwise, the updating process is repeated from the start. The pseudocode for the proposed LFHBA is presented in Algorithm 2.
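A sketch of a single LFHBA position update, switching between the HBA digging move of equation (3) and the Levy-flight move of equation (7) as in equation (14), is given below; it reuses the levy() helper from Section 3.2, and the way r_1 and X_R(t) are sampled is our assumption.

```python
import numpy as np

def spiral_terms(dim, U=0.00565, w=0.005):
    """y and x of equation (10), shaping the Levy-flight move of equation (7)."""
    D1 = np.arange(1, dim + 1)
    r1 = np.linspace(1, 20, dim)          # "values from 1 to 20" spread over the dimensions
    r = r1 + U * D1
    theta = -w * D1 + 3 * np.pi / 2
    return r * np.cos(theta), r * np.sin(theta)

def lfhba_move(best, pop, d, alpha, I, beta=6.0, rng=None):
    """Equation (14): HBA digging move (equation (3)) with probability 0.5,
    otherwise the Levy-flight move of equation (7)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = best.size
    F = 1 if rng.random() <= 0.5 else -1
    r3, r4, r5 = rng.random(3)
    if rng.random() < 0.5:                # HBA operator, equation (3)
        return (best + F * beta * I * best
                + F * r3 * alpha * d * np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5)))
    y, x = spiral_terms(dim)              # equation (10)
    x_rand = pop[rng.integers(len(pop))]  # X_R(t): a randomly selected solution
    return best * levy(dim, rng=rng) + x_rand + (y - x) * rng.random()   # equation (7)
```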
4.3. 6G-Enabled IoMT Framework. Figure 4 depicts the proposed 6G-enabled IoMT system. The IoT devices capture medical images initially, and if the user's goal is to train our system, the input medical images are delivered via a 6G network. Fog computing and multi-access edge computing (MEC) servers are major elements of the 6G network design since they reduce latency and bandwidth usage for commonly used software across a broad range of terminals. The information gathered by the MEC can then be sent to a cloud computing provider. In cloud computing, the three core operations remain in place.
(1) Initialize the parameters t_max, N, β, C.
(2) Initialize the population of N solutions.
(3) Evaluate the fitness of each honey badger position x_i using the objective function and assign it to f_i, i ∈ [1, 2, ..., N].
(4) Save the best position x_prey and assign its fitness to f_prey.
(5) repeat
(6)   Update the decreasing factor α using equation (4).
(7)   Calculate the intensity I_i using equation (2).
(8)   for i = 1 to N do
(9)     if r < 0.5 then
(10)      Update the position x_new using equation (3).
(11)    else
(12)      Update the position x_new using equation (6).
(13)    end if
(14)    Evaluate the new position and assign it to f_new.
(15)    if f_new ≤ f_i then
(16)      Set x_i = x_new and f_i = f_new.
(17)    end if
(18)    if f_new ≤ f_prey then
(19)      Set x_prey = x_new and f_prey = f_new.
(20)    end if
(21)  end for
(22) until the iteration criterion (t) has been met.
(23) Return x_prey.
Algorithm 1: Pseudocode of the proposed HBA.
The features of the DL architecture are extracted in the first step, as explained in Section 4.1. In the second step, as described in Section 4.2, we utilize the improved HBA based on LF, named LFHBA, to identify the relevant features. Furthermore, after the classifier has been trained, it can be deployed over many API prediction units, reducing transmission costs.
If the specialist's purpose is to evaluate the disease in a captured image, the classification algorithm is used through the API prediction tools. Time is saved because the API allows the platform's already-trained model to make predictions without retraining, thus decreasing Internet communications. Ultimately, the sender is provided with the most recent diagnosis as well as several assessment measures, such as accuracy, to support the software's predictions.
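For illustration only, a prediction unit of this kind could be exposed as a small web endpoint as sketched below; the framework (Flask), the route name, and the preprocess, extractor, svm, and selected_mask objects are hypothetical placeholders, not components specified by the paper.

```python
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    """Return a diagnosis for an image sent by an IoT device: extract the 128-d ensemble
    representation, keep only the LFHBA-selected features, and classify with the trained SVM."""
    img = preprocess(request.files["image"])            # hypothetical helper: decode, resize to 224x224
    feats = extractor.predict(img[np.newaxis, ...])[0]  # ensemble feature extractor (Section 4.1)
    label = svm.predict(feats[selected_mask][np.newaxis, :])[0]  # trained SVM + LFHBA feature mask
    return jsonify({"diagnosis": str(label)})

# app.run(host="0.0.0.0", port=8080)
```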
5. Experimental Results and Discussion
In this section, we briefly describe the datasets used in this research, followed by the evaluation metrics. Next, the effectiveness of the optimization-based feature selection methods is presented and studied. Finally, we conclude this section by comparing our proposed method with state-of-the-art methods.
5.1. Dataset. Our experiments were evaluated on chest X-ray (CXR) images and retinal optical coherence tomography (OCT) images. The two datasets come from the study of the Guangzhou Women and Children's Medical Center [46]. Figure 5 displays a selection of images from the chosen databases. The CXR (pneumonia) dataset, which is publicly available at https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia, contains in total 5856 normal and pneumonia X-ray images. To provide a fair comparison platform for different systems, the training set, validation set, and test set have been partitioned beforehand. Examples of normal and pneumonia samples can be seen on the right of Figure 5. Details about the first dataset are shown in Table 1.
The other dataset used in this research consists of 84,484 OCT B-scans collected from 4,686 patients at the Shiley Eye Institute of the University of California, San Diego (UCSD), available at https://www.kaggle.com/paultimothymooney/kermany2018. All of the images are classified into four types: Drusen, CNV, DME, and Normal, with 8,866, 37,455, 11,598, and 26,565 images, respectively. These images (acquired with Heidelberg Engineering Spectralis OCT devices, Germany) were all chosen retrospectively from a sample of older patients with no age, gender, or ethnicity restrictions. According to the device's software and guidelines, the final OCT images are produced using a horizontal foveal slice of the original image. Additionally, this dataset of 84,484 OCT B-scans is split into 968 test images and 83,516 training images. To be more specific, the training set contains 37,213 CNV, 11,356 DME, 8,624 Drusen, and 26,323 Normal images, whereas the testing set has 242 CNV, 242 DME, 242 Drusen, and 242 Normal images. Additional information on the database is available in [46].
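Assuming the usual Kaggle directory layout for the CXR data (chest_xray/train and chest_xray/test with one folder per class), the datasets could be loaded as in the following sketch; the directory names, image size, batch size, and label mode shown here are assumptions for illustration.

```python
import tensorflow as tf

# Assumed Kaggle layout: chest_xray/{train,test}/{NORMAL,PNEUMONIA}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(224, 224), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/test", image_size=(224, 224), batch_size=32, label_mode="binary")
```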
5.2. Performance Metrics. The precision, recall, F1-score, accuracy, and balanced accuracy measures are utilized to evaluate the developed approach for recognizing medical images.
Figure 1: The proposed methodology.
Figure 2: The structure of the ensemble model block based on the extracted features of MobileNet and DenseNet.
Let true positive (TP) denote the number of adequately recognized images, and false positive (FP) denote the set of images that have been incorrectly classified as positive. The opposite holds for true negative (TN). Finally, false negative (FN) denotes the number of positive images erroneously classified as negative.
In equation (15), precision is measured as the ratio of correctly predicted positive samples to all samples predicted as positive. Recall is measured as the ratio of correctly predicted positive samples to all samples that should have been predicted as positive, as introduced in equation (16). The F1-score, in equation (17), balances recall and precision for imbalanced data.
Figure 3: Flowchart of the proposed algorithm.
(1) Initialize the parameters of HBA.
(2) Initialize the population of N solutions.
(3) Evaluate the fitness of each honey badger position x_i using the objective function and assign it to f_i, i ∈ [1, 2, ..., N].
(4) Save the best position x_prey and assign its fitness to f_prey.
(5) repeat
(6)   Update the decreasing factor α using equation (4).
(7)   Calculate the intensity I_i using equation (2).
(8)   Update x, y, Levy(D), etc.
(9)   for i = 1 to N do
(10)    if r < 0.5 then
(11)      Update the current solution x_new using equation (3).
(12)    else
(13)      Update the current solution x_new using equation (7).
(14)    end if
(15)    Evaluate the new position and assign it to f_new.
(16)    if f_new ≤ f_i then
(17)      Set x_i = x_new and f_i = f_new.
(18)    end if
(19)    if f_new ≤ f_prey then
(20)      Set x_prey = x_new and f_prey = f_new.
(21)    end if
(22)  end for
(23) until the iteration criterion (t) has been met.
(24) Return the best solution (x_prey).
Algorithm 2: Pseudocode of the proposed LFHBA algorithm.
The ratio of correct predictions across all predictions is known as accuracy (equation (18)). Balanced accuracy is defined as the average accuracy achieved across all categories, as shown in equation (19):

$$\text{Precision} = \frac{TP}{TP + FP}, \tag{15}$$

$$\text{Recall} = \frac{TP}{TP + FN}, \tag{16}$$

$$\text{F1-score} = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}, \tag{17}$$

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{18}$$

$$\text{Balanced accuracy} = \frac{1}{2} \times \left(\frac{TP}{TP + FN} + \frac{TN}{FP + TN}\right). \tag{19}$$
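Equations (15)-(19) correspond directly to standard scikit-learn metrics, as in the sketch below; macro averaging for the multi-class OCT case is our choice and is not specified in the text.

```python
from sklearn.metrics import (accuracy_score, balanced_accuracy_score, f1_score,
                             precision_score, recall_score)

def report(y_true, y_pred):
    """Equations (15)-(19) via scikit-learn; macro averaging handles the multi-class OCT case."""
    return {
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1_score": f1_score(y_true, y_pred, average="macro"),
        "accuracy": accuracy_score(y_true, y_pred),
        "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),
    }
```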
5.3. Results and Analysis. This section summarizes and discusses the findings of the experiments conducted to test the proposed FS optimization method. We begin by evaluating our strategy against other meta-heuristic optimization approaches, using the support vector machine (SVM) classifier to assess each of them. This is followed by a comparison with other existing medical image categorization systems using different transfer learning models, including DenseNet, MobileNet, and the ensemble model. Comparisons are reported for recall, precision, F1-score, balanced accuracy, and accuracy. Lastly, the approach is compared to previously published methods.
The results obtained using the proposed FS method are summarized in the next subsection. To evaluate our approach's efficacy, we compared it against other well-known techniques. The meta-heuristic optimizers include grey wolf optimization (GWO) [47], the Aquila optimizer (AO) [43], hunger games search (HGS) [48], the arithmetic optimization algorithm (AOA) [49], the whale optimization algorithm (WOA) [50], the firefly algorithm (FFA) [51], and the honey badger algorithm (HBA) [22].
These optimization algorithms are assessed using different metrics to solve complex numerical optimization issues. Due to the stochastic nature of the performance measures, the dimensions of both datasets were decreased to 30 and the number of iterations was set to 1000 in all trials. The greater the number of search agents, the more likely it is to find the global optimum. The sample value is fixed to 50 for all tests; the number of search agents can be lowered to reduce the computational cost.
5.3.1. Results of FS Methods. Multiple metrics are utilized to assess the effectiveness of the various optimization strategies, and the F1-score is used to compare the results of each technique. Results for the CXR and OCT datasets can be found in Tables 2 and 3, respectively; the best results in these tables are shown in bold. According to these results, the ensemble model-based LFHBA outperforms GWO, AO, HGS, FFA, WOA, HBA, and AOA.
For the CXR dataset, the results of the proposed method and the other optimizers are shown in Table 2, where the DenseNet, MobileNet, and ensemble models are combined with each optimization algorithm.
the eight optimization algorithms in the table. According to
Cloud Local
6G Network
Fog Computing
FE using Ensemble Learning
FS-based LFHBA
Training
Medical
Image s
Testing
What do
you
need?
IOT Devices
API Prediction
Real Time
Diagnostics
ML Classification Model
MEC Server
IoT Cloud
Figure 4: e developed 6G-enabled IoMT framework diagram.
According to the table, merging the LFHBA algorithm with the ensemble model surpassed the other algorithms with an accuracy score of 87.10%. The AO and HGS optimizers came in second with 86.22%, HBA then obtained the same outcome as GWO (i.e., 86.06%), and the WOA reached 85.10%. AOA and FFA had the worst score, with 84.94%. Our proposed algorithm also had the best result on the precision metric, with 88.56%. The second-best result, 88.38%, belongs to the HGS algorithm, followed by AO with 87.17%. Recall results were also best when using the LFHBA algorithm. The AO and HGS both have the same recall (i.e., 86.22%), followed closely by the GWO and HBA with 86.06%, and the WOA reached 85.10%. Finally, the FFA and AOA algorithms have a lower recall of 84.94%. The proposed LFHBA also outperformed the other algorithms on the F1-score, with 86.19%. The AO optimizer came in second with 85.47%, the HGS optimizer came third with 85.41%, and the FFA obtained the poorest performance with 83.96%. For balanced accuracy, the LFHBA algorithm attained 82.82%. In comparison, AO was ranked second (81.97%), followed by the HGS algorithm with 81.79%, while FFA achieved a score of 80.17%, the worst performance.
The proposed LFHBA algorithm outperformed the other optimization techniques on the OCT dataset, as seen in Table 3. The accuracy of the LFHBA algorithm with the ensemble model classifier was 94.32%, the best performance. The AO and HBA optimizers were at the second level with 93.80%, followed by the AOA algorithm with 93.70%. Finally, FFA achieved a score of 93.29%, the worst performance. For the precision measure, our developed LFHBA approach achieved a score of 94.93%. The AO and HBA algorithms follow with 94.55%, and the AOA algorithm keeps up with them at 94.48%. The WOA and HGS algorithms achieved 94.26% and 94.19%, respectively. Regarding the recall metric for the ensemble model, LFHBA reached 94.32%, the maximum effectiveness; the second-best outcome, 93.80%, is shared by the two optimizers AO and HBA. Our developed LFHBA algorithm was also the best in terms of F1-score, with 94.30%. LFHBA is followed by the AO and HBA methods, which both score 93.78%. Next, AOA, FFA, GWO, and WOA have 93.68%, 93.57%, 93.45%, and 93.35%, respectively. Finally, HGS got the poorest performance with 93.26%. The LFHBA algorithm also achieved a balanced accuracy of 94.32%, the best performance. AO and HBA were at the second level with 93.80%. The AOA and the FFA, in the third and fourth levels, scored 93.70% and 93.60%, respectively. GWO is behind them with 93.49%, and HGS scored 93.29%, the lowest.
From a different viewpoint, the averages of the three models (i.e., DenseNet, MobileNet, and ensemble model) over the eight feature selection optimizers on the two chosen datasets, CXR and OCT, are displayed in Figures 6 and 7, respectively. As shown in Figure 6, the overall average accuracy on the CXR dataset is approximately 85.68% for the LFHBA optimizer, whereas the HBA comes in second with 85.04%. The HGS result (84.88%) is preferable to that of the GWO (84.83%). Furthermore, the AOA and AO algorithms outperform the FFA algorithm, with a success rate of 84.35% for AOA and AO and 83.92% for FFA.
Figure 5: Example medical image samples for the classification task from the two selected databases. (a) Optical coherence tomography (OCT) images: CNV, DME, Normal, and Drusen. (b) Chest X-ray (CXR) images: Normal and Pneumonia.
Besides, the WOA beats the other optimizers in terms of accuracy (83.71%). From a different point of view, the overall balanced accuracy of the LFHBA algorithm is the best (81.34%). It is followed by the HBA (80.51%), the HGS (80.24%), and the GWO (80.23%). The AO achieved 79.62%, while the AOA achieved 79.59%. Finally, the FFA and WOA algorithms obtained 79.01% and 78.76%, respectively. Additionally, the average F1-score of the three models was highest for the LFHBA method at about 84.88%; the HBA algorithm takes second place with 84.14%. Furthermore, the HGS method (83.95%) slightly outperforms the GWO (83.91%), and the HGS algorithm delivers superior results to the AO, AOA, FFA, and WOA optimizers, which reached 83.35%, 83.34%, 82.83%, and 82.59%, respectively. Furthermore, the LFHBA beats the other optimizers in terms of recall. To be more specific, the LFHBA achieved 85.87%, while the HBA achieved 85.04%. Finally, the HGS, GWO, AOA, AO, FFA, and WOA algorithms obtained 84.88%, 84.83%, 84.35%, 84.35%, 83.92%, and 83.71%, respectively. In terms of the precision measure, the LFHBA algorithm delivers superior results to HBA, HGS, GWO, AOA, AO, FFA, and WOA, which achieved 87.03%, 87.02%, 86.88%, 86.52%, 86.48%, 86.21%, and 86.01%, respectively.
On the OCT dataset, as displayed in Figure 7, the overall accuracy of our proposed algorithm is the best (90.33%). It is followed by the HBA (89.91%), the AO (89.88%), and the AOA (89.81%). Furthermore, the WOA beats the remaining optimizers: the WOA achieved 89.67%, while the HGS achieved 89.60%. Finally, the FFA and GWO algorithms obtained 89.39% and 89.15%, respectively. In addition, among the optimization techniques, the LFHBA method achieved the best balanced accuracy of about 90.33%; the HBA algorithm takes second place with 89.91%. Furthermore, the AO method outperforms the AOA algorithm, with 89.88% for AO and 89.81% for AOA, followed by the WOA (89.67%), the HGS (89.60%), the FFA (89.40%), and the GWO (89.15%). The overall average F1-score is approximately 90.13% for the LFHBA, whereas the HBA comes in second with 89.71%. The AO result (89.65%) is preferable to that of the AOA, followed by the WOA (89.45%), the HGS (89.38%), the FFA (89.15%), and the GWO (88.86%). In terms of the recall measure, the LFHBA algorithm delivers superior results to the HBA, AO, AOA, WOA, HGS, FFA, and GWO algorithms, which achieved 89.91%, 89.88%, 89.81%, 89.67%, 89.60%, 89.39%, and 89.15%, respectively. Additionally, the LFHBA beats the other optimizers in terms of precision. To be more specific, the LFHBA achieved 92.14%, while the AO and HBA achieved 91.83% and 91.80%, respectively. Finally, the AOA, WOA, HGS, FFA, and GWO algorithms obtained 91.74%, 91.63%, 91.61%, 91.48%, and 91.36%, respectively.
The average results of the five measures (i.e., accuracy, F1-score, precision, recall, and balanced accuracy) on the CXR and OCT datasets are presented in Figures 8 and 9, respectively, for the eight optimization strategies introduced before. As seen in Figure 8, the LFHBA played a more critical role than the other algorithms. To be more precise, the proposed LFHBA algorithm had an average result of 85.03%, while the HBA had 84.35%. Furthermore, the HGS and GWO algorithms achieved 84.19% and 84.13%, respectively. Then follows the AOA algorithm, which achieved 83.63%, and the AO optimizer with 83.62%. The other optimizers, FFA and WOA, obtained 83.18% and 82.96%, respectively.
On the OCT dataset, Figure 9 demonstrates that the LFHBA method has a significant influence in selecting features; this is evident across the average of the five metrics. The proposed LFHBA algorithm correctly classifies 90.95% of the testing sample when employing the SVM classification model, higher than the accuracy of all the other FS optimization methods. Alternatively, the HBA had the second-best performance at 90.25%, and AO was placed third. These optimizers are followed by the AOA and the WOA, which achieved 90.15% and 90.02%, respectively. Furthermore, the HGS, FFA, and GWO algorithms achieved 89.96%, 89.76%, and 89.54%.
For further analysis, the average results of the five measures for the different optimizers on the three TL models are shown for the CXR and OCT datasets in Figures 10 and 11, respectively. In Figure 10, the average accuracy of the ensemble model was 85.80%, the maximum effectiveness in terms of the accuracy score. The second-best outcome, 85.56%, was obtained by the MobileNet architecture, while DenseNet had the worst performance (i.e., 82.43%). Furthermore, the ensemble method achieved the highest balanced accuracy, at 81.35%. Among the three networks, the MobileNet architecture came in second place with 81.00%, and the DenseNet achieved 77.39%, lower than the others. The ensemble model was also the best in F1-score, with 84.96%, followed by the MobileNet and DenseNet architectures with 84.68% and 81.23%, respectively. In terms of the recall measure, the ensemble model delivered superior results to MobileNet and DenseNet, which reached 85.56% and 82.43%, respectively. Additionally, the ensemble model beats the other networks in terms of precision. To be more specific, the ensemble model achieved 87.88%, while the MobileNet achieved 87.77% and the DenseNet obtained 84.50%. The overall average across the different measures is approximately 85.16% for the ensemble model, whereas the MobileNet model came in second with 84.91%. The DenseNet network had the worst performance across the different metrics (81.60%).
On the OCT dataset, the average results of the different optimizers are displayed in Figure 11. In terms of the accuracy measure, the ensemble model had the best performance (93.67%), followed by the MobileNet (88.47%) and the DenseNet (87.01%). In addition, averaged over the eight optimization techniques, the ensemble method achieved the best balanced accuracy of about 93.67%; the MobileNet model took second place with 88.47%, and the DenseNet method achieved 87.01%, the worst performance. The overall average F1-score is approximately 93.65% for the ensemble model, whereas the MobileNet model comes in second with 88.18% and the DenseNet model obtained 86.66%. In terms of the recall
measure, the ensemble model delivered superior results to the MobileNet and DenseNet models, which reached 88.47% and 87.01%, respectively. Furthermore, the ensemble model beats the other architectures in terms of precision. To be more specific, the ensemble model achieved 94.46%, while the MobileNet and DenseNet achieved 90.88% and 89.76%, respectively. Finally, the average results of the three methods across the different measures were obtained. As seen in the figure, the ensemble learning method played a more critical role than the other networks. To be more precise, the ensemble model had an average result of 93.82%, while the MobileNet had 88.89% and the DenseNet achieved 87.49%.
The average accuracy of the ensemble model, MobileNet, and DenseNet architectures on the two selected datasets over the various optimization techniques (i.e., the eight optimizers introduced before) is shown in Figure 12. The figure shows that the ensemble model outperformed the other classifiers on the accuracy metric. To be more specific, the ensemble model achieved 89.74% accuracy, whereas the MobileNet achieved 87.02% and the DenseNet network achieved 84.72%.
Focusing on the ensemble model, Figure 13 displays the average accuracy of each feature selection approach on the two datasets, CXR and OCT, from a different perspective. On average, the proposed LFHBA optimizer outperformed the others with about 90.59%; the AO method comes in second with 90.01%. The HBA delivered superior results to GWO and HGS, with 89.78% and 89.67%, respectively, and the AOA achieved 89.32%. After that, the FFA and WOA obtained the lowest results, with average accuracies of 89.27% and 89.25%, respectively.
The statistical values are calculated and ranked using the Friedman (FD) test [52]. The FD test is used to assess the differences among the various approaches. Figure 14 compares the proposed algorithm to the other optimization algorithms.
Table 1: Dataset description.

Dataset  Class      Training  Test  Total images
CXR      Normal     1,349     234   1,583
CXR      Pneumonia  3,883     390   4,273
CXR      Total      5,232     624   5,856
OCT      CNV        37,213    242   37,455
OCT      DME        11,356    242   11,598
OCT      Drusen     8,624     242   8,866
OCT      Normal     26,323    242   26,565
OCT      Total      83,516    968   84,484
Table 2: Classification results (%) of each feature selection optimization algorithm on the CXR dataset.

Optimizer  Model           Accuracy  Balanced accuracy  F1-score  Recall  Precision
GWO        DenseNet        82.69     77.86              81.60     82.69   84.48
GWO        MobileNet       85.74     81.24              84.89     85.74   87.89
GWO        Ensemble model  86.06     81.58              85.23     86.06   88.27
WOA        DenseNet        81.25     75.85              79.84     81.25   83.53
WOA        MobileNet       84.78     79.96              83.77     84.78   87.22
WOA        Ensemble model  85.10     80.47              84.17     85.10   87.28
FFA        DenseNet        81.57     76.28              80.22     81.57   83.77
FFA        MobileNet       85.26     80.60              84.33     85.26   87.55
FFA        Ensemble model  84.94     80.17              83.96     84.94   87.33
HGS        DenseNet        82.85     77.99              81.76     82.85   84.74
HGS        MobileNet       85.58     80.94              84.67     85.58   87.94
HGS        Ensemble model  86.22     81.79              85.41     86.22   88.38
AOA        DenseNet        82.21     77.14              80.99     82.21   84.25
AOA        MobileNet       85.90     81.37              85.04     85.90   88.16
AOA        Ensemble model  84.94     80.26              83.99     84.94   87.17
AO         DenseNet        82.21     76.97              80.91     82.21   84.57
AO         MobileNet       84.62     79.91              83.65     84.62   86.78
AO         Ensemble model  86.22     81.97              85.47     86.22   88.09
HBA        DenseNet        83.01     78.03              81.87     83.01   85.16
HBA        MobileNet       86.06     81.75              85.28     86.06   87.97
HBA        Ensemble model  86.06     81.75              85.28     86.06   87.97
LFHBA      DenseNet        83.65     78.97              82.66     83.65   85.49
LFHBA      MobileNet       86.54     82.22              85.78     86.54   88.61
LFHBA      Ensemble model  87.10     82.82              86.19     87.10   88.56
When the proposed method is analyzed in terms of recall, precision, F1-measure, accuracy, and balanced accuracy, it outperforms the others. In terms of recall, the LFHBA has the highest mean rank of 8, followed by AO with a mean rank of 6.50 and HBA with 5.50. HGS and GWO have nearly identical mean levels of 3.75. Finally, AOA, FFA, and WOA are lower than the others, with mean ranks of 3.25, 2.75, and 2.50, respectively. According to the FD test results for precision, LFHBA is also better than the others, with a mean rank of 8, followed by AO and HBA with 5.75 and 5.25, respectively. GWO and HGS have the same mean level of 4, and FFA, AOA, and WOA have the lowest mean rankings. Furthermore, in terms of the F1-score measure, the LFHBA has the best mean rank of 8, and the AO and HBA have the second and third mean ranks of 6.75 and 5.75, respectively. AOA, HGS, and GWO share the same mean level (i.e., 3.50), while WOA and FFA achieved the lowest mean rank of 2.50. The mean ranks according to accuracy for the AO, HBA, HGS, GWO, AOA, FFA, and WOA optimization algorithms are 6.50, 5.50, 3.75, 3.75, 3.25, 2.75, and 2.50, respectively, with LFHBA again ranked first. According to the FD test results for balanced accuracy, LFHBA is again better than the others, with a mean rank of 8, followed by AO with 6.75. HBA has a mean rank of 5.75, whereas AOA, HGS, and GWO have 3.50. Lastly, WOA and FFA have the lowest mean rankings.
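The FD test itself can be reproduced with SciPy as sketched below; the four illustrative score lists are accuracy values taken from Tables 2 and 3 for a subset of optimizers, whereas the ranking reported in the paper uses all eight optimizers and all five metrics.

```python
from scipy.stats import friedmanchisquare

# Illustrative accuracy samples per optimizer, taken from Tables 2 and 3
# (CXR DenseNet, CXR MobileNet, CXR ensemble, OCT ensemble).
lfhba = [83.65, 86.54, 87.10, 94.32]
hba = [83.01, 86.06, 86.06, 93.80]
ao = [82.21, 84.62, 86.22, 93.80]
gwo = [82.69, 85.74, 86.06, 93.49]

stat, p = friedmanchisquare(lfhba, hba, ao, gwo)
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.4f}")
```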
To summarize, for the CXR and OCT datasets, the
proposed LFHBA optimization strategy combined with the
ensemble model obtained the highest accuracy score.
5.3.2. Comparison with State-of-the-Art Methods. This section compares the proposed method with other state-of-the-art medical image classification techniques. Table 4 shows the results of a few important methodologies. The development of high-accuracy technology for medical image classification is a major undertaking, so it is important to compare our strategy to other models that have been tested on the same datasets. Using the CXR and OCT datasets, Table 4 evaluates the performance of several techniques for disease identification.
The CXR dataset was used to compare various advanced methods for pneumonia detection. In [53], the authors examined using generative adversarial networks (GANs) to enrich a dataset by producing chest X-ray data samples. For pneumonia diagnosis, Ayan and Ünver [54] employed two well-known convolutional neural network architectures, Xception and VGG16. In [55], an automatic transfer learning method based on convolutional neural networks using a pretrained DenseNet121 model was proposed.
To evaluate the developed LFHBA's performance on the OCT dataset, four well-known OCT classification algorithms are tested.
Table 3: Classification results (%) of each feature selection optimization algorithm on the OCT dataset.

Optimizer  Model           Accuracy  Balanced accuracy  F1-score  Recall  Precision
GWO        DenseNet        85.85     85.85              85.35     85.85   89.05
GWO        MobileNet       88.12     88.12              87.78     88.12   90.74
GWO        Ensemble model  93.49     93.49              93.45     93.49   94.28
MFO        DenseNet        86.67     86.67              86.38     86.67   89.45
MFO        MobileNet       88.53     88.53              88.25     88.53   91.00
MFO        Ensemble model  93.80     93.80              93.78     93.80   94.55
WOA        DenseNet        87.19     87.19              86.86     87.19   89.84
WOA        MobileNet       88.43     88.43              88.15     88.43   90.79
WOA        Ensemble model  93.39     93.39              93.35     93.39   94.26
FFA        DenseNet        86.26     86.26              85.88     86.26   89.46
FFA        MobileNet       88.33     88.33              87.99     88.33   90.57
FFA        Ensemble model  87.19     87.19              86.87     87.19   89.78
HGS        DenseNet        87.19     87.19              86.87     87.19   89.78
HGS        MobileNet       88.33     88.33              88.02     88.33   90.87
HGS        Ensemble model  93.29     93.29              93.26     93.29   94.19
AOA        DenseNet        86.98     86.98              86.65     86.98   89.66
AOA        MobileNet       88.74     88.74              88.49     88.74   91.08
AOA        Ensemble model  93.70     93.70              93.68     93.70   94.48
AO         DenseNet        87.40     87.40              87.03     87.40   89.95
AO         MobileNet       88.43     88.43              88.13     88.43   90.98
AO         Ensemble model  93.80     93.80              93.78     93.80   94.55
HBA        DenseNet        87.50     87.50              87.17     87.50   90.01
HBA        MobileNet       88.43     88.43              88.18     88.43   90.85
HBA        Ensemble model  93.80     93.80              93.78     93.80   94.55
LFHBA      DenseNet        87.71     87.71              87.44     87.71   90.30
LFHBA      MobileNet       88.95     88.95              88.66     88.95   91.19
LFHBA      Ensemble model  94.32     94.32              94.30     94.32   94.93
The HOG-SVM [56] algorithm extracts features with the HOG descriptor and then learns a multi-class SVM model for OCT categorization. In [46], transfer learning trains a conventional CNN on ImageNet and then fine-tunes the final convolutional layer on the available OCT data. IFCNN [57] applied an iterative fusion approach that classifies OCT images by combining several convolutional feature maps within a CNN. Huang et al. [58] introduced a layer-guided convolutional neural network (LGCNN) to distinguish the normal retina from three prevalent macular diseases.
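For reference, a HOG-plus-SVM baseline in the spirit of [56] can be sketched as follows; the HOG and SVM parameters are illustrative, and fixed-size grayscale OCT images are assumed.

# Sketch of a HOG + multi-class SVM baseline in the spirit of [56]
# (illustrative parameters, not those of the original paper).
import numpy as np
from skimage.feature import hog
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hog_features(images):
    """images: array of shape (n, H, W), grayscale OCT scans of a fixed size."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for img in images
    ])

def train_hog_svm(train_images, train_labels):
    """Fit a standardized multi-class SVM on HOG descriptors."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(hog_features(train_images), train_labels)
    return clf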
Figure 6: The average results of the three models with different FS optimizers on the CXR dataset (accuracy, balanced accuracy, F1-score, recall, and precision).
Figure 7: The average results of the three models with different FS optimizers on the OCT dataset (accuracy, balanced accuracy, F1-score, recall, and precision).
Figure 8: Average results of the five measures with different optimizers on the CXR dataset.
Figure 9: Average results of the five measures with different optimizers on the OCT dataset.
Figure 10: Average results (%) of the different optimizers for the three models on the CXR dataset (accuracy, balanced accuracy, F1-score, recall, and precision).
Figure 11: Average results (%) of the different optimizers for the three models on the OCT dataset (accuracy, balanced accuracy, F1-score, recall, and precision).
The bottom line is that our strategy can remove superfluous features from the high-dimensional medical image representations produced by the convolutional neural networks (CNNs). However, the framework's fundamental drawback is its complexity, both in time and memory. The next steps therefore include reducing this complexity and improving the efficiency of the proposed framework; in the future, additional augmentation procedures can also be investigated to improve the method's performance.
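As a concrete illustration of the feature-removal step, the subset chosen by the optimizer can be represented as a binary mask over the CNN feature columns; applying it directly shrinks the representation that must be stored and classified. This is a simplified sketch with a hypothetical random mask, not the framework's actual code.

# Simplified illustration: a binary mask chosen by the feature-selection
# optimizer removes superfluous CNN feature columns, reducing memory use.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 2048))   # e.g., pooled CNN features per image
mask = rng.random(2048) > 0.5              # hypothetical selected feature subset

selected = features[:, mask]               # keep only the selected columns
print(f"{features.nbytes / 1e6:.1f} MB -> {selected.nbytes / 1e6:.1f} MB")
print("kept", int(mask.sum()), "of", mask.size, "features")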
The goal of this paper is to present a method for improving the performance of the honey badger algorithm (HBA) using Levy flight (LF). When the HBA cannot improve the current solution within a given number of iterations, it performs an LF-based search to avoid being trapped in a locally optimal solution. The LF search improves the ability to perform global and local searches simultaneously.
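One common way to generate such LF steps is Mantegna's algorithm; the sketch below shows how an LF perturbation might be applied to a stagnating solution. It is illustrative only: the step scale and the exact update rule are our assumptions, not the paper's LFHBA equations.

# Illustrative Levy-flight perturbation using Mantegna's algorithm; the update
# rule and step scale are assumptions, not the paper's exact LFHBA equations.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step of dimension `dim` (Mantegna's algorithm)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def perturb_if_stagnant(position, best, stalled, step_scale=0.01, rng=None):
    """If the search has stalled, jump with a Levy-flight step around the best."""
    if not stalled:
        return position
    return best + step_scale * levy_step(position.size, rng=rng) * (position - best)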
6. Conclusion
This paper details how to classify medical images using 6G technology for IoMT systems. The motivation for this study is that the advantages of 6G over previous generations of wireless communication have recently attracted a lot of interest in industry and academia. Moreover, although the medical image classification task has grown rapidly, current methods still struggle to achieve good results because of the similarity in the visual characteristics of the image data. Additionally, we have incorporated the IoMT into our system to help physicians and patients make fast and advanced diagnoses of diseases worldwide. Building the system requires a training phase before the cloud-center classification techniques can be used. In the training phase, the medical images are collected by IoT devices and transmitted to the cloud center over the 6G network to decrease delay and bandwidth consumption. Then, the features of these images are extracted using an ensemble learning model, which was improved to create more relevant feature vector representations that are beneficial to the medical field. A meta-heuristic feature selection technique that combines the honey badger algorithm (HBA) with Levy flight is then used to pick the important features. The proposed algorithm converges quickly, avoids being trapped in local optima, and effectively balances the exploration and exploitation stages. The model was evaluated on the CXR and OCT datasets, and the findings reveal that the proposed optimization strategy outperforms existing feature selection techniques. Furthermore, comparisons with several cutting-edge methods showed that the proposed approach performs better. However, the proposed model still has limitations in terms of time and memory. In the future, we plan to address these issues with a multi-objective feature selection approach. Combining different classification algorithms is also a promising research direction, since it might help enhance the performance of present methods.
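To illustrate the ensemble feature-extraction stage, the sketch below concatenates globally pooled features from MobileNet and DenseNet backbones. MobileNetV2 and DenseNet121 are used here as stand-ins, and the code is a simplified PyTorch outline rather than the authors' implementation.

# Simplified sketch of ensemble feature extraction: concatenate pooled features
# from MobileNet and DenseNet backbones (not the authors' exact implementation).
import torch
import torch.nn as nn
from torchvision import models

class EnsembleFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
        densenet = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        self.mobile_backbone = mobilenet.features   # outputs 1280 channels
        self.dense_backbone = densenet.features     # outputs 1024 channels
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        a = self.pool(self.mobile_backbone(x)).flatten(1)   # (N, 1280)
        b = self.pool(self.dense_backbone(x)).flatten(1)    # (N, 1024)
        return torch.cat([a, b], dim=1)                     # (N, 2304)

# Usage: features = EnsembleFeatureExtractor()(batch_of_images); a feature
# selector such as LFHBA can then prune the 2304 concatenated columns.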
Figure 12: Average accuracy (%) of the three models on the two datasets.
Figure 13: The accuracy (%) of the ensemble model with different optimizers on the two selected datasets.
Figure 14: The mean rank of the Friedman test for each optimizer (recall, precision, F1-score, accuracy, and balanced accuracy).
Table 4: Accuracy results of the state-of-the-art methods (the best result for each dataset is labeled in bold).
Dataset Model Accuracy (%) Reference
CXR
DGGAN 84.19 [53]
VGG16 87.00 [54]
DenseNet121 86.80 [55]
EL + LFHBA 87.10 Ours
OCT
HOG-SVM 77.80 [56]
IFCNN 87.30 [46]
LGCNN 89.90 [58]
EL + LFHBA 94.32 Ours
Data Availability
The data used to support the findings of this study are
available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest
regarding the publication of this paper.
Acknowledgments
The authors would like to thank the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University for its support. This research was funded by Princess Nourah
bint Abdulrahman University Researchers Supporting
Project number (PNURSP2022R239), Princess Nourah bint
Abdulrahman University, Riyadh, Saudi Arabia.
References
[1] A. Ahmad, “Breast cancer statistics: recent trends,” Advances
in Experimental Medicine & Biology, pp. 1–7, 2019.
[2] M. M. A. Eid, A. N. Z. Rashed, A. A.-M. Bulbul, and E. Podder,
“Mono-rectangular core photonic crystal fiber (mrc-pcf) for
skin and blood cancer detection,” Plasmonics, vol. 16, no. 3,
pp. 717–727, 2021.
[3] B. Jin, Y. Zhao, and Y. Liang, “Internet of things medical
image detection and pediatric renal failure dialysis compli-
cated with respiratory tract infection,” Microprocessors and
Microsystems, vol. 83, Article ID 104016, 2021.
[4] A. Pouttu, F. Burkhardt, C. Patachia et al., “6G white paper on
validation and trials for verticals towards 2030’s,” 6G Research
Visions, vol. 4, 2020.
[5] W. Wang, F. Liu, X. Zhi, T. Zhang, and C. Huang, “An in-
tegrated deep learning algorithm for detecting lung nodules
with low-dose ct and its application in 6g-enabled internet of
medical things,” IEEE Internet of ings Journal, vol. 8, no. 7,
pp. 5274–5284, 2021.
[6] A. G. Howard, M. Zhu, B. Chen et al., “MobileNets: Efficient
Convolutional Neural Networks for mobile Vision Applica-
tions,” 2017, https://arxiv.org/pdf/1704.04861.
[7] D. D. A. Rodrigues, R. F. Ivo, S. C. Satapathy, S. Wang,
J. Hemanth, and P. P. R. Filho, “A new approach for clas-
sification skin lesion based on transfer learning, deep learning,
and iot system,” Pattern Recognition Letters, vol. 136, pp. 8–15,
2020.
[8] A. Mabrouk, R. P. D. Redondo, and M. Kayed, “Seopinion:
summarization and exploration of opinion from e-commerce
websites,” Sensors, vol. 21, no. 2, p. 636, 2021.
[9] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger,
“Densely connected convolutional networks,” in Proceedings
of the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 4700–4708, Hawaii, HW, USA, July 2017.
[10] X. Zhang, X. Zhou, M. Lin, and J. Sun, “Shufflenet: an ex-
tremely efficient convolutional neural network for mobile
devices,” in Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pp. 6848–6856, Salt Lake City,
UT, USA, June 2018.
[11] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning
transferable architectures for scalable image recognition,” in
Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 8697–8710, Salt Lake City, UT, USA,
June 2018.
[12] M. Tan and Q. Le, “Efficientnet: rethinking model scaling for
convolutional neural networks,” in Proceedings of the Inter-
national Conference on Machine Learning. PMLR, pp. 6105–
6114, California, CA, USA, June 2019.
[13] A. Ignatov, A. Romero, H. Kim, and R. Timofte, “Real-time
video super-resolution on smartphones with deep learning,
mobile ai 2021 challenge: Report,” in Proceedings of the IEEE/
CVF Conference on Computer Vision and Pattern Recognition,
pp. 2535–2544, Nashville, TN, USA, June 2021.
[14] J. Liu, N. Inkawhich, O. Nina, and R. Timofte, “Ntire 2021 multi-
modal aerial view object classification challenge,” in Proceedings
of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp. 588–595, Nashville, TN, USA, June 2021.
[15] C. Wen, X. Yang, K. Zhang, and J. Zhang, “Improved loss
function for image classification,” Computational Intelligence
and Neuroscience, vol. 2021, Article ID 6660961, 8 pages, 2021.
[16] A. Farki, Z. Salekshahrezaee, A. M. Tofigh, R. Ghanavati,
B. Arandian, and A. Chapnevis, “COVID-19 diagnosis using
capsule network and fuzzy C-means and mayfly optimization
algorithm,” BioMed Research International, vol. 2021, Article
ID 6660961, 11 pages, 2021.
[17] W. Wang, Y. Hu, Y. Luo, and X. Wang, “Medical image
classification based on information interaction perception
mechanism,” Computational Intelligence and Neuroscience,
vol. 2021, Article ID 8429899, 12 pages, 2021.
[18] Z. Rahman, M. S. Hossain, M. R. Islam, M. M. Hasan, and
R. A. Hridhee, “An approach for multiclass skin lesion
classification based on ensemble learning,” Informatics in
Medicine Unlocked, vol. 25, Article ID 100659, 2021.
[19] H. Adel, A. Dahou, A. Mabrouk et al., “Improving crisis
events detection using distilbert with hunger games search
algorithm,” Mathematics, vol. 10, no. 3, p. 447, 2022.
[20] D. Gupta, S. Sundaram, A. Khanna, A. Ella Hassanien, and
V. H. C. De Albuquerque, “Improved diagnosis of Parkinson’s
disease using optimized crow search algorithm,” Computers &
Electrical Engineering, vol. 68, pp. 412–424, 2018.
[21] R. J. S. Raj, S. J. Shobana, I. V. Pustokhina, D. A. Pustokhin,
D. Gupta, and K. Shankar, “Optimal feature selection-based
medical image classification using deep learning model in
internet of medical things,” IEEE Access, vol. 8, Article ID
58006, 2020.
[22] F. A. Hashim, E. H. Houssein, K. Hussain, M. S. Mabrouk, and
W. Al-Atabany, “Honey badger algorithm: new metaheuristic
algorithm for solving optimization problems,” Mathematics
and Computers in Simulation, vol. 192, pp. 84–110, 2022.
[23] E. Han and N. Ghadimi, “Model identification of proton-
exchange membrane fuel cells based on a hybrid convolu-
tional neural network and extreme learning machine opti-
mized by improved honey badger algorithm,” Sustainable
Energy Technologies and Assessments, vol. 52, Article ID
102005, 2022.
[24] A. Durmus, “Novel metaheuristic optimization algorithms for
sidelobe suppression of linear antenna array,” in Proceedings
of the 2021 5th International Symposium on Multidisciplinary
Studies and Innovative Technologies (ISMSIT), pp. 291–294,
Ankara, Turkey, October 2021.
[25] A. Mabrouk, R. P. D. Redondo, and M. Kayed, “Deep
learning-based sentiment classification: a comparative sur-
vey,” IEEE Access, vol. 8, Article ID 85616, 2020.
[26] W. Li, R. Huang, J. Li et al., “A perspective survey on deep
transfer learning for fault diagnosis in industrial scenarios:
theories, applications and challenges,” Mechanical Systems
and Signal Processing, vol. 167, Article ID 108487, 2022.
[27] M. A. Morid, A. Borjali, and G. Del Fiol, “A scoping review of
transfer learning research on medical image analysis using
imagenet,” Computers in Biology and Medicine, vol. 128,
Article ID 104115, 2021.
[28] Z. Ge, S. Demyanov, B. Bozorgtabar et al., “Exploiting local and
generic features for accurate skin lesions classification using
clinical and dermoscopy imaging,” in Proceedings of the 2017
IEEE 14th International Symposium on Biomedical Imaging
(ISBI 2017), pp. 986–990, Melbourne, Australia, April 2017.
[29] A. Esteva, B. Kuprel, R. A. Novoa et al., “Dermatologist-level
classification of skin cancer with deep neural networks,”
Nature, vol. 542, no. 7639, pp. 115–118, 2017.
[30] L. Yu, H. Chen, Q. Dou, J. Qin, and P.-A. Heng, “Automated
melanoma recognition in dermoscopy images via very deep
residual networks,” IEEE Transactions on Medical Imaging,
vol. 36, no. 4, pp. 994–1004, 2017.
[31] D. Gutman, N. C. Codella, E. Celebi et al., “Skin Lesion
Analysis toward Melanoma Detection: A challenge at the
International Symposium on Biomedical Imaging (Isbi) 2016,
Hosted by the International Skin Imaging Collaboration
(Isic),” 2016, https://arxiv.org/abs/1605.01397.
[32] Y. Guo, A. S. Ashour, L. Si, and D. P. Mandalaywala, “Multiple
convolutional neural network for skin dermoscopic image
classification,” in Proceedings of the 2018 IEEE International
Symposium on Signal Processing and Information Technology
(ISSPIT), pp. 365–369, Louisville, KY, USA, December 2018.
[33] J. Kawahara, A. BenTaieb, and G. Hamarneh, “Deep features
to classify skin lesions,” in Proceedings of the 2016 IEEE 13th
International Symposium on Biomedical Imaging (ISBI),
pp. 1397–1400, Prague, Czech Republic, April 2016.
[34] A. R. Lopez, X. Giro-i Nieto, J. Burdick, and O. Marques,
“Skin lesion classification from dermoscopic images using
deep learning techniques,” in Proceedings of the 2017 13th
IASTED International Conference on Biomedical Engineering
(BioMed), pp. 49–54, Innsbruck, Austria, February 2017.
[35] E. Ayan and H. M. Ünver, “Data augmentation importance for
classification of skin lesions via deep learning,” in Proceedings of
the 2018 Electric Electronics, Computer Science, Biomedical
Engineerings’ Meeting (EBBT), pp. 1–4, Istanbul, April 2018.
[36] Z. Yu, X. Jiang, F. Zhou et al., “Melanoma recognition in
dermoscopy images via aggregated deep convolutional fea-
tures,” IEEE Transactions on Biomedical Engineering, vol. 66,
no. 4, pp. 1006–1016, 2019.
[37] J. Zhang, Y. Xie, Q. Wu, and Y. Xia, “Medical image classi-
fication using synergic deep learning,” Medical Image Anal-
ysis, vol. 54, pp. 10–19, 2019.
[38] S. Pathan, K. Gopalakrishna Prabhu, and
P. C. Siddalingaswamy, “Automated detection of melanocytes
related pigmented skin lesions: a clinical framework,” Bio-
medical Signal Processing and Control, vol. 51, pp. 59–72, 2019.
[39] Z. Yu, F. Jiang, F. Zhou et al., “Convolutional descriptors
aggregation via cross-net for skin lesion recognition,” Applied
Soft Computing, vol. 92, Article ID 106281, 2020.
[40] L. Wei, K. Ding, and H. Hu, “Automatic skin cancer detection
in dermoscopy images based on ensemble lightweight deep
learning network,” IEEE Access, vol. 8, Article ID 99633, 2020.
[41] S. Niu, M. Liu, Y. Liu, J. Wang, and H. Song, “Distant domain
transfer learning for medical imaging,” IEEE Journal of Biomedical
and Health Informatics, vol. 25, no. 10, pp. 3784–3793, 2021.
[42] E. El-Shafeiy, K. M. Sallam, R. K. Chakrabortty, and
A. A. Abohany, “A clustering based swarm intelligence
optimization technique for the internet of medical things,” Expert
Systems with Applications, vol. 173, Article ID 114648, 2021.
[43] L. Abualigah, D. Yousri, M. Abd Elaziz, A. A. Ewees,
M. A. A. Al-qaness, and A. H. Gandomi, “Aquila optimizer: a
novel meta-heuristic optimization algorithm,” Computers &
Industrial Engineering, vol. 157, Article ID 107250, 2021.
[44] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning
for image recognition,” in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pp. 770–778, Las
Vegas, NV, USA, July 2016.
[45] Z. Manbari, F. AkhlaghianTab, and C. Salavati, “Hybrid fast
unsupervised feature selection for high-dimensional data,”
Expert Systems with Applications, vol. 124, pp. 97–118, 2019.
[46] D. S. Kermany, M. Goldbaum, W. Cai et al., “Identifying
medical diagnoses and treatable diseases by image-based deep
learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
[47] E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary grey
wolf optimization approaches for feature selection,” Neuro-
computing, vol. 172, pp. 371–381, 2016.
[48] Y. Yang, H. Chen, A. A. Heidari, and A. H. Gandomi, “Hunger
games search: visions, conception, implementation, deep anal-
ysis, perspectives, and towards performance shifts,” Expert
Systems with Applications, vol. 177, Article ID 114864, 2021.
[49] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, and
A. H. Gandomi, “The arithmetic optimization algorithm,”
Computer Methods in Applied Mechanics and Engineering,
vol. 376, Article ID 113609, 2021.
[50] S. Mirjalili and A. Lewis, “The whale optimization algorithm,”
Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
[51] X.-S. Yang, “Firefly algorithm, Lévy flights and global opti-
mization,” in Research and Development in Intelligent Systems
XXVI, pp. 209–218, Springer, New York, NY, USA, 2010.
[52] J. Derrac, S. García, D. Molina, and F. Herrera, “A practical
tutorial on the use of nonparametric statistical tests as a
methodology for comparing evolutionary and swarm intel-
ligence algorithms,” Swarm and Evolutionary Computation,
vol. 1, no. 1, pp. 3–18, 2011.
[53] A. Madani, M. Moradi, A. Karargyris, and T. Syeda-Mah-
mood, “Chest x-ray generation and data augmentation for
cardiovascular abnormality classification,” Medical Imaging
2018: Image Processing, vol. 10574, Article ID 105741, 2018.
[54] E. Ayan and H. M. Ünver, “Diagnosis of pneumonia from
chest x-ray images using deep learning,” in Proceedings of the
2019 Scientific Meeting on Electrical-Electronics & Biomedical
Engineering and Computer Science (EBBT), pp. 1–5, Istanbul,
Turkey, April 2019.
[55] M. Salehi, R. Mohammadi, H. Ghaffari, N. Sadighi, and R. Reiazi,
“Automated detection of pneumonia cases using deep transfer
learning with paediatric chest x-ray images,” British Journal of
Radiology, vol. 94, no. 1121, Article ID 20201263, 2021.
[56] P. P. Srinivasan, L. A. Kim, P. S. Mettu et al., “Fully automated
detection of diabetic macular edema and dry age-related
macular degeneration from optical coherence tomography
images,” Biomedical Optics Express, vol. 5, no. 10,
pp. 3568–3577, 2014.
[57] L. Fang, Y. Jin, L. Huang, S. Guo, G. Zhao, and X. Chen,
“Iterative fusion convolutional neural networks for classifi-
cation of optical coherence tomography images,” Journal of
Visual Communication and Image Representation, vol. 59,
pp. 327–333, 2019.
[58] L. Huang, X. He, L. Fang, H. Rabbani, and X. Chen, “Au-
tomatic classification of retinal optical coherence tomography
images with layer guided convolutional neural network,” IEEE
Signal Processing Letters, vol. 26, no. 7, pp. 1026–1030, 2019.