Control Systems and Optimization Letters, Vol. 1, No. 3, 2023
ISSN: 2985-6116, DOI: 10.59247/csol.v1i3.33
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Analysis of the Influence of Number of Segments
on Similarity Level in Wound Image Segmentation
Using K-Means Clustering Algorithm
Furizal 1,* , Syifa’ah Setya Mawarni 2, Son Ali Akbar 3, Anton Yudhana 4, Murinto Kusno 5
1,2 Master Program of Informatics, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
3,4 Department of Electrical Engineering, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
5 Department of Informatics, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
Email: 1 furizal.id@gmail.com, 2 syifa.mawarni19@gmail.com, 3 sonali@ee.uad.ac.id, 4 eyudhana@ee.uad.ac.id,
5 murintokusno@tif.uad.ac.id
*Corresponding Author
Abstract—This study underscores the importance of wound
image segmentation in the medical world to speed up first aid
for victims and increase the efficiency of medical personnel in
providing appropriate treatment. Although the body has a
protective function from external threats, the skin can be easily
damaged and cause injuries that require rapid detection and
treatment. This study used the K-Means clustering algorithm
to segment the external wound image dataset consisting of
three types of wounds, namely abrasion, puncture, and
laceration. The results showed that K-Means clustering is an
effective method for segmenting wound images. The greater
the number of segments used, the better the quality of the
resulting segmentation. However, it is necessary to take into
account the specific characteristics of each type of wound and
the number of segments used in order to choose the most
suitable segmentation method. Evaluation using various
metrics, such as VOI, GCE, MSE, and PSNR, provides an
objective assessment of the quality of segmentation. The results
showed that abrasion wounds were easier to segment
compared to puncture wounds and lacerations. In addition, the
size of the image file also affects the speed of program
execution, although it is not always absolute depending on the
characteristics of the image.
Keywords—Image segmentation, K-Means, Wound imagery,
Image processing
I. INTRODUCTION
The body has a protective function from various external
threats, but skin tissue is easily damaged and causes injury.
This vulnerability exposes the skin to various types of injuries,
including animal or human bites, accidental injuries, wounds
from sharp or blunt objects, and damage from temperature and
radiation [1]. A wound is damage to the protective function of
the skin accompanied by loss of epithelial tissue continuity,
with or without damage to other tissues such as muscles, bones,
and nerves, caused by factors such as pressure, incisions, and
surgical injuries [2].
External wounds have three levels of severity, namely
level 1 (superficial), level 2 (superficial partial-thickness),
and level 3 (full thickness). Wound segmentation can
provide a clear picture of how serious the wound is by
looking at the color depth: the deeper the color, the higher
the severity of the wound.
Object detection is an identification process: typically, a
camera captures the object and the resulting image is then
analyzed. According to the Central Statistics Agency, there
were around 272 injuries due to natural disasters and fires in
Jakarta during 2019-2021. Object detection applied to wounds
can help speed up first aid for victims and increase the
efficiency of medical personnel in providing treatment to
victims.
Object detection using segmentation has been widely
applied in the medical field. Riska Nanda et al. segmented
medical images of Fibroadenoma Mammae (FAM) objects,
i.e., benign breast tumors, using the Sobel algorithm [3]. The
FAM ultrasound images were in DICOM format, with Matlab
R2009a as the tool. Sobel-based segmentation determined the
edges of FAM objects well, but edge detection was difficult
for images of lower resolution. Beyond breast imaging, deep-
learning-based lung image segmentation by Muhammad
Hussein achieved the best accuracy with the U-Net
EfficientNetB0 architecture, with a Dice value of 0.967 and a
Jaccard of 0.937, while the MobileNet U-Net was fastest [4].
The dataset consisted of X-ray images: 80 normal lungs and
58 with abnormalities. Other research on lung segmentation
was conducted by Sintha Syaputri and Zulkarnain using the
Active Contour algorithm with Matlab R2015a [5]. The result
of that study is that Active Contour can segment well,
depending on the parameter values.
Medical image segmentation has a variety of algorithms
that can be used, one of which is K-Means clustering. K-
Means is an unsupervised learning algorithm that solves
clustering problems in less time than Fuzzy C-Means [6].
Segmentation of medical images using the K-Means
algorithm has been carried out by R. S. D. Wijaya et al [7]
and Dwiza Riana et al [8] on cervical cancer objects and W.
M. Baihaqi et al on leukocyte nucleus segmentation [9].
Medical image segmentation has generally centered on vital
internal objects; even though the skin is the body's outermost
layer and shield, segmentation of wound objects is rare.
External wound segmentation helps wound treatment be
faster and more precise. Identification of wounds becomes
clearer so that appropriate treatment is given.
This research contributes an analysis of external wound
image segmentation using K-Means clustering and of the
effect of the number of segments on the segmentation results.
II. METHOD
This study began with the collection of wound image
datasets. The dataset obtained is processed in a preprocessing
stage that includes resizing, cropping, noise correction, and
conversion of the color images to grayscale. Once
preprocessing is complete, the grayscale images are
segmented using the K-Means clustering method. The
research flow can be seen in Fig. 1.
Fig. 1. Research flow
A. Image Dataset
The image data used is external wound image data. The
image dataset was taken from Kaggle with a total of nine
data consisting of three classes, namely abrasion wounds,
lacerations, and puncture wounds. Each class has three
image data. External wound image datasets are quite difficult
to find, since hospitals did not allow such data to be collected;
the dataset was therefore obtained from Kaggle, one of the
platforms that provides external wound image datasets.
B. Preprocessing
1) Image Resize
The image data obtained are not of uniform size. For the
segmentation process to work optimally, the images must be
resized to equalize their dimensions. In this study each image
was resized to 640x640 pixels. This size is a middle ground:
small enough to keep the segmentation process efficient, yet
large enough that the wound remains clearly visible.
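As an illustration of this step (not the authors' code), the sketch below uses a plain NumPy nearest-neighbour scheme to bring any image to 640x640 pixels; in practice a library resizer such as OpenCV's cv2.resize would be used instead.

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an HxW or HxWxC image.

    A minimal stand-in for a production resizer (e.g. cv2.resize).
    """
    h, w = img.shape[:2]
    # Map each output row/column back to its nearest source index.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Stand-in for a wound photo of arbitrary size.
photo = np.zeros((480, 720, 3), dtype=np.uint8)
print(resize_nearest(photo, 640, 640).shape)  # (640, 640, 3)
```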
2) Image Color Conversion
The original image has a red, green, and blue color base
which is often also called an RGB image. An RGB image has
a total pixel intensity of 24 bits; each of its color channels has
an 8-bit intensity value, which means 2^8 = 256 color degrees
(0 to 255). A grayscale image carries a single intensity
channel, ranging from 0 (black) up to 255 (white) [10].
Converting RGB images to grayscale simplifies them so that
the segmentation process becomes more efficient.
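A minimal sketch of this conversion, assuming NumPy and the standard ITU-R BT.601 luminance weights (the paper does not state which weighting it uses):

```python
import numpy as np

def rgb_to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image (0-255) to a single-channel
    grayscale image using ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb @ weights).astype(np.uint8)

white = np.array([[[255, 255, 255]]], dtype=np.uint8)
print(rgb_to_grayscale(white))  # [[255]] -- pure white keeps full intensity
```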
3) Noise Repair
Original images are usually contaminated with noise
introduced during capture, transmission, or downloading.
Noise is a disturbance in the digital data received by the image
receiver that can degrade image quality [11]. The choice of
noise model and filter is based on measured image quality
values. There are four image quality measurements, namely Mean Square
Error (MSE) [12]-[13], Peak Signal Noise Ratio (PSNR)
[14]-[16], Variation of Information (VOI) [17]-[18], and
Global Consistency Error (GCE) [19]-[21]. The smaller the
MSE value, the better the recovery value from the noise
effect, and in PSNR the higher the value, the better because
it is considered close to the original image.
a) Mean Square Error (MSE) and Peak Signal Noise Ratio
(PSNR)
MSE and PSNR are expressed mathematically using
Equations 1 and 2.
MSE = (1 / (m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} [f'(i, j) − f(i, j)]²   (1)

PSNR = 10 · log10(Max² / MSE)   (2)

Where m and n are the column and row sizes of the image,
f(i, j) is the original image and f'(i, j) is the recovered
image. Max is the maximum possible pixel value; for 8 bits,
Max = 2^8 − 1 = 255.
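Equations 1 and 2 can be computed directly; the following NumPy sketch (illustrative, not the authors' code) evaluates both metrics for a pair of grayscale images:

```python
import numpy as np

def mse(original: np.ndarray, restored: np.ndarray) -> float:
    """Mean squared error between two equally sized grayscale images."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    error = mse(original, restored)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / error)

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 10, dtype=np.uint8)  # every pixel off by 10
print(mse(a, b))             # 100.0
print(round(psnr(a, b), 3))  # ~28.131 dB
```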
b) Variation of Information (VOI) and Global Consistency
Error (GCE)
VOI, or variation of information, measures the amount of
information lost and gained between two clusterings, while
GCE, or global consistency error, is, according to D. Martin,
an error measure that quantifies the consistency between
human partitions and algorithm partitions. VOI and GCE are
mathematically expressed using Equations 3 and 4.
󰇛󰇜 󰇛󰇜󰇛󰇜󰇛󰇜
(3)
Control Systems and Optimization Letters, Vol. 1, No. 3, 2023
134
Furizal, Analysis of the Influence of Number of Segments on Similarity Level in Wound Image Segmentation Using K-Means Clustering
Algorithm
Where FSx and FSy denote two K-Means segmentations,
E(FS) is the entropy of a segmentation, E(FSx, FSy) is the
joint entropy of the two images, and M(FSx, FSy) is the
mutual information between the two images.
GCE(Sx, Sy) = (1/n) · min{ Σ_i E(Sx, Sy, p_i), Σ_i E(Sy, Sx, p_i) }   (4)

Where Sx and Sy show the two segmentations, p_i represents
the position of pixel i, n is the number of pixels, and E(·) is
the local refinement error at each pixel.
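For illustration, Equation 3 can be evaluated from the joint label distribution of two segmentations. The NumPy sketch below (not the authors' implementation; natural-log entropies are assumed) returns 0 for identical labelings:

```python
import numpy as np

def variation_of_information(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """VOI between two label images: E(A) + E(B) - 2*M(A, B),
    with E the entropy and M the mutual information (natural log).
    Uses the equivalent form 2*E(A, B) - E(A) - E(B)."""
    a, b = seg_a.ravel(), seg_b.ravel()
    # Joint distribution of the two labelings.
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)
    joint /= a.size
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    return 2 * entropy(joint.ravel()) - entropy(pa) - entropy(pb)

x = np.array([[0, 0], [1, 1]])
print(variation_of_information(x, x))  # 0.0 for identical segmentations
```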
Based on these measurements, the result closest to the
original image was obtained using uniform noise with a 3x3
median low-pass filter, which produces clear images with
minimal noise contamination so that segmentation becomes
more effective.
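The 3x3 median low-pass filter used here can be sketched as follows (a plain NumPy illustration with reflected borders; a library routine such as cv2.medianBlur would normally be used):

```python
import numpy as np

def median_filter_3x3(img: np.ndarray) -> np.ndarray:
    """3x3 median low-pass filter over a grayscale image;
    border pixels are handled by reflection padding."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            # Median of the 3x3 neighbourhood centred on (i, j).
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # an isolated salt-noise pixel
                  [10, 10, 10]], dtype=np.uint8)
print(median_filter_3x3(noisy))  # the 255 spike is replaced by 10
```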
C. K-Means Segmentation
The K-Means clustering algorithm is a centroid model,
that is, a model that uses centroids to create clusters. A
centroid is the midpoint of a cluster and is used to calculate
the distance of an object to each cluster; a data object is
included in the cluster whose centroid is nearest.
The K-Means algorithm uses an iterative process to form
the clusters. It takes the desired number of clusters as input
and produces the final centroid points as output. The K-Means
method randomly chooses k patterns as the starting centroids
and updates them with Equation 5.

v_ij = (1/N_i) · Σ_{k=1}^{N_i} x_kj   (5)

Where v_ij is the centroid (average) of the i-th cluster for the
j-th variable, N_i is the number of data points that are
members of the i-th cluster, i and k are indices of the cluster
and the data, j is the index of the variable, and x_kj is the
value of the k-th data point in the cluster for the j-th variable.
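A compact sketch of this procedure on a grayscale image, using plain NumPy Lloyd iterations with the centroid update of Equation 5 (illustrative only; the tiny image values are hypothetical):

```python
import numpy as np

def kmeans_segment(gray: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Segment a grayscale image into k intensity clusters with K-Means:
    assign each pixel to its nearest centroid, then recompute each
    centroid as the mean of its members (Equation 5)."""
    pixels = gray.reshape(-1, 1).astype(np.float64)
    rng = np.random.default_rng(seed)
    # Random initial centroids chosen from the pixels themselves.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Nearest-centroid assignment for every pixel.
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)  # Equation 5
    # Replace each pixel by its centroid value to visualise the segments.
    return centroids[labels].reshape(gray.shape).astype(np.uint8)

img = np.array([[0, 10, 200], [5, 210, 250]], dtype=np.uint8)
seg = kmeans_segment(img, k=2)
print(len(np.unique(seg)))  # 2 distinct intensity levels remain
```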
III. RESULTS AND DISCUSSION
In the results of this discussion, wound images were
segmented using the K-Means algorithm by testing nine
different wound images. These wound images are divided into
three types, namely abrasion, puncture, and laceration
wounds. All images used have the .jpg extension. This test aims
to test the extent of the ability of the K-Means algorithm to
segment the three types of wound images used as test data.
This segmentation was tested in three test categories,
namely image segmentation with 2 segments, 3 segments,
and 4 segments. The selection to test three different numbers
of segments, namely 2, 3, and 4, in testing K-Means
segmentation algorithm is aimed at exploring how well the
algorithm can adapt to various levels of complexity in
wound images. By including different numbers of segments,
we can evaluate the algorithm's performance in scenarios
with few objects (2 segments), moderate complexity (3
segments), or higher complexity (4 segments). This helps us
understand whether the algorithm exhibits strong
generalization capabilities across different situations and
ensures that the segmentation results can be optimized
according to different task contexts. The original image and
segmented image can be seen in Table 1.
Table 1 displays the original image and segmented
images that have been processed using the K-Means
algorithm. The original image is the image before the
segmentation process, while the segmented image is the
image that has been processed to divide the pixels by color.
Every color image consists of a combination of colors
formed from the pixels in the image, creating a complete
image. There are several color models commonly used in
image processing, such as the RGB color model that uses
red, green, and blue color channels, the CMYK color model
used in printing, the HSB color model that describes the
hue, saturation, and brightness of colors, and the CIE-XYZ
color model used in lighting science and industry. The
original images, with their RGB color distributions plotted as
3D graphics, are shown in Fig. 2.
When viewed with the naked eye, the segmentation of
fresh wounds is most clearly visible when the image is divided
into 4 segments, while the 2-segment results tend to be less
than optimal because some wounds are not detected properly:
dark skin is consistently identified as a wound, and blood
splashes on uninjured skin are likewise identified as wounds
in the 2-segment results. However, to better see the extent
of the performance of the K-Means algorithm in segmenting
images, it is based on four indicators, namely the value of
VOI, GCE, MSE, and PSNR. The values of these four
indicators can be seen in full in Table 2.
Table 2 provides complete information regarding image
segmentation that has been carried out by providing VOI,
GCE, MSE, and PSNR values. In addition, it also calculates
the program execution time during image processing. The
file size of each image is also displayed to examine its
influence on processing time. The effect of file size on program
execution time is shown in Fig. 3.
Fig. 3 indicates that the image file size strongly affects the
program's execution time (ET). However, this relationship is
not absolute, because some larger files execute faster than
smaller ones. For example, the 3rd abrasion wound image has
a smaller file size than the 1st and 2nd laceration wound
images, yet requires a greater ET. The highest ET, 2.0206 s,
was obtained for the 3rd abrasion wound image with 4
segments, while the lowest ET, 0.0653 s, was obtained for the
2nd puncture wound image with 2 segments.
In addition, Fig. 3 also shows how the number of image
segments affects ET. The more segments used, the slower
the program execution. In this test, images divided into 4-
segments require more ET when compared to 2-segments
and 3-segments. Although this too is not absolute, the tests
show a nearly consistent pattern: only two tests contradict it,
namely the ET of the 2nd puncture wound image with 3
segments (0.1936 s), which exceeds its ET at 4 segments
(0.1922 s), and the ET of the 3rd puncture wound image with
3 segments (0.5314 s), which exceeds its ET at 4 segments
(0.5214 s). Then, to capture the difference in information, the
percentage of inconsistent pixels, the average squared
difference in pixel intensity, and the ratio of the peak signal
power in the original image to the distortion power in the
segmented image, the VOI, GCE, MSE, and PSNR values are
depicted in Fig. 4.
Table 1. Wound image segmentation results: for each of the three images
per wound type (abrasion, puncture, and laceration), the original image and
its 2-segment, 3-segment, and 4-segment segmentation results.
Fig. 2. RGB color distribution 3D graphics: (a)-(c) abrasion wound images
1-3, (d)-(f) puncture wound images 1-3, (g)-(i) laceration wound images 1-3
Table 2. Wound image segmentation evaluation results: VOI, GCE, MSE, PSNR, and execution time (ET, in seconds) for each image and number of segments.

2 Segments:

| Wound type | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|------------|-------|----------------|--------|--------|--------|--------|--------|
| Abrasion | 1 | 138 | 6.3218 | 0.9712 | 207.21 | 24.967 | 0.6873 |
| Abrasion | 2 | 121 | 7.0982 | 0.9795 | 289.05 | 23.521 | 0.4026 |
| Abrasion | 3 | 140 | 6.8201 | 0.9746 | 263.45 | 23.924 | 0.7553 |
| Puncture | 1 | 50 | 7.1391 | 0.977 | 423.51 | 21.862 | 0.1052 |
| Puncture | 2 | 42 | 7.381 | 0.9769 | 534.6 | 20.851 | 0.0653 |
| Puncture | 3 | 112 | 7.3688 | 0.9792 | 398.88 | 22.122 | 0.2992 |
| Laceration | 1 | 239 | 7.9809 | 0.9842 | 664.89 | 19.903 | 0.8723 |
| Laceration | 2 | 258 | 7.7414 | 0.9852 | 722.26 | 19.544 | 0.7508 |
| Laceration | 3 | 89 | 7.9178 | 0.99 | 981.04 | 18.214 | 0.1492 |

3 Segments:

| Wound type | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|------------|-------|----------------|--------|--------|--------|--------|--------|
| Abrasion | 1 | 138 | 6.0785 | 0.9613 | 114.41 | 27.546 | 0.9599 |
| Abrasion | 2 | 121 | 6.9543 | 0.9726 | 178.15 | 25.623 | 0.7555 |
| Abrasion | 3 | 140 | 6.621 | 0.9687 | 155.89 | 26.203 | 1.0154 |
| Puncture | 1 | 50 | 6.988 | 0.9738 | 270.34 | 23.812 | 0.1595 |
| Puncture | 2 | 42 | 7.1679 | 0.9692 | 248.18 | 24.183 | 0.1936 |
| Puncture | 3 | 112 | 7.0344 | 0.9774 | 228.79 | 24.537 | 0.5314 |
| Laceration | 1 | 239 | 7.4285 | 0.9811 | 360.78 | 22.558 | 0.8219 |
| Laceration | 2 | 258 | 7.3257 | 0.9796 | 366.46 | 22.491 | 0.6773 |
| Laceration | 3 | 89 | 7.476 | 0.9784 | 453.45 | 21.566 | 0.1678 |

4 Segments:

| Wound type | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|------------|-------|----------------|--------|--------|--------|--------|--------|
| Abrasion | 1 | 138 | 5.8753 | 0.9571 | 83.353 | 28.922 | 1.6309 |
| Abrasion | 2 | 121 | 6.7307 | 0.9612 | 117.99 | 27.412 | 1.2398 |
| Abrasion | 3 | 140 | 6.5326 | 0.9631 | 110.64 | 27.692 | 2.0206 |
| Puncture | 1 | 50 | 6.8136 | 0.9696 | 159.16 | 26.112 | 0.1656 |
| Puncture | 2 | 42 | 6.9836 | 0.9694 | 168.91 | 25.854 | 0.1922 |
| Puncture | 3 | 112 | 6.6558 | 0.9693 | 151.04 | 26.34 | 0.5214 |
| Laceration | 1 | 239 | 7.4377 | 0.9703 | 263.23 | 23.927 | 1.2572 |
| Laceration | 2 | 258 | 7.3184 | 0.9754 | 254.32 | 24.077 | 0.9817 |
| Laceration | 3 | 89 | 7.3754 | 0.9755 | 291.63 | 23.483 | 0.2588 |
Fig. 4a shows the VOI value for each image segmentation.
The VOI results show that the greater the number of
segments, the lower the VOI value tends to be, meaning that
more segments yield a higher similarity between the original
image and the segmented image, and vice versa. For the
percentage of pixels that are inconsistent between the
segmented image and the original image, Fig. 4b shows that
a greater number of segments tends to produce a lower GCE
value, indicating greater consistency between the two images.
In Fig. 4c, MSE describes the average squared difference
between the pixel intensities of the segmented image and the
original image; the lower the value, the more similar the two
images. Like GCE, MSE shows the same trend: a greater
number of segments tends to produce a lower MSE value,
again indicating that more segments make the two images
more similar.
Meanwhile, the PSNR value in Fig. 4d compares the peak
signal power of the original image with the distortion power
of the segmented image. The highest average PSNR is
obtained for the abrasion wound images and the lowest for
the laceration wound images; the higher the value, the more
similar the two images. This means that abrasion wound
images are segmented better by K-Means than puncture and
laceration wound images, which is also supported by the
VOI, GCE, and MSE values of each segmentation.
Fig. 3. Impact of file size on execution time (ET)
Fig. 4. (a) VOI value; (b) GCE value; (c) MSE value; and (d) the PSNR value of each image segmentation
IV. CONCLUSION
The use of the K-Means algorithm in segmenting wound
images through testing on three types of wounds (abrasion,
puncture, and laceration) with variations in the number of
segments showed mixed results. Segmentation with a larger
number of segments tends to produce better results in terms
of similarity and consistency between the segmented image
and the original image. Evaluation using evaluation metrics
such as VOI, GCE, MSE, and PSNR provides an objective
assessment of segmentation quality, with abrasion wound
imagery generally providing better results. However, it is
necessary to pay attention to the characteristics of each type
of wound and the number of segments in choosing the
appropriate method of segmentation. These conclusions
provide important insights into the development of image
segmentation techniques for medical and analytical
applications. The results of this study need to be
investigated further to provide maximum results in wound
image segmentation. It is recommended to explore optimal
preprocessing techniques, develop hybrid methods that
combine multiple segmentation algorithms, and involve
external evaluation with better ground truth to obtain
accurate validation and a better understanding of
segmentation quality. In addition, the development of
clinical applications using wound image segmentation is
also an important step to test its usefulness and effectiveness
in medical practice.
REFERENCES
[1] G. Zaman Khan et al., "An Efficient Deep Learning Model based Diagnosis System for Lung Cancer Disease," 2023 4th International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1-6, 2023, https://doi.org/10.1109/iCoMET57998.2023.10099357.
[2] C. N. Elangwe, S. N. Morozkina, R. O. Olekhnovich, A. Krasichkov, V. O. Polyakova, and M. V. Uspenskaya, "A Review on Chitosan and Cellulose Hydrogels for Wound Dressings," Polymers (Basel), vol. 14, no. 23, p. 5163, 2022, https://doi.org/10.3390/polym14235163.
[3] P. Cho and H. J. Yoon, "Evaluation of U-net-based image segmentation model to digital mammography," Medical Imaging 2021: Image Processing, vol. 11596, pp. 593-599, 2021, https://doi.org/10.1117/12.2581401.
[4] M. M. Hasan, M. Md. Jahangir Kabir, M. R. Haque, and M. Ahmed, "A Combined Approach Using Image Processing and Deep Learning to Detect Pneumonia from Chest X-Ray Image," 2019 3rd International Conference on Electrical, Computer & Telecommunication Engineering (ICECTE), pp. 89-92, 2019, https://doi.org/10.1109/ICECTE48615.2019.9303543.
[5] E. Jangam and A. C. S. Rao, "Segmentation of lungs from chest X rays using firefly optimized fuzzy C-means and level set algorithm," International Conference on Recent Trends in Image Processing and Pattern Recognition, pp. 303-311, 2019, https://doi.org/10.1007/978-981-13-9184-2_27.
[6] C. Zhang et al., "White Blood Cell Segmentation by Color-Space-Based K-Means Clustering," Sensors, vol. 14, no. 9, pp. 16128-16147, 2014, https://doi.org/10.3390/s140916128.
[7] C. Sun et al., "Gastric histopathology image segmentation using a hierarchical conditional random field," Biocybernetics and Biomedical Engineering, vol. 40, no. 4, pp. 1535-1555, 2020, https://doi.org/10.1016/j.bbe.2020.09.008.
[8] D. Riana, S. Hadianti, S. Rahayu, Frieyadie, M. Hasan, I. N. Karimah, and R. Pratama, "Repomedunm: A new dataset for feature extraction and training of deep learning network for classification of pap smear images," International Conference on Neural Information Processing, pp. 317-325, 2021, https://doi.org/10.1007/978-3-030-92307-5_37.
[9] W. M. Baihaqi, C. R. A. Widiawati, and T. Insani, "K-Means Clustering Based on Otsu Thresholding For Nucleus of White Blood Cells Segmentation," Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi), vol. 4, no. 5, pp. 907-914, 2020, https://doi.org/10.29207/resti.v4i5.2309.
[10] K. Liu, W. Wang, and J. Wang, "Pedestrian Detection with Lidar Point Clouds Based on Single Template Matching," Electronics, vol. 8, no. 7, p. 780, 2019, http://dx.doi.org/10.3390/electronics8070780.
[11] B. Karthik, T. K. Kumar, S. P. Vijayaragavan, and M. Sriram, "Removal of high density salt and pepper noise in color image through modified cascaded filter," Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 3901-3908, 2021, https://doi.org/10.1007/s12652-020-01737-1.
[12] U. Sara, M. Akter, and M. S. Uddin, "Image Quality Assessment through FSIM, SSIM, MSE and PSNR—A Comparative Study," Journal of Computer and Communications, vol. 7, no. 3, pp. 8-18, 2019, https://doi.org/10.4236/jcc.2019.73002.
[13] T. Badriyah, N. Sakinah, I. Syarif, and D. R. Syarif, "Segmentation Stroke Objects based on CT Scan Image using Thresholding Method," 2019 First International Conference on Smart Technology & Urban Development (STUD), pp. 1-6, 2019, https://doi.org/10.1109/STUD49732.2019.9018825.
[14] A. R. W. Putri, "Classification of Breast Cancer Using the Digital Mammogram Method," JATISI (Jurnal Teknik Informatika dan Sistem Informasi), vol. 9, no. 4, pp. 2752-2761, 2022, https://doi.org/10.35957/jatisi.v9i2.2100.
[15] C. Huang, X. Li, and Y. Wen, "An OTSU image segmentation based on fruitfly optimization algorithm," Alexandria Engineering Journal, vol. 60, no. 1, pp. 183-188, 2021, https://doi.org/10.1016/j.aej.2020.06.054.
[16] H. Jia, X. Peng, W. Song, C. Lang, Z. Xing, and K. Sun, "Multiverse Optimization Algorithm Based on Lévy Flight Improvement for Multithreshold Color Image Segmentation," IEEE Access, vol. 7, pp. 32805-32844, 2019, https://doi.org/10.1109/ACCESS.2019.2903345.
[17] L. Guo, P. Shi, L. Chen, C. Chen, and W. Ding, "Pixel and region level information fusion in membership regularized fuzzy clustering for image segmentation," Information Fusion, vol. 92, pp. 479-497, 2023, https://doi.org/10.1016/j.inffus.2022.12.008.
[18] B. Jai Shankar, K. Murugan, A. Obulesu, S. Finney Daniel Shadrach, and R. Anitha, "MRI Image Segmentation Using Bat Optimization Algorithm with Fuzzy C Means (BOA-FCM) Clustering," J Med Imaging Health Inform, vol. 11, no. 3, pp. 661-666, 2021, https://doi.org/10.1166/jmihi.2021.3365.
[19] D. Anandan, S. Hariharan, and R. Sasikumar, "Deep learning based two-fold segmentation model for liver tumor detection," Journal of Intelligent & Fuzzy Systems, vol. 45, no. 1, pp. 77-92, 2023, https://doi.org/10.3233/JIFS-230694.
[20] M. Poojary and Y. Srinivas, "Optimization Technique Based Approach for Image Segmentation," Curr Med Imaging Rev, vol. 19, no. 10, 2023, https://doi.org/10.2174/1573405619666221104161441.
[21] H. Liu, X. Diao, and H. Guo, "Quantitative analysis for image segmentation by granular computing clustering from the view of set," J Algorithm Comput Technol, vol. 13, p. 174830181983305, 2019, https://doi.org/10.1177/1748301819833050.
... Thus, the authors conclude that 800-1000 superpixels are optimal. The authors of [14] chose another approach -they tend to segment images with wounds on them into a relatively small number of segments (superpixels) (2, 3, and 4) utilizing ordinary k-means clustering instead of SLIC algorithm. However, the authors utilize a small dataset of only nine images of three classes (three wound types). ...
Article
Full-text available
This paper investigates the application of the Simple Linear Iterative Clustering (SLIC) algorithm for complex object image segmentation, on the example of images of human body injuries. The study solves the problem of the lack of quantitative evidence regarding SLIC's algorithm performance in high-precision area and boundary assessment of the lesion on a digital image of a human body with a wound injury on it. A comprehensive methodology is developed to evaluate SLIC's algorithm efficacy across various complex images and image resolutions. The research utilizes a combined dataset of 3696 wound images from the Foot Ulcer Segmentation Challenge (FUSeg) and WoundSeg datasets. Bayesian optimization is utilized to fine-tune SLIC algorithm hyperparameters, focusing on the number of segments and compactness. Results indicate that SLIC algorithm demonstrates consistent performance across different implementations, achieving Dice scores around 0.84 and Soft Boundary F1 scores around 0.55. The study reveals that the optimal number of segments for SLIC algorithm can be defined relative to the spatial dimensions of the input image, with maximal image dimension *2 being the most effective value. A thorough analysis of various segmentation metrics is conducted, including IoU, Dice Score, and Boundary F1 Score. The research introduces and employs the Soft Boundary F1 Score – modification of Boundary F1 Score, a novel metric designed to provide a more nuanced evaluation of boundary detection accuracy while offering a smoother optimization landscape. This metric proves particularly valuable in assessing the performance of SLIC algorithm in image with complex objects on them segmentation tasks. Importantly, this research presents an idealized SLIC-based segmentation approach, where superpixels are optimally combined using ground-truth masks to establish an upper bound of performance. 
This idealized SLIC algorithm segmentation is compared with a pre-trained on the FUSeg dataset UNet model, showcasing superior generalization capability across diverse wound types. On the WoundSeg dataset, the idealized SLIC algorithm approach achieved a Dice score of 0.84, significantly outperforming the UNet model (0.12 Dice score). As a result, this study provides valuable insights for improving complex objects segmentation methods and highlights the need for further research on developing effective methods for superpixel classification in real-world scenarios. The findings also highlight the potential of SLIC-based approaches in addressing the challenges of diverse data types and limited training data.
... In evaluating the performance of forecasting models, three metrics are commonly used: Mean Squared Error (MSE) [85]- [88], Root Mean Squared Error (RMSE) [89]- [92], and Mean Absolute Error (MAE) [93]- [95]. MSE is a measure that calculates the average squared difference between observed values and expected values [96], [97]. In the context of prediction or forecasting, MSE is a measure that calculates the average of the squared differences between predicted values and actual values. ...
Article
Full-text available
Temperature forecasting is a crucial aspect of meteorology and climate change studies, but challenges arise due to the complexity of time series data involving seasonal patterns and long-term trends. Traditional methods often fall short in handling this variability, necessitating more advanced solutions to enhance prediction accuracy. LSTM and GRU models have emerged as promising alternatives for modeling temperature data. This study is a literature review comparing the effectiveness of LSTM and GRU based on previous research in temperature forecasting. The goal of this review is to evaluate the performance of both models using various evaluation metrics such as MSE, RMSE, and MAE to identify gaps in previous research and suggest improvements for future studies. The method involves a comprehensive analysis of previous studies using LSTM and GRU for temperature forecasting. Assessment is based on RMSE values and other metrics to compare the accuracy and consistency of both models across different conditions and temperature datasets. The analysis results show that LSTM has an RMSE range of 0.37 to 2.28. While LSTM demonstrates good performance in handling long-term dependencies, GRU provides more stable and accurate performance with an RMSE range of 0.03 to 2.00. This review underscores the importance of selecting the appropriate model based on data characteristics to improve the reliability of temperature forecasting.
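The three metrics named above have direct definitions; a minimal sketch (the temperature values are made up for illustration, not taken from the reviewed studies):

```python
import math

def mse(actual, predicted):
    """Mean Squared Error: average of squared prediction errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: square root of MSE, in the data's units."""
    return math.sqrt(mse(actual, predicted))

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical daily temperatures (degrees C) vs. model forecasts.
actual    = [21.0, 22.5, 23.0, 24.0]
predicted = [20.5, 23.0, 22.0, 24.5]
print(mse(actual, predicted))   # 0.4375
print(rmse(actual, predicted))  # ~0.661
print(mae(actual, predicted))   # 0.625
```

MSE penalizes large errors quadratically, RMSE restores the original units (which is why the review reports RMSE ranges), and MAE treats all error magnitudes linearly, making it less sensitive to outliers.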
... The study highlights the pivotal role such technology plays in advancing healthcare data management while also safeguarding individual privacy rights. Additionally, the research explores the integration of AI technologies for advanced medical image processing [20]-[23]. ...
Article
Full-text available
This research project aims to advance healthcare data and diagnostic imaging delivery by employing a digital Picture Archiving and Communication System, also known as PACS, alongside a web application that is internet-accessible. The study explores two main areas, which are User Interface design and system functionality. The User Interface design merges the aesthetic of traditional paper documentation with the advantages of electronic displays to enhance both readability and overall user experience. The system functionality has been assessed and confirmed to offer secure and efficient transmission of medical diagnostic data as well as radiographic images. By complying with the Personal Data Protection Act of 2019 or PDPA, the system ensures the safe handling of confidential personal information. Test results show that the system meets performance standards and has been well-received by users. The project serves as a timely solution for today's medical industry, focusing on the speed and security of diagnostic data and image transmission.
Article
Full-text available
In the dynamic realm of Automatic Voltage Regulation (AVR), the pursuit of robust transient response, adaptability, and stability drives researchers to explore novel avenues. This study introduces a groundbreaking approach, the Hybrid Intelligent Fractional Order Proportional Derivative2+Integral (FOPDD+I) controller, leveraging the power of the Adaptive Neuro-Fuzzy Inference System (ANFIS). The novelty lies in the comparative analysis of three scenarios: the AVR system without a controller, with a traditional PID controller, and with the proposed FOPDD+I-based ANFIS. By fusing ANFIS with a hybrid controller, we forge a unique path toward optimized AVR performance. The hybrid controller, based on FOPID (Fractional Order Proportional Integral Derivative) principles, synergizes individual integral factors with ANFIS, augmenting them with a doubled derivative factor. The ANFIS design employs a hybrid optimization learning scheme to fine-tune the Fuzzy Inference System (FIS) parameters governing the AVR system. To train the fuzzy inference system, we utilize a Proportional-Integral-Derivative (PID) simulation of the entire AVR system, capturing essential data over approximately seven seconds. Our simulations, conducted in MATLAB/Simulink, reveal impressive performance metrics for the FOPDD+I-ANFIS approach: rise time of 1.1162 seconds, settling time of 0.5531 seconds, overshoot of 0%, and steady-state error of 0.00272. These results position our novel approach favorably against existing works, underscoring the transformative potential of intelligent creation in AVR control.
Article
Full-text available
Wound management remains a challenging issue around the world, although many wound dressing materials have been produced for the treatment of chronic and acute wounds. Wound healing is a highly dynamic and complex regulatory process that involves four principal integrated phases: hemostasis, inflammation, proliferation, and remodeling. Chronic non-healing wounds are wounds that heal significantly more slowly, fail to progress through all the phases of the normal wound healing process, and are usually stalled at the inflammatory phase. These wounds cause many challenges to patients, such as severe emotional and physical stress, and generate a considerable financial burden on patients and the general public healthcare system. It has been reported that about 1–2% of the population in developed nations suffers from chronic non-healing wounds during their lifetime. Traditional wound dressings are dry, and therefore cannot provide a moist environment for wound healing, and do not possess antibacterial properties. Wound dressings that are currently used consist of bandages, films, foams, patches, and hydrogels. Currently, hydrogels are gaining much attention as a result of their water-holding capacity, providing a moist wound-healing milieu. Chitosan is a biopolymer that has recently gained much attention in the pharmaceutical industry due to its unique chemical and antibacterial nature. However, because of its poor mechanical properties, chitosan is combined with other biopolymers, such as cellulose, which offers desirable biocompatibility while improving the mechanical and physical properties of the hydrogels. This review focuses on the study of biopolymers, such as cellulose and chitosan hydrogels, for wound treatment.
Article
Full-text available
White blood cells function as part of the human immune system and help defend the body against viruses. In clinical practice, identification and counting of white blood cells in blood smears is often used to diagnose diseases such as infection, inflammation, malignancy, and leukemia. In the past, examination of blood smears was a complex, manual task that was tedious and time-consuming. This research proposes the k-means clustering algorithm to separate white blood cells from the other parts of the image. However, k-means clustering has a weakness in determining the initial prototype values, so the Otsu thresholding method is used to determine a threshold from global values, followed by morphological operations to refine the segmented image. The segmentation results are measured with the Positive Predictive Value (PPV) and Negative Predictive Value (NPV) parameters. The results prove that the use of Otsu thresholding and morphological operations significantly increases the PPV compared to the PPV obtained without Otsu thresholding, whereas the NPV increases, but not significantly.
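The global threshold this abstract refers to comes from Otsu's method, which picks the gray level maximizing the between-class variance of the two resulting pixel groups. A minimal sketch follows; the toy pixel values are invented, and this is a generic reconstruction of Otsu's method, not the authors' implementation:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level t that maximizes between-class variance
    when pixels are split into {<= t} (background) and {> t} (foreground)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background (~30) and bright cells (~200).
pixels = [28, 30, 32, 29, 31, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
print(t)  # falls between the two intensity modes
```

The resulting binary mask (pixels above the threshold) can then seed the k-means prototypes, which is one way to address the initialization weakness mentioned above.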
Article
Liver Tumour (LT) develops when healthy cells undergo abnormal DNA changes that cause them to grow and divide uncontrollably. In manual examination, the evaluation may vary with the individual perception of the observers, which depends on their expertise and subjectivity. Therefore, computer-aided intelligent tools are established to eliminate subjectivity and increase performance. To overcome these challenges, a novel Two-fold Segmentation of Liver Tumour (TFSLT) model is proposed for accurately detecting liver tumours in computed tomography (CT) images. Initially, the CT images are pre-processed using the Normalized-Modified Anisotropic Diffusion Filtering (NMADF) algorithm to reduce noise artifacts. These pre-processed CT images are taken as input to the Canny Edge Detector (CED) for detecting the edges of the liver. Based on these edges, the first-fold segmentation is performed using the Jaccard metric-based Watershed (JMWS) algorithm to accurately segment the liver region. An Improved Deep Neural Network (IDNN) is utilized to classify the LT as normal, Hepatocellular carcinoma (HCC), Cholangiocarcinoma (CC), or Metastatic tumour (MT). The Modified Elephant Herd Optimization (MEHO) algorithm is used for selecting the features of the images. Finally, the Improved Expectation-Maximization (IEM) algorithm is applied as the second-fold segmentation to segment the different abnormal classes. The performance of the proposed TFSLT approach is assessed using metrics such as recall, precision, specificity, accuracy, and F1 score. The experimental findings reveal that the proposed TFSLT approach achieves an accuracy of 99.57% for detecting LT in its early stages.
Article
Objective: The study's goal was to diagnose the condition at an earlier stage by employing optimization-based image segmentation to find deformities in MRI and Aura images. Methods: Our methodology was based on two case studies. The diseased MRI images were obtained from the UCI data set and the Aura images from Bio-Well. Using the Relevance Feedback Mechanism (RFM), the most pertinent diseased images are determined. The optimization-based Cuckoo Search (CS) algorithm is used to find the best features. The resulting model, utilising the Truncated Gaussian Mixture Model (TGMM), is used to compare the extracted characteristics. The most relevant images are chosen based on likelihood estimation. Results: The suggested methodology is tested using 150 retrieved Aura images and 50 trained photos, with the input images processed using morphological techniques such as dilation, erosion, opening, and closing to improve image quality. Together with segmentation quality measurements including Global Consistency Error (GCE), Probability Random Index (PRI), and Volume of Symmetry (VOS), the results are assessed using image quality metrics such as Average Difference (AD), Maximum Difference (MD), and Image Fidelity (IF). Conclusion: The TGMM algorithm is used to conduct the experiment. The outcomes demonstrate the effectiveness of the suggested approaches in locating various injured tissues within medical images obtained using MRI technology, as well as in locating high-intensity energy zones with which a potential deformity is associated in Aura images. The outcomes reveal a respectable recognition accuracy of about 93%.
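Of the image quality metrics listed above, AD and MD are the simplest; exact definitions vary across the literature, but a common formulation (sketched here on invented pixel values, not the study's data) is the mean signed difference and the largest absolute difference between original and processed images:

```python
def average_difference(orig, processed):
    """AD: mean signed pixel-wise difference (original minus processed)."""
    return sum(o - p for o, p in zip(orig, processed)) / len(orig)

def maximum_difference(orig, processed):
    """MD: largest absolute pixel-wise difference."""
    return max(abs(o - p) for o, p in zip(orig, processed))

# Toy 4-pixel image before and after a morphological operation.
orig      = [10, 20, 30, 40]
processed = [12, 18, 30, 44]
print(average_difference(orig, processed))  # -1.0
print(maximum_difference(orig, processed))  # 4
```

AD near zero indicates no systematic brightening or darkening, while MD bounds the worst single-pixel distortion introduced by the processing step.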
Chapter
Morphological changes in cell structure in Pap smear images are the basis for classification in pathology. Identifying this classification is a challenge because of the complexity of Pap smear images caused by changes in cell morphology. This procedure is very important because it provides basic information for detecting cancerous or precancerous lesions. To help advance research in this area, we present the RepoMedUNM Pap smear image database, consisting of non-ThinPrep (nTP) Pap test images and ThinPrep (TP) Pap test images. It is common for research groups to have their own image datasets; this need is driven by the fact that established datasets are not publicly accessible. The purpose of this study is to present the RepoMedUNM dataset analysis performed for cell texture features on new images consisting of four classes, normal, L-Sil, H-Sil, and Koilocyt, with K-means segmentation. Model classification is evaluated by reusing pre-trained networks: the pre-trained CNN VGG16, VGG19, and ResNet50 models are implemented for the classification of three groups, namely TP, nTP, and all datasets. The resulting cell features and classifications can be used as a reference for the evaluation of future classification techniques.
Article
Extraction of functional and anatomical information from Magnetic Resonance Images (MRI) is important in medical image applications. The information extraction is highly influenced by artifacts in the MRI images, and the feature extraction involves the segmentation of MRI images. We present an MRI image segmentation method using the Bat Optimization Algorithm (BOA) with Fuzzy C-Means (FCM) clustering; the Bat Optimization Algorithm utilizes the echolocation behaviour of bats. The proposed segmentation technique is evaluated against existing segmentation techniques, and the results of the experimentation show that it outperforms existing methods, producing 98.5% better results.
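The FCM half of this pipeline alternates two updates: soft memberships from inverse distances to the current centers, and centers as membership-weighted means. The sketch below covers only standard FCM on 1-D intensities (the BOA tuning described in the abstract is omitted, and the data and initial centers are invented):

```python
def fuzzy_c_means(data, centers, m=2.0, iters=100):
    """Standard Fuzzy C-Means on 1-D data with fuzzifier m > 1.
    Returns the final centers and the membership matrix u[point][cluster]."""
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]  # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # Center update: membership-weighted mean with weights u^m.
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, u

# Two well-separated 1-D intensity clusters; initial centers are rough guesses.
data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centers, u = fuzzy_c_means(data, centers=[0.0, 10.0])
print([round(v, 2) for v in centers])  # centers settle near 1.0 and 8.0
```

Unlike hard k-means, every pixel retains a graded membership in every cluster, which is what makes FCM attractive for MRI tissue boundaries; an optimizer such as BOA can then be layered on top to choose better initial centers.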
Article
For Convolutional Neural Networks (CNNs) applied to the intelligent diagnosis of gastric cancer, existing methods mostly focus on individual characteristics or network frameworks without a policy to depict the integral information. Notably, the conditional random field (CRF), an efficient and stable algorithm for analyzing images with complicated contents, can characterize spatial relations in images. In this paper, a novel hierarchical conditional random field (HCRF) based gastric histopathology image segmentation (GHIS) method is proposed, which can automatically localize abnormal (cancerous) regions in gastric histopathology images obtained by an optical microscope to assist histopathologists in their medical work. The HCRF model is built up with higher-order potentials, including pixel-level and patch-level potentials, and graph-based post-processing is applied to further improve its segmentation performance. In particular, one CNN is trained to build the pixel-level potentials and another three CNNs are fine-tuned to build the patch-level potentials, providing sufficient spatial segmentation information. In the experiment, a hematoxylin and eosin (H&E) stained gastric histopathological dataset with 560 abnormal images is divided into training, validation, and test sets with a ratio of 1:1:2. Finally, segmentation accuracy, recall, and specificity of 78.91%, 65.59%, and 81.33% are achieved on the test set. Our HCRF model demonstrates high segmentation performance and shows its effectiveness and future potential in the GHIS field.