Control Systems and Optimization Letters, Vol. 1, No. 3, 2023
ISSN: 2985-6116, DOI: 10.59247/csol.v1i3.33
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Analysis of the Influence of Number of Segments
on Similarity Level in Wound Image Segmentation
Using K-Means Clustering Algorithm
Furizal 1,* , Syifa’ah Setya Mawarni 2, Son Ali Akbar 3, Anton Yudhana 4, Murinto Kusno 5
1,2 Master Program of Informatics, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
3,4 Department of Electrical Engineering, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
5 Department of Informatics, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
Email: 1 furizal.id@gmail.com, 2 syifa.mawarni19@gmail.com, 3 sonali@ee.uad.ac.id, 4 eyudhana@ee.uad.ac.id,
5 murintokusno@tif.uad.ac.id
*Corresponding Author
Abstract— This study underscores the importance of wound image segmentation in medicine for speeding up first aid to victims and increasing the efficiency of medical personnel in providing appropriate treatment. Although the skin protects the body from external threats, it is easily damaged, causing wounds that require rapid detection and treatment. This study used the K-Means clustering algorithm to segment an external wound image dataset consisting of three types of wounds: abrasion, puncture, and laceration. The results show that K-Means clustering is an effective method for segmenting wound images: the greater the number of segments used, the better the quality of the resulting segmentation. However, the specific characteristics of each type of wound and the number of segments used must be taken into account in order to choose the most suitable segmentation method. Evaluation with several metrics (VOI, GCE, MSE, and PSNR) provides an objective assessment of segmentation quality. The results also show that abrasion wounds were easier to segment than puncture wounds and lacerations. In addition, image file size affects program execution speed, although the relationship is not absolute and depends on the characteristics of the image.
Keywords—Image segmentation, K-Means, Wound imagery,
Image Processing
I. INTRODUCTION
The body protects itself from various external threats, but skin tissue is easily damaged, resulting in injury. The skin's vulnerability gives rise to various types of injuries, including animal or human bites, accidental injuries, wounds from sharp or blunt objects, temperature, and radiation [1]. A wound is damage to the protective function of the skin accompanied by loss of epithelial tissue continuity, with or without damage to other tissues such as muscles, bones, and nerves, caused by factors such as pressure, incisions, and surgical injuries [2].
External wounds have three levels of severity: level 1 (superficial), level 2 (superficial partial-thickness), and level 3 (full thickness). Wound segmentation can give a clear picture of how serious a wound is by examining its color depth: the deeper the color, the more severe the wound.
Object detection is an identification process. Generally, identification is carried out using a camera to capture an object, after which the resulting image is analyzed. According to the Central Statistics Agency, there were around 272 injuries due to natural disasters and fires in Jakarta during 2019-2021. Object detection on wounds can help speed up first aid to victims and increase the efficiency of medical personnel in providing treatment.
Object detection using segmentation has been widely applied in medicine. For example, Riska Nanda et al. segmented medical images of Fibroadenoma Mammae (FAM) objects, or benign breast tumors, using the Sobel algorithm [3]. The FAM ultrasound images were in DICOM format and were processed with Matlab R2009a. Segmentation with the Sobel algorithm was good at determining the edges of FAM objects, but edge detection was difficult for lower-resolution images. Beyond breast imaging, deep-learning-based lung image segmentation by Muhammad Hussein produced the best accuracy with the U-Net EfficientNetb0 architecture, with a Dice value of 0.967 and a Jaccard value of 0.937, while the MobileNet U-Net architecture was the fastest [4]. The dataset consisted of X-ray images: 80 normal lung images and 58 with abnormalities. Other research on lung segmentation was conducted by Sintha Syaputri and Zulkarnain using the Active Contour algorithm with Matlab R2015a [5]; it found that Active Contour can segment well, depending on the parameter values.
A variety of algorithms can be used for medical image segmentation, one of which is K-Means clustering. K-Means is an unsupervised learning algorithm for solving clustering problems that runs faster than Fuzzy C-Means [6]. Medical image segmentation with the K-Means algorithm has been carried out by R. S. D. Wijaya et al. [7] and Dwiza Riana et al. [8] on cervical cancer objects, and by W. M. Baihaqi et al. on leukocyte nucleus segmentation [9]. Medical image segmentation is usually centered on vital internal organs, so segmentation of wound objects is rare, even though the skin is the body's outermost layer and shield.
External wound segmentation helps wound treatment be
faster and more precise, since wounds can be identified more clearly and appropriate treatment given. This study contributes an analysis of external wound image segmentation using K-Means clustering and of the effect of the number of segments on segmentation quality.
II. METHOD
This study began with the collection of a wound image dataset. The dataset was processed in a preprocessing stage including resizing, cropping, noise correction, and conversion of color images to grayscale. Once preprocessing was complete, the grayscale images were segmented using the K-Means clustering method. The research flow can be seen in Fig. 1.
Fig. 1. Research flow
A. Image Dataset
The image data used are external wound images. The dataset was taken from Kaggle, one of the platforms that provides external wound image datasets, and consists of nine images in three classes: abrasion wounds, lacerations, and puncture wounds, with three images per class. External wound image datasets are quite difficult to find, as the hospital did not allow external wound data to be collected.
B. Preprocessing
1) Image Resize
The image data obtained are not all the same size. For the segmentation process to work optimally, an image resizing step is needed to equalize the size of every image. This study resized each image to 640x640 pixels. This size is a middle ground: small enough for the segmentation process to remain efficient, yet large enough that the wound is still clearly visible.
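As an illustration, resizing to the 640x640 target can be sketched with a simple nearest-neighbour approach in NumPy; the paper does not name an implementation, so the function below is a hypothetical sketch:

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Nearest-neighbour resize of an H x W (x C) image array to size x size,
    matching the 640x640 target used in this study."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]
```

In practice a library resampler with anti-aliasing (e.g. bilinear or Lanczos) would likely be preferred when downscaling photographs; nearest-neighbour is shown only because it is self-contained.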
2) Image Color Conversion
The original images are based on red, green, and blue channels and are often called RGB images. An RGB image has a total pixel intensity of 24 bits: each color channel has an 8-bit intensity value, giving 2^8 = 256 color degrees (0 to 255) per channel. A grayscale image represents intensity only, from black (minimum value 0) to white (maximum value 255) [10]. Converting RGB images to grayscale simplifies the image so that the segmentation process becomes more efficient.
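A common way to perform this conversion is a weighted sum of the three channels; the sketch below uses the standard ITU-R BT.601 luma weights, which is an assumption, since the paper does not state which weighting was used:

```python
import numpy as np

def rgb_to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (values 0-255) to grayscale
    using the ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb[..., :3] @ weights).astype(np.uint8)
```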
3) Noise Repair
Real imagery is usually contaminated with noise introduced during image capture, transmission, or download. Noise is a disturbance in the stored digital data received by the image receiver that can degrade image quality [11]. The choice of noise model and filter here is based on measured image quality. Four image quality measurements are used: Mean Square Error (MSE) [12]-[13], Peak Signal-to-Noise Ratio (PSNR) [14]-[16], Variation of Information (VOI) [17]-[18], and Global Consistency Error (GCE) [19]-[21]. The smaller the MSE value, the better the recovery from the noise; for PSNR, a higher value is better because the result is considered closer to the original image.
a) Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)
MSE and PSNR are expressed mathematically in Equations 1 and 2.

MSE = (1 / (m·n)) Σ_i Σ_j [f(i,j) − f'(i,j)]^2  (1)

PSNR = 10 · log10(MAX^2 / MSE)  (2)

where m and n are the column and row sizes of the image, f(i,j) is the original image, f'(i,j) is the recovered image, and MAX is the maximum possible pixel value; for 8-bit images, MAX = 2^8 − 1 = 255.
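As a sketch of how Equations 1 and 2 might be evaluated (an illustrative NumPy implementation, not the authors' code):

```python
import numpy as np

def mse(original: np.ndarray, recovered: np.ndarray) -> float:
    """Mean squared error between two equally sized images (Equation 1)."""
    diff = original.astype(np.float64) - recovered.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, recovered: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (Equation 2); higher means more similar."""
    err = mse(original, recovered)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```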
b) Variation of Information (VOI) and Global Consistency Error (GCE)
VOI measures the amount of information lost and gained between two clusterings, while GCE, following D. Martin, is an error measure of the consistency between human partitions and algorithm partitions. VOI and GCE are expressed mathematically in Equations 3 and 4.

VOI(FSx, FSy) = E(FSx) + E(FSy) − 2·M(FSx, FSy)  (3)
where FSx and FSy denote two K-Means segmentations, E(FS) is the entropy of a segmentation, E(FSx, FSy) is the joint entropy of the two images, and M(FSx, FSy) is the mutual information between the two images.

GCE(Sx, Sy) = (1/n) · min{ Σ_i LRE(Sx, Sy, p_i), Σ_i LRE(Sy, Sx, p_i) }  (4)

where Sx and Sy are two segmentations, p_i represents the position of the i-th pixel, n is the number of pixels, and LRE is the local refinement error at each pixel.
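Equation 3 can be illustrated by building a joint label histogram of the two segmentations and combining entropies; the sketch below is an assumed implementation (base-2 logarithms), not taken from the paper:

```python
import numpy as np

def variation_of_information(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """VOI between two integer label images (Equation 3):
    E(FSx) + E(FSy) - 2*M(FSx, FSy). Lower values mean more shared information."""
    a, b = seg_a.ravel(), seg_b.ravel()
    # Joint distribution of label pairs via a contingency table
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)
    joint /= a.size
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy in bits
    mutual = h(px) + h(py) - h(joint)                    # M(FSx, FSy)
    return float(h(px) + h(py) - 2.0 * mutual)
```

Identical segmentations give a VOI of 0; the more the two label maps disagree, the larger the value.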
Based on these measurements, the result closest to the original image was obtained using uniform noise with a 3x3 median low-pass filter. This produces clear images with minimal noise contamination, so segmentation becomes more effective.
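A 3x3 median filter of the kind described can be sketched in NumPy as follows; edge-replication padding is an assumption, since the paper does not state how borders were handled:

```python
import numpy as np

def median_filter_3x3(img: np.ndarray) -> np.ndarray:
    """Apply a 3x3 median low-pass filter to a 2-D grayscale image."""
    padded = np.pad(img, 1, mode="edge")       # replicate edges at the border
    h, w = img.shape
    # Stack the 9 shifted views covering each pixel's 3x3 neighbourhood
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)
```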
C. K-Means Segmentation
The K-Means clustering algorithm is a centroid model, i.e., a model that uses centroids to create clusters. A centroid is the midpoint of a cluster and is used to calculate the distance from a data object to that cluster. A data object is assigned to the cluster whose centroid is nearest.
The K-Means algorithm uses an iterative process to cluster a database. It takes the desired number of clusters as input and produces the final centroid points as output. The method randomly chooses k patterns as the starting centroids. K-Means updates centroids using the parameters in Equation 5.

V_ij = (1/N_i) Σ_k x_kj  (5)

where V_ij is the centroid (average) of cluster i for variable j, N_i is the number of data members of cluster i, i and k are the cluster and data indices, j is the variable index, and x_kj is the value of the k-th data point in the cluster for variable j.
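The iterative process described above (random initial centroids, nearest-centroid assignment, centroid update by Equation 5) can be sketched for grayscale images as follows; function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def kmeans_segment(gray: np.ndarray, k: int, n_iter: int = 20, seed: int = 0) -> np.ndarray:
    """Segment a grayscale image into k intensity clusters with K-Means."""
    rng = np.random.default_rng(seed)
    pixels = gray.reshape(-1, 1).astype(np.float64)
    # Randomly pick k pixels as the initial centroids
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        dist = np.abs(pixels - centroids.T)   # distance of each pixel to each centroid
        labels = dist.argmin(axis=1)          # assign to the nearest centroid
        for i in range(k):                    # update step: Equation 5
            members = pixels[labels == i]
            if len(members):
                centroids[i] = members.mean(axis=0)
    # Replace each pixel with its centroid value to visualise the segments
    return centroids[labels].reshape(gray.shape).astype(np.uint8)
```

Running with k = 2, 3, and 4 corresponds to the 2-, 3-, and 4-segment tests reported in the results.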
III. RESULTS AND DISCUSSION
In this section, wound images were segmented using the K-Means algorithm by testing nine different wound images, divided into three types of wounds: abrasion, puncture, and laceration. All images used have the .jpg extension. The test aims to assess how well the K-Means algorithm can segment the three types of wound images used as test data.
This segmentation was tested in three test categories,
namely image segmentation with 2 segments, 3 segments,
and 4 segments. The selection to test three different numbers
of segments, namely 2, 3, and 4, in testing K-Means
segmentation algorithm is aimed at exploring how well the
algorithm can adapt to various levels of complexity in
wound images. By including different numbers of segments,
we can evaluate the algorithm's performance in scenarios
with few objects (2 segments), moderate complexity (3
segments), or higher complexity (4 segments). This helps us
understand whether the algorithm exhibits strong
generalization capabilities across different situations and
ensures that the segmentation results can be optimized
according to different task contexts. The original image and
segmented image can be seen in Table 1.
Table 1 displays the original image and segmented
images that have been processed using the K-Means
algorithm. The original image is the image before the
segmentation process, while the segmented image is the
image that has been processed to divide the pixels by color.
Every color image consists of a combination of colors
formed from the pixels in the image, creating a complete
image. There are several color models commonly used in
image processing, such as the RGB color model that uses
red, green, and blue color channels, the CMYK color model
used in printing, the HSB color model that describes the
shades, saturation, and brightness of colors, and the CIE-
XYZ color model used in lighting science and industry.
The RGB color distribution of each original image is shown as a 3D graph in Fig. 2.
Viewed with the naked eye, fresh wound segmentation results are most clearly visible when the image is segmented into 4 segments, while 2-segment results tend to be less than optimal because some wounds are not detected properly: dark skin is consistently identified as a wound, and blood splashes on uninjured skin are also identified as wounds in the 2-segment results. To assess the performance of the K-Means algorithm more rigorously, the segmentation is evaluated with four indicators: VOI, GCE, MSE, and PSNR. The values of these four indicators are given in full in Table 2.
Table 2 provides complete information on the image segmentation carried out, listing VOI, GCE, MSE, and PSNR values. It also reports the program execution time (ET) of each image process. The file size of each image is shown as well, so that its influence on processing can be observed. The effect of file size on program execution time is shown in Fig. 3.
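The ET values in Table 2 are wall-clock execution times; one way such timings might be collected is sketched below (a generic helper, not the authors' measurement code):

```python
import time

def measure_execution_time(func, *args, **kwargs):
    """Run func once and return (result, elapsed_seconds), as could be used
    to fill the ET column of Table 2."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed
```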
Fig. 3 indicates that image file size strongly affects the ET of the program. However, this relationship is not absolute, because some larger files execute faster than smaller ones. For example, the 3rd abrasion wound image has a smaller file size than the 1st and 2nd laceration wound images, yet requires a greater ET. The highest ET, 2.0206 s, was obtained for the 3rd abrasion wound image at 4 segments, while the lowest ET, 0.0653 s, was obtained for the 2nd puncture wound image at 2 segments.
In addition, Fig. 3 shows how the number of segments affects ET: the more segments used, the slower the program execution. In this test, images divided into 4 segments generally require more ET than 2- or 3-segment runs. This tendency is also not absolute, but the tests show it holds almost everywhere: only two cases contradict it, namely the 2nd puncture wound image, where the 3-segment ET (0.1936 s) exceeds the 4-segment ET (0.1922 s), and the 3rd puncture wound image, where the 3-segment ET (0.5314 s) exceeds the 4-segment ET (0.5214 s). To quantify the difference in information, the percentage of inconsistent pixels, the average squared difference in pixel intensity, and the ratio of peak signal power in the original image to distortion power in the segmented image, the VOI, GCE, MSE, and PSNR values are depicted in Fig. 4.
Table 1. Wound image segmentation results: for each of the three images per wound type (abrasion, puncture, and laceration), the original image and its 2-, 3-, and 4-segment segmentation results (images omitted).
Fig. 2. 3D graphs of RGB color distribution: (a)-(c) first, second, and third abrasion wound images; (d)-(f) first, second, and third puncture wound images; (g)-(i) first, second, and third laceration wound images.
Table 2. Wound image segmentation results

2-Segments:

| No | Type of Wound | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|----|---------------|-------|----------------|--------|--------|--------|--------|--------|
| 1 | Abrasion | 1 | 138 | 6.3218 | 0.9712 | 207.21 | 24.967 | 0.6873 |
| | | 2 | 121 | 7.0982 | 0.9795 | 289.05 | 23.521 | 0.4026 |
| | | 3 | 140 | 6.8201 | 0.9746 | 263.45 | 23.924 | 0.7553 |
| 2 | Puncture | 1 | 50 | 7.1391 | 0.977 | 423.51 | 21.862 | 0.1052 |
| | | 2 | 42 | 7.381 | 0.9769 | 534.6 | 20.851 | 0.0653 |
| | | 3 | 112 | 7.3688 | 0.9792 | 398.88 | 22.122 | 0.2992 |
| 3 | Laceration | 1 | 239 | 7.9809 | 0.9842 | 664.89 | 19.903 | 0.8723 |
| | | 2 | 258 | 7.7414 | 0.9852 | 722.26 | 19.544 | 0.7508 |
| | | 3 | 89 | 7.9178 | 0.99 | 981.04 | 18.214 | 0.1492 |

3-Segments:

| No | Type of Wound | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|----|---------------|-------|----------------|--------|--------|--------|--------|--------|
| 1 | Abrasion | 1 | 138 | 6.0785 | 0.9613 | 114.41 | 27.546 | 0.9599 |
| | | 2 | 121 | 6.9543 | 0.9726 | 178.15 | 25.623 | 0.7555 |
| | | 3 | 140 | 6.621 | 0.9687 | 155.89 | 26.203 | 1.0154 |
| 2 | Puncture | 1 | 50 | 6.988 | 0.9738 | 270.34 | 23.812 | 0.1595 |
| | | 2 | 42 | 7.1679 | 0.9692 | 248.18 | 24.183 | 0.1936 |
| | | 3 | 112 | 7.0344 | 0.9774 | 228.79 | 24.537 | 0.5314 |
| 3 | Laceration | 1 | 239 | 7.4285 | 0.9811 | 360.78 | 22.558 | 0.8219 |
| | | 2 | 258 | 7.3257 | 0.9796 | 366.46 | 22.491 | 0.6773 |
| | | 3 | 89 | 7.476 | 0.9784 | 453.45 | 21.566 | 0.1678 |

4-Segments:

| No | Type of Wound | Image | File size (KB) | VOI | GCE | MSE | PSNR | ET (s) |
|----|---------------|-------|----------------|--------|--------|--------|--------|--------|
| 1 | Abrasion | 1 | 138 | 5.8753 | 0.9571 | 83.353 | 28.922 | 1.6309 |
| | | 2 | 121 | 6.7307 | 0.9612 | 117.99 | 27.412 | 1.2398 |
| | | 3 | 140 | 6.5326 | 0.9631 | 110.64 | 27.692 | 2.0206 |
| 2 | Puncture | 1 | 50 | 6.8136 | 0.9696 | 159.16 | 26.112 | 0.1656 |
| | | 2 | 42 | 6.9836 | 0.9694 | 168.91 | 25.854 | 0.1922 |
| | | 3 | 112 | 6.6558 | 0.9693 | 151.04 | 26.34 | 0.5214 |
| 3 | Laceration | 1 | 239 | 7.4377 | 0.9703 | 263.23 | 23.927 | 1.2572 |
| | | 2 | 258 | 7.3184 | 0.9754 | 254.32 | 24.077 | 0.9817 |
| | | 3 | 89 | 7.3754 | 0.9755 | 291.63 | 23.483 | 0.2588 |
Fig. 4a shows the VOI value of each image segmentation. The results indicate that the greater the number of segments, the lower the VOI value tends to be; that is, more segments yield a higher degree of similarity between the original image and the segmented image, and vice versa. Fig. 4b shows the percentage of pixels that are inconsistent between the segmented image and the original image: the greater the number of segments, the lower the GCE value tends to be, indicating greater consistency between the two images.
In Fig. 4c, MSE describes the average squared difference between pixel intensities in the segmented image and the original image; the lower the value, the more similar the two images. Like GCE, MSE shows the same trend: the greater the number of segments, the lower the MSE value tends to be, again indicating that more segments produce more similar images.
Meanwhile, the ratio of peak signal power in the original image to distortion power in the segmented image is captured by the PSNR value, shown in Fig. 4d; the higher the value, the more similar the two images. The highest average PSNR is obtained for the abrasion wound images, and the lowest for the laceration wound images. This means the abrasion wound images can be segmented better by K-Means than the puncture and laceration wound images, which is also supported by the VOI, GCE, and MSE values of each image segmentation.
Fig. 3. Impact of file size on ET
Fig. 4. (a) VOI value; (b) GCE value; (c) MSE value; and (d) the PSNR value of each image segmentation
IV. CONCLUSION
The use of the K-Means algorithm to segment wound images, tested on three types of wounds (abrasion, puncture, and laceration) with varying numbers of segments, showed mixed results. Segmentation with a larger number of segments tends to produce better similarity and consistency between the segmented image and the original image. Evaluation with the VOI, GCE, MSE, and PSNR metrics provides an objective assessment of segmentation quality, with abrasion wound imagery generally giving the best results. However, it is
necessary to pay attention to the characteristics of each type
of wound and the number of segments in choosing the
appropriate method of segmentation. These conclusions
provide important insights in the development of image
segmentation techniques for medical and analytical
applications. The results of this study need to be
investigated further to provide maximum results in wound
image segmentation. It is recommended to explore optimal
preprocessing techniques, develop hybrid methods that
combine multiple segmentation algorithms, and involve
external evaluation with better ground truth to obtain
accurate validation and a better understanding of
segmentation quality. In addition, the development of
clinical applications using wound image segmentation is
also an important step to test its usefulness and effectiveness
in medical practice.
REFERENCES
[1] G. Zaman Khan et al., "An Efficient Deep Learning Model based
Diagnosis System for Lung Cancer Disease," 2023 4th International
Conference on Computing, Mathematics and Engineering
Technologies (iCoMET), pp. 1-6, 2023,
https://doi.org/10.1109/iCoMET57998.2023.10099357.
[2] C. N. Elangwe, S. N. Morozkina, R. O. Olekhnovich, A. Krasichkov,
V. O. Polyakova, and M. V. Uspenskaya, “A Review on Chitosan and
Cellulose Hydrogels for Wound Dressings,” Polymers (Basel), vol.
14, no. 23, p. 5163, 2022, https://doi.org/10.3390/polym14235163.
[3] P. Cho, H. J. Yoon, “Evaluation of U-net-based image segmentation
model to digital mammography,” Medical Imaging 2021: Image
Processing, vol. 11596, pp. 593-599, 2021,
https://doi.org/10.1117/12.2581401.
[4] M. M. Hasan, M. Md. Jahangir Kabir, M. R. Haque and M. Ahmed,
"A Combined Approach Using Image Processing and Deep Learning
to Detect Pneumonia from Chest X-Ray Image," 2019 3rd
International Conference on Electrical, Computer &
Telecommunication Engineering (ICECTE), pp. 89-92, 2019,
https://doi.org/10.1109/ICECTE48615.2019.9303543.
[5] E. Jangam, A. C. S. Rao, “Segmentation of lungs from chest X rays
using firefly optimized fuzzy C-means and level set algorithm,”
International Conference on Recent Trends in Image Processing and
Pattern Recognition, pp. 303-311, 2019, https://doi.org/10.1007/978-
981-13-9184-2_27.
[6] C. Zhang et al., “White Blood Cell Segmentation by Color-Space-
Based K-Means Clustering,” Sensors, vol. 14, no. 9, pp. 16128–
16147, 2014, https://doi.org/10.3390/s140916128.
[7] C. Sun et al., “Gastric histopathology image segmentation using a
hierarchical conditional random field,” Biocybernetics and
Biomedical Engineering, vol. 40, no. 4, pp. 1535-1555, 2020,
https://doi.org/10.1016/j.bbe.2020.09.008.
[8] D. Riana, S. Hadianti, S. Rahayu, Frieyadie, M. Hasan, I. N. Karimah,
R. Pratama, “Repomedunm: A new dataset for feature extraction and
training of deep learning network for classification of pap smear
images,” International Conference on Neural Information Processing,
pp. 317-325, 2021, https://doi.org/10.1007/978-3-030-92307-5_37.
[9] W. M. Baihaqi, C. R. A. Widiawati, T. Insani, “K-Means Clustering
Based on Otsu Thresholding For Nucleus of White Blood Cells
Segmentation,” Jurnal RESTI (Rekayasa Sistem dan Teknologi
Informasi), vol. 4, no. 5, pp. 907-914, 2020,
https://doi.org/10.29207/resti.v4i5.2309.
[10] K. Liu, W. Wang, and J. Wang, “Pedestrian Detection with Lidar
Point Clouds Based on Single Template Matching,” Electronics, vol.
8, no. 7, p. 780, 2019, http://dx.doi.org/10.3390/electronics8070780.
[11] B. Karthik, T. K. Kumar, S. P. Vijayaragavan, M. Sriram, “Removal
of high density salt and pepper noise in color image through modified
cascaded filter,” Journal of Ambient Intelligence and Humanized
Computing, vol. 12, pp. 3901-3908, 2021,
https://doi.org/10.1007/s12652-020-01737-1.
[12] U. Sara, M. Akter, and M. S. Uddin, “Image Quality Assessment
through FSIM, SSIM, MSE and PSNR—A Comparative Study,”
Journal of Computer and Communications, vol. 7, no. 3, pp. 8-18,
2019, https://doi.org/10.4236/jcc.2019.73002.
[13] T. Badriyah, N. Sakinah, I. Syarif, and D. R. Syarif, “Segmentation
Stroke Objects based on CT Scan Image using Thresholding
Method,” 2019 First International Conference on Smart Technology
& Urban Development (STUD), pp. 1–6. 2019,
https://doi.org/10.1109/STUD49732.2019.9018825.
[14] A. R. W. Putri, “Classification of Breast Cancer Using the Digital
Mammogram Method,” JATISI (Jurnal Teknik Informatika dan
Sistem Informasi), vol. 9, no. 4, pp. 2752–2761, 2022,
https://doi.org/10.35957/jatisi.v9i2.2100.
[15] C. Huang, X. Li, and Y. Wen, “AN OTSU image segmentation based
on fruitfly optimization algorithm,” Alexandria Engineering Journal,
vol. 60, no. 1, pp. 183–188, 2021,
https://doi.org/10.1016/j.aej.2020.06.054.
[16] H. Jia, X. Peng, W. Song, C. Lang, Z. Xing, and K. Sun, “Multiverse
Optimization Algorithm Based on Lévy Flight Improvement for
Multithreshold Color Image Segmentation,” IEEE Access, vol. 7, pp.
32805-32844, 2019, https://doi.org/10.1109/ACCESS.2019.2903345.
[17] L. Guo, P. Shi, L. Chen, C. Chen, and W. Ding, “Pixel and region
level information fusion in membership regularized fuzzy clustering
for image segmentation,” Information Fusion, vol. 92, pp. 479–497,
2023, https://doi.org/10.1016/j.inffus.2022.12.008.
[18] B. Jai Shankar, K. Murugan, A. Obulesu, S. Finney Daniel Shadrach,
and R. Anitha, “MRI Image Segmentation Using Bat Optimization
Algorithm with Fuzzy C Means (BOA-FCM) Clustering,” J Med
Imaging Health Inform, vol. 11, no. 3, pp. 661–666, 2021,
https://doi.org/10.1166/jmihi.2021.3365.
[19] D. Anandan, S. Hariharan, and R. Sasikumar, “Deep learning based
two-fold segmentation model for liver tumor detection,” Journal of
Intelligent & Fuzzy Systems, vol. 45, no. 1, pp. 77–92, 2023,
https://doi.org/10.3233/JIFS-230694.
[20] M. Poojary and Y. Srinivas, “Optimization Technique Based
Approach for Image Segmentation,” Curr Med Imaging Rev, vol. 19,
no. 10, 2023, https://doi.org/10.2174/1573405619666221104161441.
[21] H. Liu, X. Diao, and H. Guo, “Quantitative analysis for image
segmentation by granular computing clustering from the view of set,”
J Algorithm Comput Technol, vol. 13, p. 174830181983305, 2019,
https://doi.org/10.1177/1748301819833050.