Automatic Skin Lesion Segmentation and Melanoma
Detection: Transfer Learning approach with U-Net and
DCNN-SVM
Zabir Al Nazi¹ and Tasnim Azad Abir²

Department of Electronics and Communication Engineering,
Khulna University of Engineering and Technology, Khulna-9203, Bangladesh.
¹ zabiralnazi@yahoo.com
² tasnimabir@gmail.com
Abstract. Industrial pollution resulting in ozone layer depletion has led to increased UV radiation in recent years, a major environmental risk factor for melanoma, an invasive skin cancer, and other keratinocyte cancers. The incidence of deaths from melanoma has risen worldwide in the past two decades. Deep learning has been employed successfully for dermatologic diagnosis. In this work, we present a deep learning-based scheme to automatically segment skin lesions and detect melanoma from dermoscopy images. U-Net was used to segment the lesion from the surrounding skin. The limitation of training deep neural networks on limited medical data was addressed with data augmentation and transfer learning. In our experiments, U-Net was used with spatial dropout to reduce overfitting, and different augmentation effects were applied to the training images to increase the number of data samples. The model was evaluated on two different datasets. It achieved a mean Dice score of 0.87 and a mean Jaccard index of 0.80 on the ISIC 2018 dataset. The trained model was assessed on the PH² dataset, where it achieved a mean Dice score of 0.93 and a mean Jaccard index of 0.87 with transfer learning. For the classification of malignant melanoma, a DCNN-SVM model was used, where we compared state-of-the-art deep nets as feature extractors to assess the applicability of transfer learning in the dermatologic diagnosis domain. Our best model achieved a mean accuracy of 92% on the PH² dataset. The findings of this study are expected to be useful in cancer diagnosis research.
Keywords: Dermoscopy image, Skin cancer, Segmentation, Deep learning,
Augmentation, Melanoma classification, Transfer learning.
1 Introduction
Skin cancer is one of the most common types of cancer, with almost half of cancer diagnoses worldwide being of this type [1]. Each year, more than 5 million people are diagnosed with skin cancer in the United States alone [2]. Melanoma is a malignant skin cancer which develops from melanocytes, a group of pigment-containing cells. In recent years, melanoma's incidence and death rate have increased substantially. In 2017, around 87 thousand new cases of melanoma and roughly 10 thousand melanoma-related deaths were reported in the U.S. According to one study, the lifetime risk of developing invasive melanoma in the United States is projected to rise above 1.8% [3-5]. However, melanoma is curable if detected at an early stage. Melanoma is often wrongly classified as seborrheic keratosis, which is benign. Automated diagnosis of melanoma could therefore improve diagnostic accuracy by assisting dermatologists [6].
Pham et al. used data augmentation, Inception-v4 for feature extraction, and a support vector machine, random forest, and neural network for the classification of skin lesions [1]. In [2], the authors presented a deep learning scheme for melanoma classification with several pre-processing steps, such as morphological operations and harmonic inpainting for hair removal from skin images. A fully convolutional-deconvolutional architecture is proposed in [6], which achieves a Jaccard index of 0.507 on the ISIC 2017 dataset; VGG16 is used for lesion classification with transfer learning. In [7], the authors proposed a U-Net with residual connections for skin lesion segmentation.
In this paper, we propose an automatic deep learning-based scheme to segment skin lesions and classify the cancer type with transfer learning. In medical imaging, data is not abundant enough to train deep models, which is a limitation. We applied different augmentations to the original training images to increase the size of the training set. Our model was tested on the ISIC 2018 and PH² datasets for the segmentation problem and on the PH² dataset for the classification problem.

The rest of the paper is organized as follows: in Section 2, the proposed methodology is discussed. The next section contains a detailed analysis of the results, and finally, a conclusion of this work is drawn.
2 Methodology
The methods of this work focus on two tasks: lesion segmentation and lesion classification. Deep learning is a generalized tool for solving many problems from different domains with a high success rate. However, due to the scarcity of labeled dermoscopy images, it is challenging to train deep convolutional neural networks for lesion segmentation. So, we used U-Net with dropout and data augmentation, as demonstrated in Fig. 1.
Fig. 1. Training of U-Net for semantic segmentation with augmented data
For the melanoma detection task, we propose a DCNN-based feature extraction method with an SVM classifier. The overall classification method is shown in Fig. 2.
Fig. 2. Proposed classification scheme for melanoma detection
2.1 Dermoscopy Image Augmentation
Augmentation is widely used on natural images. Medical images are relatively sensitive to different operations, so augmentation must be applied carefully, as excessive augmentation can distort the actual distribution of the training data by introducing too many outliers. In [1, 8], the authors used different data augmentation techniques on medical and dermoscopy images and achieved better results. In our work, we applied nine types of augmentation to the training images: random rotation, random flip, random zoom, Gaussian distortion, random brightness, random color, random contrast, random elastic distortion, and histogram equalization [9]. The dermoscopy images are collected from different sources and generated using different devices. Hence, the augmentations were chosen to introduce more variation into the training set without altering the semantic meaning of the images. Initially, from 2594 images, 390 images were kept for the test set. An additional 2500 images were generated using data augmentation. Finally, from the resulting 5094 images, 3815 images were used for training and 889 images were used for validation purposes.
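Such a pipeline can be built with the Augmentor library [9]. The sketch below is illustrative only: the directory paths are placeholders, and the probabilities and magnitudes are assumptions rather than the exact values used in our experiments.

```python
import Augmentor

# Stochastic augmentation pipeline over the training images.
# Paths and operation parameters are illustrative assumptions.
p = Augmentor.Pipeline("data/isic2018/train/images")
p.ground_truth("data/isic2018/train/masks")  # keep lesion masks aligned with images

p.rotate(probability=0.7, max_left_rotation=15, max_right_rotation=15)
p.flip_random(probability=0.5)
p.zoom_random(probability=0.5, percentage_area=0.9)
p.gaussian_distortion(probability=0.3, grid_width=4, grid_height=4,
                      magnitude=8, corner="bell", method="in")
p.random_brightness(probability=0.5, min_factor=0.7, max_factor=1.3)
p.random_color(probability=0.5, min_factor=0.7, max_factor=1.3)
p.random_contrast(probability=0.5, min_factor=0.7, max_factor=1.3)
p.random_distortion(probability=0.3, grid_width=4, grid_height=4, magnitude=8)  # elastic
p.histogram_equalisation(probability=0.3)

p.sample(2500)  # generate 2500 additional augmented samples
```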
Fig. 3. Sample augmented images from ISIC 2018 dataset
Augmentation reduces the model's variance and generalization error by acting as a regularization method. Augmentation on the ISIC 2018 dataset is demonstrated in Fig. 3.
2.2 Lesion Segmentation with U-Net
U-Net is an efficient fully convolutional network which was proposed for biomedical image segmentation [10]. In this work, we have used a U-Net-inspired fully convolutional network architecture which can be trained end-to-end for lesion segmentation from dermoscopy images. The network consists of the repeated application of two 3×3 convolutions with the same number of feature maps; we have used batch normalization and spatial dropout in our network, which showed improved performance. The Rectified Linear Unit (ReLU) was used as the activation function, and 2×2 max pooling was applied for down-sampling. In the latter half of the network, for up-sampling of the feature maps, a 2×2 up-convolution is applied, followed by feature concatenation and two 3×3 convolutions. At the final layer, a 1×1 convolution is applied and sigmoid activation is used. The detailed network architecture of the proposed model is shown in Fig. 4 and Table 1. Batch normalization is used to reduce the internal covariate shift of the network and to improve the performance of backpropagation by gradient scaling. Spatial dropout is used as a regularization method along with data augmentation. The model showed better generalization with dropout and augmentation, as demonstrated in Fig. 6. The model was trained on 3815 images for 100 epochs with a learning rate of 0.001. The Adam optimizer was used with a batch size of 32. Dice coefficient loss was used to train the network. The Dice score coefficient (DSC) is a measure of similarity between the segmented mask and the gold-standard ground truth, widely used to assess the performance of semantic segmentation. Milletari et al. first proposed it as a loss function [11-12]; in our work, we have used a slightly different two-class variant of the dice loss.
Let $g_i$ be the ground truth and $p_i$ be the predicted mask for the $i$-th pixel; then, for $N$ pixels, the dice loss can be expressed as:

$L_{dice} = 1 - \dfrac{2\sum_{i=1}^{N} g_i\, p_i}{\sum_{i=1}^{N} g_i + \sum_{i=1}^{N} p_i + \epsilon}$   (1)

Here, $\epsilon$ is used to ensure numerical stability of the loss function.
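As a concrete illustration, a smoothed dice loss of this form can be written in Keras [11] as follows. This is a minimal sketch consistent with Eq. (1); the value of the stability constant eps is an assumption.

```python
from keras import backend as K

def dice_coef(y_true, y_pred, eps=1.0):
    # flatten masks and compute the smoothed Dice score of Eq. (1)
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection) / (K.sum(y_true_f) + K.sum(y_pred_f) + eps)

def dice_loss(y_true, y_pred):
    # loss minimized during training: 1 - DSC
    return 1.0 - dice_coef(y_true, y_pred)

# Training setup described in the text (Adam, learning rate 0.001):
# model.compile(optimizer=Adam(lr=1e-3), loss=dice_loss, metrics=[dice_coef])
```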
Fig. 4. Proposed architecture for end-to-end semantic lesion segmentation
Table 1. Architecture of the proposed U-Net for lesion segmentation

Layer           Feature size   No. of filters | Layer           Feature size   No. of filters
input           224×224        -              | conv_block 7    14×14          512
conv_block 1    224×224        32             | upsampling 2    28×28          512
pooling 1       112×112        32             | concatenate 2   28×28          768
conv_block 2    112×112        64             | conv_block 8    28×28          256
pooling 2       56×56          64             | upsampling 3    56×56          256
conv_block 3    56×56          128            | concatenate 3   56×56          384
pooling 3       28×28          128            | conv_block 9    56×56          128
conv_block 4    28×28          256            | upsampling 4    112×112        128
pooling 4       14×14          256            | concatenate 4   112×112        192
conv_block 5    14×14          512            | conv_block 10   112×112        64
pooling 5       7×7            512            | upsampling 5    224×224        64
conv_block 6    7×7            1024           | concatenate 5   224×224        96
upsampling 1    14×14          1024           | conv_block 11   224×224        32
concatenate 1   14×14          1536           | conv 1×1        224×224        1
In Table 1, each conv_block consists of the sequence Conv + BN + ReLU + Conv + BN + ReLU + Spatial Dropout, with a kernel size of 3×3 and stride 1. At the final layer, a 1×1 convolution is applied and sigmoid activation is used.
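For illustration, the conv_block described above can be sketched in Keras as follows; the spatial dropout rate is an assumption, since the exact value is not stated in the text.

```python
from keras.layers import Conv2D, BatchNormalization, Activation, SpatialDropout2D

def conv_block(x, n_filters, drop_rate=0.2):
    """Conv + BN + ReLU + Conv + BN + ReLU + Spatial Dropout, as in Table 1."""
    for _ in range(2):
        x = Conv2D(n_filters, (3, 3), strides=1, padding='same')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
    # spatial dropout removes entire feature maps rather than individual pixels
    return SpatialDropout2D(drop_rate)(x)
```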
2.3 Feature extraction with DCNN
Deep Convolutional Neural Networks have been prominently successful in solving many machine learning and computer vision problems. The convolutional and pooling layers in a DCNN perform feature extraction, which can be fed to a trainable classifier. Mallat initiated the mathematical analysis of DCNNs as feature extractors with scattering networks, where signals are passed to convolution layers computing a semi-discrete wavelet transform and a modulus non-linearity, without pooling operations. Such networks are translation-invariant and stable as feature extractors. It has been argued that the invariance of the features depends on the network depth and pooling operators (sub-sampling, max pooling, or average pooling). The depth of the network determines the translation invariance property, whereas pooling is crucial for vertical translation invariance [13-14]. DCNNs have a large number of parameters, so without enough training samples it is not effective to train deep convolutional neural networks from scratch. In deep learning, a representation learned in one domain can be reused in a different domain, which is commonly called transfer learning. A DCNN trained on a task $T_1$ can be used as a feature extractor for another task $T_2$, with other machine learning models trained on top of the representation map. In this work, we use pre-trained DCNNs as feature extractors which were trained on the large ImageNet dataset [15-16].
DenseNet is a deep convolutional neural network which exploits extensive feature reuse. For a dense block with $L$ layers, there are $L(L+1)/2$ direct connections, and the feature maps are shared by all the layers in DenseNet. DenseNet has many advantages over other models, such as attenuating the vanishing gradient problem, robust feature propagation, reduced parameters, and feature sharing [17]. The block architecture of the DenseNet feature extractor is demonstrated in Fig. 5.
Fig. 5. Feature extraction with DenseNet
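To make the scheme concrete, the sketch below extracts DenseNet201 features in Keras; flattening the final 7×7×1920 feature maps of a 224×224 input yields the 94080 features listed in Table 3. The array name `images` is a placeholder for the pre-cropped lesion images.

```python
from keras.applications.densenet import DenseNet201, preprocess_input

# ImageNet-pretrained DenseNet201 without its classification head
base = DenseNet201(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3); placeholder input
    maps = base.predict(preprocess_input(images))
    return maps.reshape(len(images), -1)  # (n, 7*7*1920) = (n, 94080)
```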
3 Result Analysis
To evaluate the performance of the segmentation method, the model was tested on the ISIC 2018 and PH² datasets. For the ISIC dataset, 2500 additional samples were generated by applying augmentation. From 2594 images, 390 images were kept for testing.

Fig. 6. U-Net and Dropout U-Net (augmentation) training with dice loss

Fig. 7. Jaccard index comparison for U-Net and Dropout U-Net (augmentation) training

Finally, from the resulting 5094 images, 3815 images were used for training and 889 images were used for validation purposes [18-20].
The trained model was then re-trained on the PH² dataset [21]. The model showed better results on the PH² dataset as transfer learning was utilized. During training, two variations of the model were compared: U-Net without spatial dropout, and U-Net with spatial dropout and data augmentation. The Dropout U-Net with augmentation generalized well and performed better than the plain variant in the segmentation test, as shown in Figs. 6-7. A comparison of the binary masks predicted by the models is demonstrated in Fig. 8. Augmentation and dropout worked as regularization methods, so the Dropout U-Net with augmentation was less prone to overfitting and performed well, as can be seen in Figs. 6-7.
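A minimal sketch of this transfer step is shown below; the builder function, weight file name, data arrays, and epoch count are all assumptions for illustration, and dice_loss/dice_coef are the functions defined earlier.

```python
from keras.optimizers import Adam

# build_unet() is assumed to construct the architecture of Table 1
model = build_unet()
model.load_weights('unet_isic2018.h5')   # weights learned on ISIC 2018 (assumed file)
model.compile(optimizer=Adam(lr=1e-4), loss=dice_loss, metrics=[dice_coef])
# re-train (fine-tune) on the PH2 images and masks (placeholder arrays)
model.fit(ph2_images, ph2_masks, batch_size=32, epochs=50)
```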
Table 2. Segmentation results

Dataset     Dice coefficient score   Jaccard index
ISIC 2018   0.87±0.31                0.80±0.36
PH²         0.93±0.13                0.87±0.19

(mean ± 2×standard deviation)
Fig. 8. Binary masks predicted by U-Net and Dropout U-Net (augmentation)
Table 3. Classification results

Feature extractor   No. of features   Classifier          Accuracy (10-fold cross-validation)
VGG-16              25088             SVM                 0.89±0.11
                                      Random Forest       0.86±0.10
                                      Decision Tree       0.82±0.12
                                      AdaBoost            0.89±0.13
                                      Gradient boosting   0.91±0.13
VGG-19              25088             SVM                 0.90±0.12
                                      Random Forest       0.86±0.10
                                      Decision Tree       0.76±0.21
                                      AdaBoost            0.91±0.13
                                      Gradient boosting   0.86±0.11
InceptionResNetV2   38400             SVM                 0.90±0.11
                                      Random Forest       0.82±0.10
                                      Decision Tree       0.82±0.17
                                      AdaBoost            0.88±0.10
                                      Gradient boosting   0.88±0.09
ResNet50            2048              SVM                 0.91±0.11
                                      Random Forest       0.85±0.11
                                      Decision Tree       0.85±0.15
                                      AdaBoost            0.90±0.15
                                      Gradient boosting   0.88±0.17
Xception            100352            SVM                 0.89±0.10
                                      Random Forest       0.86±0.12
                                      Decision Tree       0.79±0.12
                                      AdaBoost            0.86±0.12
                                      Gradient boosting   0.82±0.17
InceptionV3         51200             SVM                 0.89±0.12
                                      Random Forest       0.81±0.10
                                      Decision Tree       0.78±0.18
                                      AdaBoost            0.86±0.15
                                      Gradient boosting   0.83±0.12
DenseNet201         94080             SVM                 0.92±0.14
                                      Random Forest       0.84±0.10
                                      Decision Tree       0.83±0.13
                                      AdaBoost            0.91±0.13
                                      Gradient boosting   0.84±0.17

(mean ± 2×standard deviation)
The summary of the segmentation performance metrics is given in Table 2, and a histogram analysis of the model's performance is demonstrated in Fig. 9.

For classification, we first compared state-of-the-art deep convolutional neural networks as feature extractors, which can be used without further training since the amount of available training data is low. After feature extraction, we classified the lesion type with different classifiers and conducted a comparative analysis of the results.
Fig. 9. Histogram analysis of Dice coefficient score and Jaccard's coefficient
DenseNet showed the most promising transfer characteristics for representation learning as a feature extractor in our experiment. DenseNet-SVM achieved a maximum mean accuracy of 92.00% on the PH² dataset.
For melanoma detection, 200 dermoscopy images were used to assess the performance of the classification scheme. The images and the generated binary masks were fused together, and the lesion region was cropped from each image. The resulting image was used for feature extraction, and the features were then fed to the classifiers. Class weights were imposed when training the support vector machine, as the data distribution is imbalanced. Finally, 10-fold cross-validation was performed to fine-tune the classifiers. A comparative study was conducted with many possible feature extractor-classifier pairs. The summary of the classification results is shown in Table 3.
4 Conclusion
In this paper, we have presented a deep learning-based automatic method for skin lesion segmentation and melanoma detection. Existing approaches to these tasks have notable limitations: they either rely on parameterized algorithms that are relatively hard to tune, or require training deep neural networks with large amounts of labeled data. U-Net with spatial dropout performed well, as it was trained with additional augmented training images. For domains without enough data samples, transfer learning can be used effectively. We have used state-of-the-art computer vision models pre-trained on natural images as feature extractors for skin lesion classification from dermoscopy images, which showed promising performance. The model can be applied even in a domain with small data samples, with generalization for automatic segmentation and classification. In the future, more augmentation methods can be explored in order to improve the performance of the method. The model may also be tested on dermoscopy images with diverse skin tones from different ethnicities.
References

1. Pham, T.-C., Luong, C.-M., Visani, M., Hoang, V.-D.: Deep CNN and Data Augmentation for Skin Lesion Classification. In: Lecture Notes in Computer Science, pp. 573-582 (2018). doi: 10.1007/978-3-319-75420-8_54
2. Salido, J.A.A., Ruiz, C., Jr.: Using Deep Learning for Melanoma Detection in Dermoscopy Images. International Journal of Machine Learning and Computing. 8, 61-68 (2018). doi: 10.18178/ijmlc.2018.8.1.664
3. Kohli, J.S., Tolomio, E., Frigerio, S., Maurichi, A., Rodolfo, M., Bennett, D.C.: Common Delayed Senescence of Melanocytes from Multiple Primary Melanoma Patients. J. Invest. Dermatol. 137, 766-768 (2017). doi: 10.1016/j.jid.2016.10.026
4. Ahmed, H.M., Al-azawi, R.J., Abdulhameed, A.A.: Evaluation Methodology between Globalization and Localization Features Approaches for Skin Cancer Lesions Classification. J. Phys. Conf. Ser. 1003, 012029 (2018). doi: 10.1088/1742-6596/1003/1/012029
5. Glazer, A.M., Winkelmann, R.R., Farberg, A.S., Rigel, D.S.: Analysis of Trends in US Melanoma Incidence and Mortality. JAMA Dermatol. (2016). doi: 10.1001/jamadermatol.2016.4512
6. Thao, L.T., Quang, N.H.: Automatic skin lesion analysis towards melanoma detection. In: 2017 21st Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES) (2017). doi: 10.1109/IESYS.2017.8233570
7. Venkatesh, G.M., Naresh, Y.G., Little, S., O'Connor, N.E.: A Deep Residual Architecture for Skin Lesion Segmentation. In: Lecture Notes in Computer Science, pp. 277-284 (2018). doi: 10.1007/978-3-030-01201-4_30
8. Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., Greenspan, H.: GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 321, 321-331 (2018). doi: 10.1016/j.neucom.2018.09.013
9. Bloice, M.D., Stocker, C., Holzinger, A.: Augmentor: An Image Augmentation Library for Machine Learning. The Journal of Open Source Software. 2, 432 (2017). doi: 10.21105/joss.00432
10. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Lecture Notes in Computer Science, pp. 234-241 (2015). doi: 10.1007/978-3-319-24574-4_28
11. Chollet, F.: Keras, https://github.com/fchollet/keras (2015).
12. Weiss, K., Khoshgoftaar, T.M., Wang, D.: A survey of transfer learning. Journal of Big Data. 3 (2016). doi: 10.1186/s40537-016-0043-6
13. Wiatowski, T., Bolcskei, H.: A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction. IEEE Trans. Inf. Theory. 64, 1845-1866 (2018). doi: 10.1109/TIT.2017.2776228
14. Mallat, S.: Group Invariant Scattering. Commun. Pure Appl. Math. 65, 1331-1398 (2012). doi: 10.1002/cpa.21413
15. Garcia-Gasulla, D., Parés, F., Vilalta, A., Moreno, J., Ayguadé, E., Labarta, J., Cortés, U., Suzumura, T.: On the Behavior of Convolutional Nets for Feature Extraction. J. Artif. Intell. Res. 61, 563-592 (2018). doi: 10.1613/jair.5756
16. Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S.: Corrigendum: Dermatologist-level classification of skin cancer with deep neural networks. Nature. 546, 686 (2017). doi: 10.1038/nature21056
17. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely Connected Convolutional Networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). doi: 10.1109/CVPR.2017.243
18. ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection.
19. Codella, N.C., Gutman, D., Celebi, M.E., et al.: Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1710.05006 (2017). doi: 10.1109/ISBI.2018.8363547
20. Li, X., Chen, H., Qi, X., Dou, Q., Fu, C.-W., Heng, P.-A.: H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes. IEEE Trans. Med. Imaging. (2018). doi: 10.1109/TMI.2018.2845918
21. Mendonca, T., Ferreira, P.M., Marques, J.S., Marcal, A.R.S., Rozeira, J.: PH2 - a dermoscopic image database for research and benchmarking. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 5437-5440 (2013). doi: 10.1109/EMBC.2013.6610779
... There are many existing works on watermarking most of the recent ones are based on deep learning. Deep learning algorithnls are mostly used in image classification [3,4,5], segmentation [6,7,8], signal processing domain [9,10,11]. The DCT coefficients for the host image as well as the watermark image are determined. ...
... The low value of MSE indicates that the watermarked image is almost similar to the original image which is desired for invisible or transparent watermarking. The MSE between the host image x and the watermarked image y is defined by (7). ...
Conference Paper
Full-text available
Nowadays improving lossless data hiding methods is on demand. For this demand, here in this paper the conventional transforms- Discrete Cosine Transform (DCT), Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) have been implemented to do watermark on various host images. To find out which transform method is more reliable and lossless, the performance parameters- Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR) are analyzed here. To compare the robustness among the methods, the watermarked images have been attacked by various noises. Different noise removing filters have been run over the noisy images to find out an appropriate filter for a corresponding noise
... To prove that our model (Improved-U-Net++) performs exemplary segmentation, we selected some current model architectures to conduct experiments on the ISIC2018 Task I dataset [2,22,24,41,53,64]. Table 7 shows the segmentation of the different models. ...
... Segmentation of skin lesion images under dermoscopy still presents some technical challenges, such as two skin lesions in the same [53] 0 . 7 7 5 U-Net and DCNN-SVM [2,53] 0 . 8 0 0 MultiResUNet [22] 0 . ...
Article
Full-text available
In the medical field, melanoma is one of the most dangerous skin cancers. However, the accuracy rate of doctors’ identification of melanoma is only 60%. Diagnosis requires higher technical experience and low fault tolerance for doctors who identify melanoma and other skin lesions. Therefore, the accurate segmentation of melanoma is of vital importance for clinical diagnosis and treatment. The current segmentation of melanoma is mainly based on fully connected networks (FCNs) and U-Net. Nevertheless, these two kinds of neural networks are prone to parameter redundancy, and the gradient disappears when depth increases, which reduces the Jaccard index of the skin lesion image segmentation model. To solve the above problems and improve the survival rate of melanoma patients, this paper proposes an improved skin lesion segmentation model based on U-Net++. In particular, we introduce a new loss function, which improves the Jaccard index of skin lesion image segmentation. The experiments show that our model has excellent performance on the segmentation of the ISIC2018 Task I dataset, and achieves a Jaccard index of 84.73%. The proposed method improves the Jaccard index of segmentation of skin lesion images and could also assist dermatological doctors in determining and diagnosing the types of skin lesions and the boundary between lesions and normal skin.
... Also, it was proposed by the authors of [73] to use sparse kernel models to represent feature data in a high-dimensional feature vector. According to the authors of [74], U-Net can be used to detect malignant tumors automatically. As a part of their IoT system, the authors of [75] employed transfer learning and CNN. ...
Article
Full-text available
The Internet of Medical Things (IoMT) has dramatically benefited medical professionals that patients and physicians can access from all regions. Although the automatic detection and prediction of diseases such as melanoma and leukemia is still being investigated and studied in IoMT, existing approaches are not able to achieve a high degree of efficiency. Thus, with a new approach that provides better results, patients would access the adequate treatments earlier and the death rate would be reduced. Therefore, this paper introduces an IoMT proposal for medical images’ classification that may be used anywhere, i.e., it is an ubiquitous approach. It was designed in two stages: first, we employ a transfer learning (TL)-based method for feature extraction, which is carried out using MobileNetV3; second, we use the chaos game optimization (CGO) for feature selection, with the aim of excluding unnecessary features and improving the performance, which is key in IoMT. Our methodology was evaluated using ISIC-2016, PH2, and Blood-Cell datasets. The experimental results indicated that the proposed approach obtained an accuracy of 88.39% on ISIC-2016, 97.52% on PH2, and 88.79% on Blood-cell datsets. Moreover, our approach had successful performances for the metrics employed compared to other existing methods.
... Similarly , the proposed method is tested for ISIC 20 18 dataset where the images are having high definition quality along with more increase in complexity or artefacts present in the images, while evaluating the images for ISIC 20 18 dataset. The proposed method is compared to lesion extraction methods that have been developed previously like Venkatesh et al. [33], Shahin et al. [34] and Nazi et al. [35]. Table 3 gives the proposed method's quantitative measures as well as the approaches mentioned above. ...
Conference Paper
Full-text available
Computer vision-based disease diagnosis is the recent trend for skin melanoma detection. This results in early detection resulting in reduced fatalities due to skin melanoma. A number of both supervised and unsupervised computer vision algorithms have been developed. However, a single approach may not apply for all types of dermoscopic images. The algorithm complexity increases while processing color dermoscopic images. Therefore, in this paper, a clustering-based method is developed for melanocytic lesion extraction. The algorithm processes the original dermoscopic images for lesion extraction. Initially, the dermoscopic images are pre-processed for removal of artifacts like hairs, gels, ruler marks etc. Furthermore, it is followed by mean shift clustering technique for lesion feature segregation from that of healthy regions. Moreover, the feature image is post processed using morphological operations for extraction of lesions from dermoscopic images. The proposed algorithm is tested quantitatively and qualitatively via publicly available datasets such as ISIC 2016, ISIC 2017, ISIC 2018 and PH2. The average accuracies for all datasets are 93.76% which can be compared with different approaches developed so far. The results repesent the superiority of the proposed method.
... The method of moving a portioned mask of skin lesions, alongside descriptions of clinical rules, such as, shading, texture, and morphological qualities, to the CNN through U-Net [19] has been used. Later, it introduced a two-advance model for the classification of skin lesions [20]. Shading, texture, and shape qualities were removed from images through U-Net and sent to a Support Vector Machine (SVM) classifier to analyze favourable or dangerous dermoscopy pictures. ...
Article
Full-text available
Among various types of skin diseases, skin cancer is the deadliest form of the disease. This paper classifies seven types of skin diseases: Actinic keratosis and intraepithelial carcinoma, Basal cell carcinoma, Benign keratosis, Dermatofibroma, Melanoma, Melanocytic type, and Vascular lesions. The primary objective of this paper is to evaluate the performance of these deep learning networks on skin lesion images. The lesion classification is implemented through transfer learning on fourteen deep learning networks: AlexNet, GoogleNet, ResNet50, VGG16, VGG19, ResNet101, InceptionV3, InceptionResNetV2, SqueezeNet, DenseNet201, ResNet18, MobileNetV2, ShuffleNet and NasNetMobile. The dataset used for these experiments are from ISIC 2018 of about 10,154 images. The results show that DenseNet201 performs best with 0.825 accuracy and improves skin lesion classification under multiple diseases. The proposed work shows the various parameters, including the accuracy of all fourteen deep learning networks, which helped build an efficient automated classification model for multiple skin lesions.
... Another study [38] proposed two-stage frameworks; in the first stage, the interclass difference of data distribution was carried out, while in the second stage, training of deep CNN on the ISIC-2016 dataset was performed and achieved a 94% of F-score. In several other studies, such as [10,39,40], transfer learning has been applied using AlexNet for classification on the HAM1000 dataset and achieved an accuracy of 96.87%. Some other studies [37], and [41], used the transfer learning technique using VGG-16 for feature extraction and SVM, decision tree, linear discriminant analysis, and K-nearest neighbor algorithms for classification on HAM10000 and ISIC dataset. ...
Article
Full-text available
Human skin is the most exposed part of the human body that needs constant protection and care from heat, light, dust, and direct exposure to other harmful radiation, such as UV rays. Skin cancer is one of the dangerous diseases found in humans. Melanoma is a form of skin cancer that begins in the cells (melanocytes) that control the pigment in human skin. Early detection and diagnosis of skin cancer, such as melanoma, is necessary to reduce the death rate due to skin cancer. In this paper, the classification of acral lentiginous melanoma, a type of melanoma with benign nevi, is being carried out. The proposed stacked ensemble method for melanoma classification uses different pre-trained models, such as Xception, Inceptionv3, InceptionResNet-V2, DenseNet121, and DenseNet201, by employing the concept of transfer learning and fine-tuning. The selection of pre-trained CNN architectures for transfer learning is based on models having the highest top-1 and top-5 accuracies on ImageNet. A novel stacked ensemble-based framework is presented to improve the generalizability and increase robustness by fusing fine-tuned pre-trained CNN models for acral lentiginous melanoma classification. The performance of the proposed method is evaluated by experimenting on a Figshare benchmark dataset. The impact of applying different augmentation techniques has also been analyzed through extensive experimentations. The results confirm that the proposed method outperforms state-of-the-art techniques and achieves an accuracy of 97.93%.
... In general, deep CNN models are powerful for feature extraction in the computer vision field [50], especially the DenseNet 201 model, which has many advantages over other deep models because it is designed with a 201 layer that provides robust feature extraction and multi-level features, as well as a vanishing gradient problem with a reduced number of parameters [51]. Moreover, MLP was widely and successfully used in breast cancer classification using medical imaging modalities [6]; this led to develop and compare a hybrid architecture that outperforms the others since it relied one the best deep CNN model for feature extraction (DenseNet 201) and the best classifier (MLP) for binary classification in breast cancer diagnosis with pathological images. ...
Article
The diagnosis of breast cancer in the early stages significantly decreases the mortality rate by allowing the choice of adequate treatment. This study developed and evaluated twenty-eight hybrid architectures combining seven recent deep learning techniques for feature extraction (DenseNet 201, Inception V3, Inception ReseNet V2, MobileNet V2, ResNet 50, VGG16, and VGG19), and four classifiers (MLP, SVM, DT, and KNN) for a binary classification of breast pathological images over the BreakHis and FNAC datasets. The designed architectures were evaluated using: (1) four classification performance criteria (accuracy, precision, recall, and F1-score), (2) Scott Knott (SK) statistical test to cluster the proposed architectures and identify the best cluster of the outperforming architectures, and (3) the Borda Count voting method to rank the best performing architectures. The results showed the potential of combining deep learning techniques for feature extraction and classical classifiers to classify breast cancer in malignant and benign tumors. The hybrid architecture using the MLP classifier and DenseNet 201 for feature extraction (MDEN) was the top performing architecture with higher accuracy values reaching 99% over the FNAC dataset, 92.61%, 92%, 93.93%, and 91.73% over the four magnification factor values of the BreakHis dataset: 40X, 100X, 200X, and 400X, respectively. The results of this study recommend the use of hybrid architectures using DenseNet 201 for the feature extraction of the breast cancer histological images because it gave the best results for both datasets BreakHis and FNAC, especially when combined with the MLP classifier.
Preprint
Full-text available
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 134 research papers that deal with deep learning based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. We summarize all examined works in a comprehensive table to facilitate comparisons.
Conference Paper
Deep learning techniques have been widely employed in semantic segmentation problems, especially in medical image analysis, for understanding image patterns. Skin cancer is a life-threatening problem, whereas timely detection can prevent and reduce the mortality rate. The aim is to segment the lesion area from the skin cancer image to help experts in the process of deeply understanding tissues and cancer cells' formation. Thus, we proposed an improved fully convolutional neural network (FCNN) architecture for lesion segmentation in dermoscopic skin cancer images. The FCNN network consists of multiple feature extraction layers forming a deep framework to obtain a larger vision for generating pixel labels. The novelty of the network lies in the way layers are stacked and the generation of customized weights in each convolutional layer to produce a full resolution feature map. The proposed model was compared with the top four winners of the International Skin Imaging Collaboration (ISIC) challenge using evaluation metrics such as accuracy, Jaccard index, and dice co-efficient. It outperformed the given state-of-the-art methods with higher values of the accuracy and Jaccard index.
Chapter
Skin cancer is one of the most common forms of cancer causing millions of deaths around the world. Recently, many advanced approaches are adopted to detect the lesion with the help of image processing and deep learning. Skin cancer segmentation is a challenging task which eventually enhances the detection and feature extraction from a dermoscopic image for further evaluation. In this paper, CNN pipeline architecture is proposed with the help of wide residual network for segmentation of dermoscopic images. The proposed method involves simultaneous introduction of noise for the training process. Proposed architecture is trained with images provided by the ISCI 2019 dataset consisting of 8000 patients. The proposed state of the art achieved a Jaccard index of 0.887 and an accuracy of 0.949.
Article
Full-text available
Huge efforts have been put in the developing of diagnostic methods to skin cancer disease. In this paper, two different approaches have been addressed for detection the skin cancer in dermoscopy images. The first approach uses a global method that uses global features for classifying skin lesions, whereas the second approach uses a local method that uses local features for classifying skin lesions. The aim of this paper is selecting the best approach for skin lesion classification. The dataset has been used in this paper consist of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are; sensitivity about 96%, specificity about 100%, precision about 100%, and accuracy about 97% for globalization approach while, sensitivity about 100%, specificity about 100%, precision about 100%, and accuracy about 100% for Localization Approach, these results showed that the localization approach achieved acceptable accuracy and better than globalization approach for skin cancer lesions classification.
Article
Full-text available
Melanoma is a common kind of cancer that affects a significant number of the population. Recently, deep learning techniques have achieved high accuracy rates in classifying images in various fields. This paper uses deep learning to automatically detect melanomas in dermoscopy images. The system first preprocesses the images by removing unwanted artifacts like hair removal and then automatically segments the skin lesion. It then classifies the images using Convolution Neural Network (CNN). The classifier has been tested on preprocessed and unprocessed dermoscopy images to evaluate its effectiveness. The results show an outstanding performance in terms of sensitivity, specificity and accuracy on the PH² dataset. The system was able to achieve accuracies 93% for classifying melanoma and non-melanoma, with sensitivities and specificities in 86-94% range. © 2018 International Association of Computer Science and Information Technology.
Article
Full-text available
Deep learning methods, and in particular convolutional neural networks (CNNs), have led to an enormous breakthrough in a wide range of computer vision tasks, primarily by using large-scale annotated datasets. However, obtaining such datasets in the medical domain remains a challenge. In this paper, we present methods for generating synthetic medical images using recently presented deep learning Generative Adversarial Networks (GANs). Furthermore, we show that generated medical images can be used for synthetic data augmentation, and improve the performance of CNN for medical image classification. Our novel method is demonstrated on a limited dataset of computed tomography (CT) images of 182 liver lesions (53 cysts, 64 metastases and 65 hemangiomas). We first exploit GAN architectures for synthesizing high quality liver lesion ROIs. Then we present a novel scheme for liver lesion classification using CNN. Finally, we train the CNN using classic data augmentation and our synthetic data augmentation and compare performance. In addition, we explore the quality of our synthesized examples using visualization and expert assessment. The classification performance using only classic data augmentation yielded 78.6% sensitivity and 88.4% specificity. By adding the synthetic data augmentation the results increased to 85.7% sensitivity and 92.4% specificity. We believe that this approach to synthetic data augmentation can generalize to other medical classification applications and thus support radiologists' efforts to improve diagnosis.
Article
Full-text available
Recent work has shown that convolutional networks can be substantially deeper, more accurate and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper we embrace this observation and introduce the Dense Convolutional Network (DenseNet), where each layer is directly connected to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections, one between each layer and its subsequent layer (treating the input as layer 0), our network has L(L+1)/2 direct connections. For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. Our proposed connectivity pattern has several compelling advantages: it alleviates the vanishing gradient problem and strengthens feature propagation; despite the increase in connections, it encourages feature reuse and leads to a substantial reduction of parameters; its models tend to generalize surprisingly well. We evaluate our proposed architecture on five highly competitive object recognition benchmark tasks. The DenseNet obtains significant improvements over the state-of-the-art on all five of them (e.g., yielding 3.74% test error on CIFAR-10, 19.25% on CIFAR-100 and 1.59% on SVHN).
Article
Full-text available
The generation of artificial data based on existing observations, known as data augmentation, is a technique used in machine learning to improve model accuracy, generalisation, and to control overfitting. Augmentor is a software package, available in both Python and Julia versions, that provides a high level API for the expansion of image data using a stochastic, pipeline-based approach which effectively allows for images to be sampled from a distribution of augmented images at runtime. Augmentor provides methods for most standard augmentation practices as well as several advanced features such as label-preserving, randomised elastic distortions, and provides many helper functions for typical augmentation tasks used in machine learning.
Chapter
In this paper, we propose an automatic approach to skin lesion region segmentation based on a deep learning architecture with multi-scale residual connections. The architecture of the proposed model is based on UNet [22] with residual connections to maximise the learning capability and performance of the network. The information lost in the encoder stages due to the max-pooling layer at each level is preserved through the multi-scale residual connections. To corroborate the efficacy of the proposed model, extensive experiments are conducted on the ISIC 2017 challenge dataset without using any external dermatologic image set. An extensive comparative analysis is presented with contemporary methodologies to highlight the promising performance of the proposed methodology.
Article
Liver and liver tumor segmentation plays an important role in hepatocellular carcinoma diagnosis and treatment planning. Recently, fully convolutional neural networks (FCNs) serve as the back-bone in many volumetric medical image segmentation tasks, including 2D and 3D FCNs. However, 2D convolutions can not fully leverage the spatial information along the $z$-axis direction while 3D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm. In this way, the H-DenseUNet can harness the hybrid deep features effectively for volumetric segmentation. We extensively evaluated our method on the MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge. Our method outperformed other state-of-the-art methods on the overall score, with dice per case on liver and tumor as 0.961 and 0.686, as well as global dice score on liver and tumor as 0.965 and 0.829, respectively.