Artificial Intelligence in Cosmetic
Dermatology: A Systematic Literature
Review
PAT VATIWUTIPONG1, SIRAWICH VACHMANUS1, THANAPON NORASET1, AND
SUPPAWONG TUAROB1
1Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand
Corresponding author: Suppawong Tuarob (e-mail: suppawong.tua@mahidol.edu).
This research project is supported by Mahidol University (Fundamental Fund: fiscal year 2023 by National Science, Research and
Innovation Fund (NSRF)).
ABSTRACT
Over the last ten years, the field of dermatology has experienced significant advancements through the
utilization of artificial intelligence (AI) technologies. The adoption of such technologies is multifaceted,
encompassing tasks such as screening, diagnosis, treatment, and prediction of treatment outcomes. The
majority of prior systematic reviews in this domain were centered on medical dermatology, with the aim
of detecting and managing serious skin diseases such as skin cancer. However, the adoption of AI in
cosmetic dermatology, which focuses on improving skin conditions for cosmetic purposes, has not been
comprehensively reviewed. Therefore, the objective of this systematic review article is to analyze the
existing and recent research revolving around applications of AI in the field of cosmetic dermatology.
The study encompasses articles published between 2018 and 2023, where a total of 63 publications are
deemed relevant based on the established inclusion criteria, divided into five categories based on utilization
domains, namely cosmetic product development, skin assessment, skin condition diagnosis, treatment
recommendation, and treatment outcome prediction. This systematic review article provides not only
valuable insights for researchers interested in exploring new research areas related to aesthetic medicine
but also applicable guidance for practitioners seeking to implement AI technologies to address real-world
challenges in cosmetic services.
INDEX TERMS Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, Cosmetic
Dermatology, Sensitization Testing, Skin Condition Diagnosis, Skin Assessment, Treatment Recommenda-
tion
I. INTRODUCTION
Dermatology is a medical subspecialty that focuses on the
scientific investigation, diagnosis, treatment, and prevention
of disorders affecting the integumentary system, including
skin, hair, and nails. Dermatological conditions are varied
in terms of causes, severity, and symptoms [1]. Despite the
fact that dermatological disorders have been a longstand-
ing concern for humans, it was only in the 18th and 19th
centuries that skin disorders were investigated through a
broader medical lens. The progress of dermatology during
that period was concurrent with the advancements made in
the field of science. The field of dermatology experienced
a significant surge in growth subsequent to the scientific
revolution of the 19th century and has continued to undergo
further development in the present day [2].
Artificial intelligence (AI) pertains to the emulation of
human intelligence in machines that are programmed to
simulate human thought processes and behaviors [3]. AI
technologies have been adopted in medicine to assist repet-
itive tasks that rely on human experts, such as screening,
diagnosis, treatment, and analyses in epidemiology [4]. Ma-
chine learning (ML) is a subfield of AI that involves the
development of algorithms and statistical models that en-
able machines to learn from data without being explicitly
programmed. In essence, it emulates the cognitive processes
of human learning by utilizing experiential data to inform
decision-making. The task can be executed under the super-
vision of an expert, in a semi-supervised manner, or without
any supervision (i.e., unsupervised learning). Recently, the
progress in computational hardware technologies has played
a significant role in the emergence of deep learning (DL)
as a subfield of machine learning (ML) [5]. DL utilizes
deep neural network architectures to automatically extract
features from input data, thereby foregoing the traditional
domain-expert-dependent feature engineering processes [6].
Numerous studies have indicated that DL exhibits superior
performance in the field of medicine [7, 8], specifically
in dermatology [9, 10] when compared to traditional ML
methods. However, to compensate for the absence of guided
feature engineering processes, the superior accuracy of deep
learning is contingent upon the extensive scale of the under-
lying training datasets [11]. Consequently, it is crucial that
deep learning algorithms possess the ability to comprehend
patterns in disparate data derived from diverse sources and
formats. This is essential not only to guarantee the availabil-
ity of adequate training data but also to enable the algorithms
to capture the wide variety of diseases that afflict patients
from different geographical regions and backgrounds [12].
According to Borade and Kalbande [13], a significant
number of dermatologists have relied on conventional diag-
nostic techniques in the past, which can be laborious and
time-consuming. In addition, the field of skincare demands
a high level of precision and expertise from many profes-
sionals, necessitating specialized knowledge and abilities. For
instance, certain dermatological conditions may present with
a similar appearance, posing a challenge for even experts in
their classification. The aforementioned issues necessitate the
utilization of automated procedures that possess the ability to
furnish dermatologists and relevant healthcare professionals
with the requisite information necessary for their decision-
making processes. The current trend indicates a significant
rise in the adoption of AI and ML methodologies within
the dermatology domain, owing to the vast accumulation of
medical data. These technologies have been utilized as an
assistant to dermatologists for various tasks such as disease
diagnosis, evaluation of the severity of conditions, and devel-
opment of treatment recommendations [14]. Certain studies
even discovered that AI algorithms exhibit a high level of ac-
curacy when functioning as clinical assistants, and in certain
cases, their accuracy surpassed that of human dermatologists
[15].
Common obstacles encountered in the field of dermatol-
ogy often involve decision-making processes that entail skin
or hair pictures, which are frequently presented as computer
vision tasks that can be addressed by ML techniques. ML
approaches utilized in dermatology have the ability to ac-
quire knowledge from various types of image data, including
clinical, dermoscopic, histopathological, and self-captured
images. Such ability to intelligently process and extract use-
ful information from patient or specimen images has proved
useful not only for clinical dermatology but also for teleder-
matology [16], where consultant sessions are performed re-
motely and online. The proliferation of teledermatology and
self-assessment via smartphone applications can be attributed
to the restricted availability of dermatologists and advanced
healthcare services [17]. Furthermore, the exigencies of the
COVID-19 pandemic served as a catalyst for the expedited
implementation of teledermatology, where the utilization of
online dermatological consulting was observed to emerge as
a viable solution during the period of social distancing [18].
Considerable research has been carried out regarding the
utilization of AI within the realm of dermatological disorders.
The majority of prior studies have utilized AI in medical
dermatology, specifically in the context of diagnosing and
treating dermatological diseases that, if left untreated, may
have detrimental effects on patients’ well-being or even result
in mortality [19]. Therefore, several review articles have
focused on the utilization of AI and ML methodologies to ad-
dress challenges in medical dermatology. For instance, Wells
et al. [20] examined the utilization of AI in dermatopathol-
ogy. Furthermore, Zhang et al. [21] and Mosquera-Zamudio
et al. [22] conducted a systematic review of scholarly articles
that explore the use of DL to analyze melanoma images.
Recently, Jeong et al. [23] analyzed the research trends, find-
ings, and constraints pertaining to applying DL in medical
dermatology.
Cosmetic dermatology is distinct from medical dermatol-
ogy in that it focuses on addressing skin conditions that are
not attributable to illness, including but not limited to wrin-
kles, age spots, acne, freckles, and melasma [24]. Although
non-fatal and not posing a direct threat to a patient’s physical
health, these beauty-related dermatological conditions may
have psychological implications for individuals, including
diminished self-esteem and confidence, as well as enduring
adverse long-term mental effects [25]. In recent years, re-
search in cosmetic dermatology has also integrated AI and
ML techniques to aid dermatologists in the diagnosis [26, 27]
and prescription of treatment [28], as well as enhancing
cosmetic product development [29] and engaging with poten-
tial consumer bases [30]. However, to our knowledge, there
has been no systematic review, synthesis, or consolidation
of research on the applications of AI and ML in cosmetic
dermatology. Hence, the objective of this article is to employ
the standard methodology of systematic literature review
[31] to collect and compile recent and relevant research that
pertains to utilizing AI technologies in tackling challenging
issues in cosmetic dermatology, with the aim of bridging the
aforementioned contribution gap.
The contribution of this systematic review is to examine
the utilization of AI and ML techniques in cosmetic derma-
tology research, encompassing the entire spectrum of der-
matological procedures, including the upstream development
of cosmetic products, middle-stream diagnosis and treatment
activities carried out by dermatologists, and downstream as-
pects focusing on ensuring customer satisfaction. The subse-
quent sections of this review paper are structured as follows.
Section II explains the review methodology, which entails
the specification of the search terms used to query scholarly
documents, research questions, and inclusion and exclusion
criteria. Section III provides a comprehensive representation
of the chosen papers from various demographic angles. Sec-
tion IV comparatively discusses the chosen papers, which
are divided into categories based on the different stages in
cosmetic dermatological services. Section V sheds light on
the relevant potential future challenges and research topics.
Finally, Section VI concludes this systematic review article.
II. REVIEW METHODOLOGY
This systematic literature review uses a methodology that fol-
lows the Preferred Reporting Items for Systematic Reviews
and Meta-Analyses (PRISMA) method [32]. Specifically, the
research questions are first established. Then, a search strat-
egy is determined, including the selection of databases and
search keywords and filtering keyword-matched papers using
inclusion and exclusion criteria. Finally, all selected articles
are analyzed in terms of their objectives, techniques, strengths, and
limitations. Pertaining to recent literature on AI applications
in cosmetic dermatology, three research questions are raised:
(Q1) What are the specific tasks in cosmetic dermatology
where AI approaches are employed?
(Q2) What AI methodologies are employed within the cos-
metic dermatology domain?
(Q3) How well does AI demonstrate proficiency in tasks
related to cosmetic dermatology? Could AI effectively
aid dermatologists in their diagnosis and treatment
procedures, as well as reduce human involvement in
non-clinical cosmetic tasks?
A. SEARCH STRATEGY
We used the SCOPUS, IEEE Xplore, and PubMed databases
as our resources. The following query is used to retrieve the
initial set of papers:
(dermatology OR skin) AND (cosmetic)
AND (artificial intelligence OR machine learning
OR deep learning)
Duplicated and non-English papers are removed. Further-
more, papers published before 2018 are removed to retain
only recent papers.
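The automated part of this screening step can be sketched as follows, assuming the records exported from SCOPUS, IEEE Xplore, and PubMed have been merged into a single CSV file with illustrative title, year, and language columns; the file name and column names are placeholders rather than the databases' actual export format.

```python
# Minimal sketch of the automated filtering stage of the search strategy,
# under the assumptions stated above.
import pandas as pd

records = pd.read_csv("merged_search_results.csv")

# Remove duplicate titles (case-insensitive), non-English papers, and
# papers published before 2018, mirroring the filters described above.
records["title_key"] = records["title"].str.strip().str.lower()
records = records.drop_duplicates(subset="title_key")
records = records[records["language"].str.lower() == "english"]
records = records[records["year"] >= 2018]

records.drop(columns="title_key").to_csv("records_for_title_screening.csv", index=False)
```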
B. SELECTION METHOD
Keyword-matched papers from the preceding stage must un-
dergo a manual check for inclusion in further reviewing pro-
cesses. First, the papers go through a screening process based
on their titles, whereby only those papers that are deemed
relevant to the fields of AI and dermatology are retained,
while those that are not are excluded. The primary objective
of the title screening process is to eliminate unrelated papers
while maintaining the recall. Nevertheless, the possibility
of false positives persists, wherein papers may contain AI
and dermatology components but do not specifically pertain
to the utilization of computational intelligence technologies
in addressing challenges in cosmetic dermatology. Conse-
quently, the remaining papers undergo additional screening
based on their abstract content, whereby only those papers
that specifically address the applications of AI in cosmetic
dermatology are selected for further review. The articles that
successfully pass the initial abstract screening procedure are
subjected to an in-depth review and comparative analysis.
The scholarly articles reviewed in this study are limited
to journal articles and conference proceedings. This review
excluded posters, abstracts, extended abstracts, review pa-
pers, letters, and preprints. Furthermore, this review en-
compasses AI utilization in cosmetic dermatology, including
not only the diagnosis and treatment procedures but also
all aspects pertaining to cosmetic businesses, ranging from
the development of cosmetic products and measurement of
skin sensitization to the evaluation of customer satisfaction.
The scholarly articles being reviewed must necessitate the
utilization of either AI or ML in a certain portion of their
research addressing challenges in cosmetic dermatology. In
this study, rule-based decision algorithms, classical mathematical models,
formulas, and mere statistical techniques are not considered
AI on their own. Furthermore, the investigation pertain-
ing to the implementation of AI in the field of clinical or
medical dermatology is also deemed ineligible for inclusion.
Recall that the differentiation between cosmetic dermatology
and medical dermatology is based on the evaluation of the
outcome of the symptom. Medical dermatology encompasses
conditions that are classified as illnesses or diseases and those
that have the potential to cause mortality, such as malignant
tumors or cancer. If a medical condition pertains to aesthetics
or lacks a direct association with well-being or mortality,
such as conditions like acne, wrinkles, melasma, and hair
loss, it falls under the category of cosmetic dermatology. To
summarize, the exclusion criteria utilized are as follows:
(EC1) not being a full research paper,
(EC2) not related to dermatology,
(EC3) mainly focused on medical dermatology,
(EC4) not using AI approaches in any part of the studies.
III. SELECTION RESULTS
The number of articles in each screening stage is shown in
the diagram in Figure 1. This diagram was generated using the tool in [33].
The search string was queried on 30 December 2023.
Initially, the SCOPUS database returned 937 articles. The
253 papers that were published before 2018 were removed.
In the titles and abstracts screening stage, 546 papers that did
not meet our inclusion and exclusion criteria were excluded.
The full text of all remaining 174 papers was found. After
the full texts were screened, we finally obtained 63 articles,
which were 49 journal articles and 14 conference papers.
The papers were classified into five categories based on
the target tasks in cosmetic dermatology, which AI was ap-
plied to address, including the cosmetic product development
process, skin assessment, skin condition diagnosis, treatment
recommendation, and treatment outcome prediction. The
number of papers in each category is shown in Table 1.
The number of selected articles in each category by year of
publication is shown in Figure 2.
FIGURE 1: The PRISMA flow diagram of the study.
TABLE 1: Number of papers in each category
Category                            Number of papers
A  Cosmetic Product Development     12
B  Skin Assessment                   6
C  Skin Condition Diagnosis         32
D  Treatment Recommendation          4
E  Treatment Outcome Prediction      9
This review article focused on examining and comparing
the performance of AI techniques employed in the selected
literature. It is noteworthy that the predominant AI method-
ologies utilized in the field of cosmetic dermatology are those
falling under the umbrella of machine learning (ML). As
such, the subsequent sections of this review article will be
dedicated to expounding upon ML techniques for specificity
rather than referring to them as AI in a broader sense. We di-
vided ML into two main classes: conventional ML
and DL approaches. Deep learning (DL) is an ML technique
that autonomously extracts and evaluates valuable features
from raw data. Conversely, traditional ML necessitates expert
proficiency in feature selection and engineering tasks. In
addition, research has shown that, with sufficiently large data,
DL performance was shown to surpass that of traditional
ML in a variety of predictive tasks [34]. However, if the
training dataset is small, traditional ML with guided feature
engineering was reported to outperform DL [35]. Popular
conventional ML methods utilized in cosmetic dermatology
research include Support Vector Machine (SVM), Discrim-
inant Analysis (DA), Naive Bayes (NB), Decision Trees
FIGURE 2: Number of papers in each category by year
(DT), k-Nearest Neighbors (kNN), k-Means, Principal Com-
ponent Analysis (PCA), and Neural Networks (NN). The
widely used DL methods are based on Convolutional Neural
Networks (CNN) and Recurrent Neural Networks (RNN). In
addition, while DL technologies have only recently emerged
[36], it is noteworthy that the utilization of deep learning
methodologies in cosmetic dermatology has gained signif-
icant traction in recent years, as evident from the steep
upward trajectory of research publications incorporating DL
techniques, as depicted in Figure 3.
FIGURE 3: Distribution of the number of papers utilizing
conventional ML, DL, and both by year.
Among the 63 articles that were selected, a significant
number of them had first authors affiliated with institutions
located in Asia. India is the country from which the majority
of papers originated, with a total of 11 papers. China, South
Korea, and Taiwan follow closely behind with 10, 9, and 6
papers, respectively. Figure 4 illustrates the distribution of
papers by country.
FIGURE 4: Number of papers by country
IV. ANALYSIS OF REVIEWED PAPERS
This review study involved the comparative analysis of 63
articles, with a focus on the utilization of ML in the field
of cosmetic dermatology. The investigation aimed to assess
the current state-of-the-art contributions and novelty of such
computational intelligence technologies.
A. COSMETIC PRODUCT DEVELOPMENT
Developing cosmetic products is a crucial element in cos-
metic dermatology [37]. This process involves three distinct
subtasks, namely generating, making, and testing products
[38]. During the generation stage, a recipe is formulated
based on specific requirements. Then, the developed formu-
lae are created in a laboratory setting. Finally, it is imperative
to conduct testing procedures to guarantee the safety and
the absence of any adverse consequences. Indeed, recent re-
search has discovered the use of ML to automate and improve
the efficiency and efficacy of cosmetic product development
processes, as detailed in the following subsections.
1) Cosmetics Development
Cosmetics development involves both generating formulae
and creating actual products for testing. In the generation
stage, experts must create new recipes according to the re-
quirements. Normally, this process is carried out manually.
In order to streamline this laborious task, Sunkle et al. [38]
proposed an integrated automated recommendation system
that utilizes a knowledge graph incorporating previous for-
mulation recipes and contextual information. Their method
produces a cosmetic formulation template predicated on the
input specifications. This approach may be employed to
suggest a template to the specialist as an initial reference.
Furthermore, Zhang et al. [39] transformed the cosmetic
formulation into an optimization problem. All conditions
were mathematically expressed as a variable and defined as
Mixed-Integer Nonlinear Programming (MINLP) problems.
The objective function was defined to be the overall sensorial
rating. The optimization problem was numerically solved us-
ing generalized disjunctive programming reformulation and
model substitution. Linear Regression, Artificial Neural Net-
works (ANN), and Support Vector Regression (SVR) were
employed to predict the sensorial rating. In a recent study,
Yeh et al. [40] employed a deep neural network to forecast
drug-target interactions based on established relationships.
Their proposed method involved narrowing down potential
candidates for multi-molecule drugs aimed at mitigating the
effects of skin aging in humans. The achieved accuracy of the
test was 93.077%.
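To make the regression component of such formulation studies concrete, the following sketch trains a support vector regressor to map ingredient fractions to an overall sensorial rating, in the spirit of the optimization-based work above. The data, feature interpretation, and hyperparameters are synthetic placeholders, not the cited studies' formulations.

```python
# Illustrative sensorial-rating regressor on synthetic formulation data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 5))                 # e.g., fractions of 5 ingredients
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.2, 200)  # mock sensorial rating

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```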
2) Sensitization Testing
In the cosmetics sector, sensitization tests are employed to
evaluate the sensitizing potential of chemicals and medical
devices on the skin. The aforementioned tests are designed to
evaluate the capacity of a given material or product to induce
a delayed hypersensitivity response. Traditionally, there are
four ways to conduct sensitization experiments: in
vivo, in vitro, in chemico, and in silico [41]. In vivo refers to
studies carried out inside a living organism, typically an animal,
whereas in vitro refers to experiments carried out in a lab en-
vironment employing cells, tissues, or biological substances
outside of a living organism. In chemico refers to tests carried out
in a lab setting apart from a biological environment. Lastly,
in silico refers to testing performed entirely by computation,
without any laboratory work, and this is where ML is mostly
employed. It is worth noting that the conventional method
for testing sensitization through in vivo experiments often
involves animal testing. This approach can be both financially
burdensome [42] and ethically controversial [43]. The ethical
implications surrounding cosmetic testing on animals have
been a topic of controversy, particularly following the 2013
ban by the European Union on the use of animals for testing
cosmetic products and ingredients [44]. Therefore, the ethi-
cal concerns and financial pressures associated with animal
testing may have a substantial impact on the adoption of ML
techniques to optimize skin sensitization testing procedures
[45, 46, 47, 48, 49, 50, 51, 52].
Several ML models have been devised to evaluate skin sen-
sitization through well-defined methodologies. However, the
primary factor contributing to inaccuracies was the presence
of imbalanced data – the number of sensitizers is typically
greater than that of non-sensitizers. Li et al. [53] aimed to
tackle this problem by applying a data-rebalancing approach
before training an SVM model. The best-proposed model
for hazard prediction, namely hazard-DA, reached 90.63%
accuracy on the test set. For potency prediction, the potency-
DA model yielded 68.75% accuracy on the test set. Quantitative
Structure-Activity Relationship (QSAR) modeling has been widely
employed as a means of predicting toxicity through in silico
methods. This approach establishes a
correlation between the chemical composition of a substance
and its level of toxicity. Recent studies have reported the use
of diverse machine-learning classification algorithms in the
QSAR modeling processes [54]. Akturk et al. [55] employed
the QSAR model to predict the comedogenic compounds
present in cosmetic commodities, which underlie the prob-
lem of acne cosmetica. The study examined KStar, Ran-
dom Forest, and NNge as classification algorithms, and the
Random Forest model yielded the most favorable outcomes.
The Mold2 and alvaDesc descriptor packages, modeled with
Random Forest, yielded satisfactory results with accuracies
of 75.86% and 79.31%, respectively.
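A hedged sketch of the general in silico workflow discussed above is given below: molecular-descriptor features with an imbalanced sensitizer/non-sensitizer label distribution are rebalanced by simple random oversampling before a classifier is fitted. The data are synthetic and the oversampling step is a stand-in for the rebalancing strategies used in the cited studies.

```python
# Synthetic, imbalanced descriptor data; rebalance, then fit a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=600, n_features=30, weights=[0.8, 0.2], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# Naive random oversampling of the minority class (dedicated tools such as
# SMOTE could be substituted here).
minority = np.flatnonzero(y_train == 1)
extra = np.random.default_rng(1).choice(minority, size=len(y_train) - 2 * len(minority))
X_bal = np.vstack([X_train, X_train[extra]])
y_bal = np.concatenate([y_train, y_train[extra]])

clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_bal, y_bal)
print("Balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```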
Instead of using QSAR, Sharma et al. [56] developed
a skin sensitization method predicting the allergenicity of
chemicals. They employed many classification algorithms,
including Logistic Regression (LR), k-Nearest Neighbors
(kNNs), Decision Tree (DT), Gaussian Naive Bayes (GNB),
XGBoost (XGB), Support Vector Classifier (SVC), and Ran-
dom Forest (RF). The chemical compounds were obtained
from the IEDB database. Features were selected by removing
low-variance features and highly correlated, redundant features.
Finally, a Support Vector Machine classifier with a linear
kernel was trained with these dimension-reduced features.
The obtained feature set consists of fourteen 2D features, six
3D features, and 22 fingerprint features. In their experiments,
the Random Forest-based model using all features performed the
best (accuracy of 83.39%). Wilm et al. [57] highlighted the
lack of interpretability associated with utilizing non-intuitive
descriptors as features in a model. They introduced a novel
conformal prediction model, Skin Doctor CP:Bio, which utilizes a con-
cise set of ten highly interpretable features. Their proposed
method, using Random Forest, achieved an efficacy of 82%
with a significance level of 0.20. Recently, Jeon et al. [58]
used a graph convolutional network (GCN) to evaluate
skin sensitization. Their study also evaluated potency, cat-
egorizing chemicals into three distinct classes based on strength:
strong, weak, and non-sensitizer. The model for assessing
hazards, which utilized GCN, KeratinoSens, and h-CLAT
features, yielded the best outcomes, exhibiting an
accuracy of 88%. However, they found that the potency
model yielded an accuracy of only 64%.
In addition to conducting in silico testing, researchers
have employed ML algorithms to analyze laboratory re-
sults obtained from in vitro testing. The Genomic Allergen
Rapid Detection (GARD) method was employed as a means
of conducting cell-based testing for skin sensitization. The
study employed an in vitro model that consistently demon-
strated a predictive accuracy of approximately 90% for the
classification of the test data. Forreryd et al. [59] provided
additional information to the preceding study regarding the
classification efficacy by examining a sizable external test
dataset comprising 70 observations with Support Vector Ma-
chine (SVM). The results indicated an accuracy of 79%.
Furthermore, they presented a conformal prediction frame-
work that enables the regulation of the error rate by adjusting
the confidence threshold. The findings indicated that their
proposed approach achieved an accuracy of 88% using a
confidence level of 0.85.
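The conformal prediction idea mentioned above can be illustrated with a small inductive (split) conformal sketch, in which a held-out calibration set turns a classifier's probabilities into prediction sets whose error rate is controlled by a chosen confidence level. The classifier and data below are placeholders, not the GARD assay model.

```python
# Minimal split conformal prediction for a binary classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, n_features=20, random_state=2)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=2)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=2)

model = SVC(probability=True, random_state=2).fit(X_fit, y_fit)

# Nonconformity score: one minus the predicted probability of the true class.
cal_scores = 1.0 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

confidence = 0.85                       # cf. the 0.85 confidence level mentioned above
q = np.quantile(cal_scores, confidence)

# Prediction sets: every class whose nonconformity score is below the threshold.
test_probs = model.predict_proba(X_test)
pred_sets = (1.0 - test_probs) <= q
coverage = np.mean(pred_sets[np.arange(len(y_test)), y_test])
print("Empirical coverage:", coverage)
```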
Skin toxicity refers to the capacity of a substance to induce
a localized response and/or systemic toxicity upon dermal
exposure [60]. The reduction in skin thickness has been iden-
tified as a potential indicator of skin toxicity. However, the
conventional approach of assessing epidermal layers through
manual examination by a pathologist poses challenges in
terms of efficiency and scalability. To facilitate such delicate
processes, Hu et al. [61] utilized DL and image processing
methods to measure the epidermal thickness. The estima-
tion was significantly correlated with a pathologist’s semi-
quantitative evaluation and mildly agreed with one performed
by other pathologists. For toxicity prediction, the method
yielded a sensitivity of 0.8. Furxhi et al. [62] predicted nanoparticle
in vitro toxicity using ML techniques on the Safe
and Sustainable Nanotechnology datasets. Eight classifiers in
different categories of algorithms were selected. The result
showed that Random Forest and Neural Networks performed
best among the classifiers chosen for the evaluation. Jun and
Shin [63] utilized convolutional neural networks and con-
volutional long short-term memory (ConvLSTM) to predict
artificial skin images for testing. The evaluation was
conducted by comparing the predicted images with the real
ones in a 3D culture setting.
The reviewed papers in this section emphasize ML algo-
rithms used to predict skin sensitivity and toxicity of cos-
metic chemicals, especially in the absence of animal testing.
These papers explore different approaches, focusing mostly on
in silico testing. Since ML algorithms require sufficient
and representative data to achieve high accuracy, the lack of
these elements poses a significant challenge. Specifically, the
lack of datasets and data imbalance were shown to hinder
the ML algorithms’ ability to learn. Various classification
algorithms, both DL and traditional ML ones, were used
to validate the hypotheses in the studies. Additionally, ML
techniques were also employed to increase the efficacy and
efficiency of in vitro methods, such as the genomics allergen
rapid detection method. The articles reviewed in this sec-
tion also highlight the need for additional studies in
order to create precise models that can be used as economical
and humane alternatives to animal testing.
B. SKIN ASSESSMENT
The skin assessment task involves assessing primitive skin
properties, such as color, oiliness, and hydration, as well as
the compatibility between the patient’s skin and treatments or
products. Individuals undergoing the skin assessment process
do not necessarily have unwanted skin conditions but rather
seek to better comprehend their skin properties for selecting
fitting cosmetic products or treatments. While the traditional
skin assessment procedure is carried out by dermatology
experts, recent literature has shown that such repetitive tasks
could be assisted with AI technologies.
Skin hydration is one of the essential characteristics for
adjusting a recommended cosmetic treatment or a product
suggestion. Chirikhina et al. [64] employed contact capac-
itive imaging and high-resolution ultrasound imaging tech-
niques to estimate the water content in the skin. The study
involved conducting experiments on multiple facial regions,
such as the volar forearm, cheek, chin, eye corner, forehead,
lips, neck, and nose. The skin Epsilon value was utilized as a
reference standard for measuring water content. Several deep
learning algorithms were evaluated for the classification of
contact capacitive images, including AlexNet, GoogLeNet,
VGG16, ResNet-50, InceptionV3, MobileNetV2, DenseNet
201, SqueezeNet, InceptionResNetV2, and Xception. When
training time was not taken into account, DenseNet201 exhibited the
highest level of accuracy. Two novel feature-based tech-
niques were introduced for obtaining high-resolution ul-
trasound images. First, an analysis was conducted on the
color-based features, including the mean, standard devia-
tion, median, and luminosity value of the RGB channels.
This analysis used various ML algorithms, namely Logistic
Regression, K-Nearest Neighbor (KNN), Neural Networks
(NNs), and Random Forest. Second, a texture-oriented ap-
proach was employed, utilizing a set of five conventional
textural characteristics in conjunction with an additional five
outputs derived from single-layer pre-trained convolutional
networks, which were projected onto a lower dimensional
space using principal component analysis (PCA).
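As a concrete illustration of the color-statistics route described above, the sketch below computes per-channel mean, standard deviation, and median features from synthetic RGB images and evaluates a conventional classifier on them; the images, labels, and choice of Random Forest are illustrative assumptions, not the cited study's exact setup.

```python
# Simple color-statistics features fed to a conventional classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def color_features(img):
    # img: H x W x 3 array; mean, std, and median per channel (9 features).
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1)),
                           np.median(img, axis=(0, 1))])

rng = np.random.default_rng(3)
images = rng.uniform(0, 255, size=(100, 64, 64, 3))
labels = rng.integers(0, 2, size=100)            # mock "low"/"high" hydration classes

X = np.stack([color_features(im) for im in images])
print("CV accuracy:", cross_val_score(RandomForestClassifier(random_state=3), X, labels, cv=5).mean())
```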
Zegour et al. [65] proposed a method for assessing skin
hydration levels using High-resolution Magnetic Resonance
Imaging (MRI). The T2 sequence data extracted from MRI
were utilized as the model features. The segmentation algo-
rithms employed in the study were DenseNet and U-net. The
utilization of the Hausdorff distance metric facilitated the
comparison between an algorithmic output and a particular
variant of human-derived segmentation. The findings indi-
cated that ML techniques yielded hydration assessment that
was notably closer to manual expert assessment.
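The Hausdorff distance used above to compare automated and manual segmentations can be computed as in the following small sketch, which measures the symmetric Hausdorff distance between the pixel coordinates of two synthetic binary masks; the masks are placeholders.

```python
# Symmetric Hausdorff distance between two binary segmentation masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def mask_points(mask):
    return np.argwhere(mask)                     # (row, col) coordinates of mask pixels

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[22:42, 18:38] = True

a, m = mask_points(auto), mask_points(manual)
hd = max(directed_hausdorff(a, m)[0], directed_hausdorff(m, a)[0])
print("Hausdorff distance (pixels):", hd)
```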
The outermost layer of the skin system, known as the
skin barrier, serves as a protective wall that shields the body
from external hazards and maintains the body’s homeostasis
by regulating water loss [66]. A traditional approach to as-
sessing the efficacy of the skin barrier function is estimating
transepidermal water loss, which can be time-consuming. To
speed up this process, Koseki et al. [67] introduced an ML
algorithm that utilizes topological data analysis to forecast
skin barrier function by analyzing skin images. Microscopic
skin images were utilized to identify the structural charac-
teristics of the skin through the application of topological
data analysis. They found a strong correlation between the
topological characteristics and the transepidermal water loss.
Borade et al. [68] conducted a study to examine the skin’s
sebum production and determined whether it exhibits char-
acteristics of oily or dry skin. The experiment involved an
examination of SVM, VGG-16, and ResNets in the classifi-
cation of preprocessed skin images. The findings indicated
that ResNets yielded a 98% accuracy, surpassing the per-
formance of the other models. Furthermore, Kothari et al.
[69] categorized skin images into four distinct types, namely
normal, dry, oily, and combination. A face detection tech-
nique based on Multi-Task Cascaded Convolutional Neural
Network (MTCNN) was applied to each facial image, which
partitioned it into four distinct regions, namely the forehead,
left cheek, right cheek, and nose. Subsequently, the Convo-
lutional Neural Network (CNN) was trained to estimate the
oiliness level for each image and proceeded to classify them
based on their respective types.
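A minimal sketch of the region-partition step described above is shown below: given a face bounding box, for example one produced by an MTCNN detector, rough forehead, cheek, and nose patches are cropped for per-region oiliness scoring. The split proportions are illustrative assumptions, not the cited paper's values.

```python
# Crop rough facial regions from a detected face bounding box.
import numpy as np

def face_regions(image, box):
    """image: H x W x 3 array; box: (x, y, w, h) face bounding box."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    return {
        "forehead":    face[: h // 4, :],
        "left_cheek":  face[h // 3: 2 * h // 3, : w // 3],
        "right_cheek": face[h // 3: 2 * h // 3, 2 * w // 3:],
        "nose":        face[h // 3: 2 * h // 3, w // 3: 2 * w // 3],
    }

# Each cropped region would then be scored for oiliness by a CNN and the
# per-region scores combined into a skin type.
demo = np.zeros((480, 640, 3), dtype=np.uint8)
for name, patch in face_regions(demo, (200, 100, 200, 260)).items():
    print(name, patch.shape)
```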
Skin thickness is another integral attribute de-
termining skin composition and function. Measuring the
thickness and density of skin layers in individuals poses
a significant challenge due to the considerable variability
observed across different sites, genders, ages, and regions
[70]. The conventional approach to determining thickness
involves the extraction of a skin sample through a biopsy,
followed by microscopic examination, which is considered
to be an invasive procedure [71]. To mitigate this issue,
Vyas et al. [72] introduced a technique for estimating skin
thickness that is non-invasive. According to the authors, the
lack of available true skin thickness data in the past prevented
an accurate determination of the estimation method’s true
accuracy. Nonetheless, the authors were able to obtain the
gold standard established by dermatologists and use them
to calculate the prediction error directly. Furthermore, the
problem of estimating skin thickness using a Lytro camera
was addressed by Ko et al. [73], where they proposed a novel
approach that incorporates texture information and employs
Conditional Generative Adversarial Networks (CGANs) to
produce a skin depth map with enhanced precision.
Understanding skin properties such as hydration, thick-
ness, and oily/dry classification is crucial for developing
effective cosmetic treatments and product recommendations.
The reviewed articles in this section have shown that skin
moisture and thickness can be precisely measured by non-
invasive imaging methods such as contact capacitive imag-
ing, high-resolution ultrasound imaging, and high-resolution
magnetic resonance imaging. These techniques can also be
used to precisely classify skin as oily or dry using DL
algorithms. Such advancements in AI technologies provide
a strong basis for creating treatment plans and the opportunity
to develop more specialized products and treatments that cater to
the particular demands of different skin types.
C. SKIN CONDITION DIAGNOSIS
In contrast with the skin assessment discussed in the previous
section, skin condition diagnosis mainly involves assessing
and identifying the types and severity of unwanted cosmetic
conditions. Such a process is crucial for predicting effec-
tive treatment and management. However, even for expert
dermatologists, it could be challenging to diagnose certain
conditions based on symptom appearances alone, as numer-
ous skin conditions can present similar features. Addressing
this problem, ML approaches can be particularly useful,
as they can learn to capture patterns in large datasets and
identify relationships that may not be immediately apparent
to human dermatologists. In the field of medical dermatology,
automated ML-powered disease diagnosis has become appli-
cable, especially in skin cancer detection and classification
[74, 75, 76, 77].
Likewise, in cosmetic dermatology, researchers have
demonstrated that ML approaches could be used to improve
the accuracy and speed of diagnosis, leading to better pa-
tient outcomes. Articles in this category were divided into
four subcategories: (1) single condition diagnosis, a binary
classification problem of determining whether the patient has
a particular condition or not; (2) condition classification, a
multi-class classification problem of identifying which con-
dition a patient has among the predefined classes of cosmetic
conditions; (3) localization, to identify the location and type
of the condition; and (4) severity estimation, which may
involve a regression problem or a multi-class classification
problem to grade the level of severity of a specific condition.
1) Single Condition Diagnosis
In this subsection, articles that aimed to diagnose specific
cosmetic conditions were reviewed. By focusing specifically
on the diagnosis of cosmetic conditions, we aimed to ex-
plore the current state of research and highlight some of the
potential benefits associated with using ML in this context.
Previously, computer vision techniques were widely used
to extract properties from lesion images for dermatological
disease diagnosis. This problem can be framed as an im-
age classification task where conventional ML models are
trained to spot images containing the target skin conditions.
Nowadays, convolutional neural networks (CNN) and other
DL approaches have become more widely utilized because
they generally provide better classification efficacy
compared to traditional ML approaches in medical image
classification tasks [78].
Huang et al. [79] employed a CNN model based on
ResNet-50 to distinguish photos of subjects with rosacea, a
chronic inflammatory disease, from other skin conditions.
The model yielded an accuracy of 89.8%. They also classified
rosacea lesions into three subtypes: Erythematotelangiectatic
Rosacea (ETR), Phymatous Rosacea (PhR), and Papulo-
pustular Rosacea (PPR). Sameera et al. [80] used CNN
to evaluate the probability of the existence of three facial
conditions: wrinkles, dark spots, and puffy eyes. Their model
can simultaneously differentiate these characteristics and ex-
hibit various potential applications, owing to its utilization
of specialized convolution and pooling operations, as well
as parameter sharing. Consequently, an overall accuracy of
94.11% was achieved. Liu et al. [81] used four DL archi-
tectures, namely DenseNet, ResNet, Swin Transformer, and
MobileNet, to diagnose images of subjects with and without
melasma. The research investigated the effect of different
photo-taking modes used by VISIA, a device for measuring
a patient’s dyschromia from images [82]. Five shots were
taken of each subject, one in each of the Normal, UV Spots, Porphyrins,
Brown Spots, and Red Areas modes. The experimental re-
sults showed that DenseNet121 performed the best. They
also discovered that the Brown Spots mode gave the best
performance among all five modes (accuracy 0.9442), and
the best combination was Brown Spots together with Normal
and UV Spots modes (accuracy 0.974). Aditya et al. [83]
diagnosed patients who had symptoms that indicate Alopecia
Areata, an autoimmune disorder that causes hair loss in
patches on the scalp, by taking images of the backs of patients’ heads
to create training data for five ML models, including SVM,
CNN, KNN, Random Forest, and Gaussian Naive Bayes.
CNN was reported to perform best.
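The single-condition classifiers above largely follow a transfer-learning recipe, which the following sketch illustrates: a pre-trained ResNet-50 backbone is frozen and its final layer replaced with a two-class head (condition present or absent). Data loading is omitted, the dummy batch is a placeholder, and this is not the cited studies' exact training setup.

```python
# Transfer-learning sketch: frozen ImageNet backbone, new two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")   # downloads ImageNet weights
for p in model.parameters():                       # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("dummy-batch loss:", float(loss))
```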
In addition to skin images, various other health and
histopathological data were utilized as inputs for the classifi-
cation tasks. Alagić et al. [84] employed an artificial neural
network (ANN) to analyze skin health data, which included
parameters such as pH value, sebum, and transepidermal
water loss. These features were utilized to distinguish indi-
viduals who are in good health from those who have skin
conditions. The dataset comprises various dermatological
conditions such as acne, dry skin, decreased elasticity, and
wrinkles. Dubey et al. [85] diagnosed scalp conditions using
optical coherence tomography (OCT). This non-invasive
imaging technique uses light waves to create high-resolution
images of internal tissues and structures. The A-line and B-
scan features were extracted from the OCT. Seven models,
including classical ML models and neural networks, were
used in the pilot experiment. The multilevel ensemble model
gave the highest performance using eight features. Jansen
et al. [86] diagnosed Seborrheic Keratosis, a type of benign
skin condition commonly found in senior patients. The data
comprised images of tissue slides from three dermatologi-
cal research centers. ResNet34 was used as a classification
model. Wang et al. [87] investigated acne using metagenomic
sequencing data of facial skin lipids. The data were
analyzed using Principal
Component Analysis (PCA), Kernel Principal Component
Analysis (KPCA), and Multiset Canonical Correlation Anal-
ysis (MCCA). Each method provided information pertaining
to lipids that influence the diagnosis results.
To summarize the literature reviewed in this section, the
majority of research utilizing AI to identify a single cosmetic
skin condition relied on computer vision and ML technolo-
gies to train predictive models using annotated historical
data. While traditional computer vision methods relied on
hand-crafted color and texture features, the advent of DL
applications enabled the elimination of the feature engineer-
ing process while achieving cutting-edge performance. The
single-condition diagnosis research classified skin images
into two classes, with a condition or without a condition.
One major limitation of this approach is that it requires proper
prior information on which condition the patient may have.
In fact, some conditions are very similar and hard to distinguish
by physical observation. Such issues pose challenges for ML
models trained specifically for diagnosing particular condi-
tions in that they may be unable to tell apart closely resembling
skin conditions. Furthermore, cosmetic patients may have
varied conditions. Therefore, the ability to detect only one
condition may not suffice in practice. As a result, these issues
motivate the need to automatically tell apart different types
of cosmetic conditions, especially those that appear similar
to each other, which will be elaborated on in the next section.
2) Conditions Classification
Identifying various skin conditions can be difficult due to
their similar visual presentations. The articles within this
particular subcategory examine the utilization of AI in dis-
criminating between similar beauty-related conditions that
bear a resemblance to one another. The majority of research
presents such issues as multi-class classification tasks, which
permit the direct application of traditional and advanced ML-
based image classification techniques. The task of classifying
skin images into respective skin conditions has been pre-
viously accomplished through the utilization of traditional
ML techniques, including Support Vector Machines and K-
Nearest Neighbor algorithms. For example, Abas et al. [88]
employed a methodology wherein RGB facial images fea-
turing acne were transformed into grayscale, followed by
applying an entropy-based filter to isolate the regions of the
image that exhibit acne. The segmented image was analyzed,
and various features were extracted to categorize the image into
six distinct skin conditions, one of which was acne. The experimental inves-
tigation involved using Binary Tree, Discriminant Analysis,
k-NN, and Naive Bayesian techniques, where an accuracy of
85.5% was attained.
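The classical pipeline described above can be sketched as follows: an RGB image is converted to grayscale, a local-entropy filter highlights textured regions, and a simple threshold isolates candidate lesion areas before hand-crafted features are extracted. The synthetic image, filter size, and threshold are illustrative assumptions, not the cited study's parameters.

```python
# Grayscale conversion, local-entropy filtering, and crude thresholding.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

rgb = np.random.rand(128, 128, 3)                 # placeholder facial image
gray = img_as_ubyte(rgb2gray(rgb))
ent = entropy(gray, disk(5))                      # local entropy map
mask = ent > ent.mean() + ent.std()               # crude high-entropy regions
print("candidate lesion pixels:", int(mask.sum()))
# Region-level color/texture features would then be computed inside `mask`
# and classified with, e.g., k-NN or Naive Bayes.
```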
Recently, DL models, especially CNN-based ones, such as
AlexNet, GoogleNet, and DenseNet, have been popularized
in the medical areas. For example, Yang et al. [89] used
DenseNet-96 and ResNet-152 to classify the 12,816 cropped
benign and pigmented facial skin lesion photos collected at
the Hospital for Skin Diseases of the Chinese Academy of
Medical Sciences from 2004 to 2016 into six classes based
on their skin conditions, including acquired nevi of Ota,
melasma, café-au-lait spots, freckles, seborrheic keratoses,
and nevi of Ota. The automated classification results were
compared with those of three expert dermatologists. The results
showed that ResNet-152 outperformed the other methods.
While these DL algorithms are often accompanied by pre-
trained models that can be directly fine-tuned and applied
to image classification tasks, recent literature in cosmetic
dermatology has found that revising the architectures of
these DL algorithms could improve the predictive efficacy.
For instance, Huong et al. [90] focused on the problem of
limited training data by proposing to combine a pre-trained
CNN with SVM and KNN classifiers. The performance of transfer-learned
AlexNet, AlexNet-SVM, and AlexNet-KNN in classifying four
skin disease classes was evaluated and com-
pared. The modified models outperformed the unmodified
one and also reduced the computational time. López-Leyva
et al. [91] addressed this problem through the development
of a method aimed at classifying ten distinct categories of
skin lesions. Their method relied on the Fourier spectral
information of images within a color model. Each image from
the Edinburgh Dermofit Library was represented by a 26 × 1 vector
consisting of Fourier spectral indicators that pertain
to both the original-size image and the cropped version.
Subsequently, this vector was fed
into a Two-Layer Feed-Forward Neural Network (TLFN)
to categorize the lesion ac-
cording to its respective type. Overall, the proposed method
exhibited a 99.33% accuracy, 94.16% precision, 92.9% sen-
sitivity, and 99.63% specificity. Jain et al. [92] proposed
the Optimal Probability-based Deep Neural Network (OP-
DNN) for the purpose of classifying skin images into four
distinct categories, namely Basal Cell Carcinoma, Seborrheic
Keratosis, Melanoma, and Squamous Cell Carcinoma. The
study involved extracting seven distinct color and texture
features from each image, namely mean, standard devia-
tion, skewness, contrast, correlation, energy, and entropy.
The OP-DNN methodology was designed to expedite the
training process of conventional DNNs by leveraging the
Whale Optimization Algorithm (WOA) instead of refreshing weight
values at each cycle. The results indicated that the OP-DNN
approach achieved a marginally superior level of accuracy
and precision compared to the baseline method while also
exhibiting a notable reduction in training time. Ito et al.
[93] employed the Google Cloud AutoML technology to
classify scar images into four distinct categories, namely
immature scar, mature scar, hypertrophic scar, and keloid.
The outcomes of the classification were compared with the
expert medical judgment. In a recent study conducted by
Borade et al. [94], the authors expanded their analysis beyond
images in the RGB color space to include three additional
color spaces: YUV, YCbCr, and HSV. The study employed
five traditional ML techniques, namely Support Vector Ma-
chine (SVM), k-Nearest Neighbor (kNN), Naive Bayes (NB),
Multilayer Perceptron (MLP), and Random Forest (RF) to
categorize images of four distinct dermatological conditions,
namely acanthosis nigricans, melasma, alopecia areata, and
acne.
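The multi-color-space feature idea above can be illustrated with the short sketch below, which converts the same image into HSV, YCrCb (OpenCV's ordering of YCbCr), and YUV and concatenates simple channel statistics into one feature vector for a conventional classifier; the input image and statistics are placeholders, not the cited study's feature set.

```python
# Channel statistics from several color spaces as one feature vector.
import cv2
import numpy as np

bgr = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)

spaces = {
    "hsv":   cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
    "ycrcb": cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb),
    "yuv":   cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV),
}
features = np.concatenate([np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
                           for img in spaces.values()])
print("feature vector length:", features.shape[0])   # 3 spaces x 6 statistics
```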
Kim and Song [95] identified several limitations associated
with the utilization of CNN-based models in the classification
of facial skin conditions. These limitations include the chal-
lenge of accurately identifying minor skin issues, the need to
classify over 20 distinct conditions, the presence of variations
within the same condition, the potential for confusion be-
tween similar conditions, and the possibility of false segmen-
tation on non-facial regions. The authors proposed effective
strategies for overcoming each constraint, and the empirical
findings demonstrated a 32.58% higher diagnostic efficacy in
comparison to the traditional Convolutional Neural Network
(CNN).
Apart from cosmetic skin conditions, Jeong et al. [96]
employed the EfficientNet to categorize ten distinct scalp
symptoms, namely normal, drying, oily, sensitivity, atopy,
seborrheic, trouble, dry dandruff, oily dandruff, and hair loss.
Their classification technique formed an integral component
of AI-ScalpGrader, which comprised a handheld scalp imag-
ing device, a smartphone application, and a cloud-based ad-
ministration platform. The system provided accuracy values
ranging from 87.3% to 91.3%.
The capacity to categorize skin images into various skin
conditions could potentially assist in initial self-diagnosis
and could be integrated into a decision support system for
dermatologists. However, the ability to precisely identify
the location and dimensions of each lesion could offer vital
supplementary insights, especially in cases where the lesions
are diminutive or consist of diverse skin conditions. For
example, some grading systems necessitate the quantification
of papules and pustules, which are indicative of inflamed
acne, in order to evaluate the corresponding severity levels
[97]. It is imperative for automated systems to possess the
ability to identify and quantify any and all instances of
inflammatory lesions present on a patient’s facial image. This
issue necessitates a redefinition of the diagnosis task as a
lesion detection task that is cognizant of location, type, and
size information to provide requisite information at a more
granular level.
3) Conditions Localization and Detection
The task of diagnosing cosmetic conditions through image
classification can pose a challenge due to the presence of
extraneous information in the background of the images. In
addition, specific therapies for skin conditions that present as
isolated, non-adjacent spots necessitate knowledge regarding
the precise locations and dimensions of the lesions, which
do not accompany the image classification task. The ability
to localize and confine the lesions in the input image before
performing further analyses could mitigate such issues. In-
deed, the process of localizing a condition is a crucial aspect
of both diagnosis and the estimation of its severity, as it
eliminates extraneous information from the images. The task
can be accomplished through the application of conventional
image processing methodologies, including edge detection,
thresholding, and clustering. Object detection is a more spe-
cific task that combines both localization and classification
together. This section examines scholarly articles that en-
deavor to identify cosmetic conditions, beginning with the
pure localization approach and progressing to the utilization
of object detection methodologies.
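Detection results of this kind are typically scored with the intersection-over-union (IoU) between predicted and ground-truth boxes, the quantity underlying the mAP figures reported by the detection studies below. A small, self-contained helper is sketched here; the example boxes are arbitrary.

```python
# Intersection-over-union between two (x1, y1, x2, y2) bounding boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))    # partial-overlap example
```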
The detection of facial wrinkles through automated tech-
niques is a significant challenge among cosmetic condition
detection tasks. Yap et al. [98] conducted a survey on
the topic of automated facial wrinkle detection, covering
various types of research, including handcraft-based image
processing techniques, stochastic models, and mathematical
filters. The conclusion highlighted that while there has been
a notable surge in the application of DL methods for image
inpainting, there is a scarcity of research that has specifically
explored its potential for addressing facial wrinkles.
Rew et al. [99] utilized the Deeplab-v3+ and Inception-
ResNet-v2 models to perform pixel segmentation on skin
wrinkles. Then, LightGBM and MP algorithms were em-
ployed to enhance the segmentation procedure’s efficacy.
Their proposed segmentation scheme yielded a mean accu-
racy of 85.4%, a mean intersection over union of 74.9%,
and a mean boundary F1 score of 85.2%, improving over
the panoptic-based semantic segmentation method by 1.1%,
6.7%, and 14.8%, respectively. Ismail and Sung [100] intro-
duced a deep-learning framework designed to identify the
locations and types of acne lesions and wrinkles in facial and
half-body images. Various deep-learning networks, namely
the Faster RCNN and the Residual Network, were explored.
The convolutional feature map was generated by utilizing
50 layers of a residual neural network to extract the char-
acteristics of the image. The mAP score of the detection
model was found to be 47.96%. Shih et al. [101] employed a
weakly supervised algorithm to localize vitiligo-affected areas as part of a treatment evaluation system. Specifically, Wood's lamp
was utilized to capture and photograph both the impacted
and unaffected regions in order to establish a repository of
training images. Then, a CNN model was used to perform
an initial segmentation of the affected region from large-
scale images, such as those of the head and face. Next,
physician-validated and authorized images depicting the im-
pacted region were utilized alongside a substantial quantity
of images captured by Wood’s lamp to enable self-learning
and classification. Finally, facial recognition technology was
employed to rectify the camera’s shooting angle, thereby
mitigating image distortion arising from disparate shooting
angles.
In addition to detecting skin lesions, Gallucci et al. [102] employed U-Net, an image segmentation technique, to segment hair in images and thereby quantify hair counts, applying the method alongside skin-lesion detection. The experiment compared U-Net against several other models, namely LeNet-5, VGG-16, ResNet-50, and DenseNet-121, and U-Net yielded the highest correlation with the manual count by experts.
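As an illustration of how a segmentation output can be converted into a count, the sketch below labels connected components in a binary hair mask. The mask file name and the minimum-area filter are assumptions, and the snippet is not the exact pipeline of [102].

```python
# After a segmentation network (e.g. U-Net) produces a binary hair mask,
# counting can be reduced to labeling connected components. Assumes
# "hair_mask.png" is such a binary mask; requires scikit-image.
import numpy as np
from skimage import io, measure

mask = io.imread("hair_mask.png", as_gray=True) > 0.5   # binarize
labels = measure.label(mask, connectivity=2)            # 8-connected blobs

# Discard very small components that are likely noise rather than hairs.
count = sum(1 for r in measure.regionprops(labels) if r.area >= 10)
print(f"estimated hair count: {count}")
```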
In addition to identifying the location of the lesions, ob-
ject detection methodologies were also used to classify the
localized lesions into respective condition types. Phan et al.
[103] designed an LED therapy device that incorporates an
automated algorithm for diagnosing acne vulgaris. The pro-
posed model was derived from a modified version of ResNet-
50 architecture, which was integrated with YOLO-v2. Once the location of acne was automatically identified on the input selfie image, the information was transmitted to the intelligent LED therapy device for further processing. Wen
et al. [104] conducted a comparative analysis of the efficacy
of utilizing CNN as the underlying framework for detecting
acne vulgaris. A number of object detection algorithms were evaluated, namely MobileNet-v1, YOLO-v4, and Inception-v2.
They also implemented an automated severity evaluation
tool that was made publicly available through the WeChat
application for self-monitoring of acne. In addition to acne
detection, Maknuna et al. [105] suggested a scar lesions
detection model for the WSI of HE-stained tissue. Mask
R-CNN was used to detect scar lesions. Then, ResNet-101
was used as a backbone of the region proposal network. The
detected region of interest was fed to the image clustering
model, k-Means, to partition the structure and character of
the scar.
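The clustering step described above can be sketched as follows; the synthetic region of interest and the choice of three clusters are assumptions made for illustration and do not reproduce the pipeline of [105].

```python
# k-means partitioning of pixels inside a detected region of interest,
# illustrating how a detected scar ROI can be split into structural
# components. The ROI array is synthetic; cluster count is an assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
roi = rng.random((64, 64, 3))                  # stand-in for an H&E ROI crop

pixels = roi.reshape(-1, 3)                    # one RGB sample per pixel
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segments = kmeans.labels_.reshape(64, 64)      # per-pixel structure label

print(np.bincount(segments.ravel()))           # pixel count per cluster
```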
You Only Look Once (YOLO) [106] has emerged as an
efficient object detection algorithm and has also been used for
skin condition recognition by framing the problems as object
detection tasks. Liao et al. [107] experimented with distin-
guishing acne, freckle, and wrinkle images with YOLO-v3,
YOLO-v4, and with and without Mask R-CNN. The results
showed that using Mask R-CNN as a face segmentation
algorithm before using YOLO to detect the symptoms per-
formed slightly better than using only the YOLO model. The
highest obtained accuracy was 60.38% by using Mask R-
CNN and YOLO-v4 with 500 training images. Ding et al.
[108] conducted a comparable experiment utilizing YOLO-
v4, YOLO-v5, Single-Shot Multi-box Detection (SSD), and
Faster R-CNN. As anticipated, YOLO-v5 demonstrated su-
perior performance in all skin conditions, with the exception
of melasma, when compared to other methods.
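To illustrate this formulation, the sketch below runs a YOLO-v5 detector on a facial image and prints the location, size, and condition type of each detection. The weight file "skin_yolov5.pt" is a hypothetical model fine-tuned on annotated lesion boxes, not a published artifact; the snippet assumes the torch library and access to the ultralytics/yolov5 repository.

```python
# Sketch of framing skin-condition diagnosis as object detection with
# YOLO-v5, in the spirit of [107, 108]. "skin_yolov5.pt" is a hypothetical
# weight file fine-tuned on annotated acne/freckle/wrinkle boxes.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="skin_yolov5.pt")
results = model("selfie.jpg")  # forward pass on one facial image

# Each detection carries a bounding box, a confidence, and a condition
# class, i.e. location, size, and type at lesion-level granularity.
for *box, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = box
    label = model.names[int(cls)]
    print(f"{label}: conf={conf:.2f}, box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```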
The literature examined in this section presents the di-
agnosis of cosmetic conditions as a task of localization or
object detection. This involves obtaining not only detailed
locations of the lesions but also identifying the specific types
of conditions. The utilization of lesion localization and object
detection techniques enables the advancement of automated
skin condition diagnosis to a finer granularity. Automated
lesion localization methods have significantly reduced the
necessity for a manual cropping process, facilitating stream-
lined and informed diagnosis.
4) Severity Estimation
In addition to ascertaining a patient’s skin condition, it is
also crucial to consider its degree of severity. Severity estimation assesses how critical a given condition is or what level it has reached. The ability to
acquire this knowledge automatically diminishes the duration
of dermatologists’ involvement in evaluating the severity
of a condition and facilitates the selection of more fitting
treatments [109]. The degree of severity can be assessed
either through continuous scoring, which poses a regression
problem, or through discrete categorization of the condition,
ranging from normal to extremely severe [110, 111].
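The two formulations differ mainly in the output head and the loss function, as the minimal PyTorch sketch below illustrates. The ResNet-18 backbone, the five-grade scale, and the loss choices are assumptions made for illustration and do not correspond to any specific reviewed model.

```python
# Minimal sketch contrasting the two formulations of severity estimation:
# a discrete classifier over K severity grades versus a continuous regressor.
# Backbone, grade count, and loss choices are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

K = 5  # e.g. severity grades from "normal" to "extremely severe"

# Classification formulation: K logits trained with cross-entropy.
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, K)
cls_loss = nn.CrossEntropyLoss()

# Regression formulation: one continuous score trained with MSE,
# as used when grading on a continuous scale.
regressor = models.resnet18(weights=None)
regressor.fc = nn.Linear(regressor.fc.in_features, 1)
reg_loss = nn.MSELoss()

x = torch.randn(4, 3, 224, 224)                  # dummy batch of facial crops
grade_labels = torch.randint(0, K, (4,))         # discrete severity grades
score_labels = torch.rand(4, 1) * (K - 1)        # continuous severity scores

print(cls_loss(classifier(x), grade_labels).item())
print(reg_loss(regressor(x), score_labels).item())
```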
There has been a growing interest in utilizing smartphone-
generated selfie images to automatically assess the severity
of various skin conditions, owing to their ease of acquisi-
tion. However, it should be noted that the accuracy of the
estimation may be influenced by factors such as lighting
conditions, facial expressions, and individual variations in
skin type. To address these particular problems, Jiang et al.
[112] conducted an investigation wherein they modified the
Convolutional Neural Network (CNN) classifier to a CNN
regressor. This allowed them to obtain a score for various skin
facial conditions, such as wrinkles, folds, lines, and pores.
The dataset utilized in the study comprised a diverse range
of age groups spanning from 18 to 80 years old, distinct
cohorts including Asian, Caucasian, and African American,
and various lighting conditions such as outdoor natural day-
light, indoor natural daylight, indoor artificial diffused light,
and indoor artificial direct light. Additionally, the dataset
contained diverse facial expressions characterized by slight
smiles, slight pouts, or disapproval. The obtained scores
were compared with the evaluation provided by a proficient expert evaluator. The findings indicate that the automated approach did not fully concur with the specialist's evaluation, but they also suggest that the outcome was only marginally affected by lighting conditions and facial expressions.
Subsequently, the same research team advanced their automated grading system for facial conditions based on self-portrait photographs, where Flament et al. [113] reported a superior correlation between the automated outcomes and expert evaluations. In addition, Flament et al. [114] employed
an algorithm derived from [112, 113] to examine images
of selfies. A total of 465,587 images of European women
and 79,016 images of Chinese women were utilized in the
study, where the researchers assessed the severity of nine
skin conditions and examined their correlation with the age
of the subjects. In a recent study, Flament et al. [115] eval-
uated the accuracy of grading the severity of various skin
conditions using images captured through selfies. The study
utilized a sample of 1,041 self-portrait images captured by
women in the United States, featuring diverse age ranges,
Fitzpatrick skin types, geographical locations, and ancestral
backgrounds. The severity of seven facial skin conditions was
estimated utilizing algorithms as described in [112, 113]. The
results generated through automation were compared with
the severity levels assessed by proficient dermatologists from
the United States. The findings of the study indicated a robust
correlation between the automated and dermatologist results
for five out of seven conditions, namely Forehead wrinkles,
Periorbital wrinkles, Nasolabial fold, Ptosis of the lower part
of the face, and Diffused redness. In contrast, the correlation was only moderate for cheek pores and weak for the darkest skin tones. The
findings indicated that neither age nor ancestry exerted any
influence on the observed correlations.
In addition to easily obtainable selfie images, ML solu-
tions have been devised to aid in the assessment of severity
grading on high-quality images captured by professional
cameras. The detection and evaluation of facial wrinkles,
pores, and acne were conducted by Seck et al. [116] using
the high-resolution 3D surface texture obtained from the
light stage. Furthermore, the study conducted by Wang et al.
[117] sought to develop a tool for evaluating the severity
of acne vulgaris. To achieve this objective, the researchers
introduced a convolutional neural network (CNN) model,
which they named lightweight Acne-RegNet. This model
is capable of accurately categorizing lesions and providing
a corresponding severity score. The comparative analysis
involved the proposed models and other lightweight deep-
learning models such as MobileNet-V3, SENet, EfficientNet-
B0, and GhostNet. Acne-RegNet exhibited superior per-
formance compared to other models, achieving an accuracy
of 94.11% on the test dataset. Furthermore, they examined which imaging conditions affect precision. The findings indicated that using a front-facing camera had a negative impact on the algorithm's efficacy. The study
also found that the accuracy was not significantly affected by
the device brand or the light conditions, including outdoor,
indoor, and flash.
In addition to facial skin conditions, ML techniques were
employed to assess the severity of various cosmetic con-
ditions. Wang et al. [118] examined Microtia, a congeni-
tal ear malformation, through the utilization of nine CNN-
based models, namely AlexNet, Inception-v3, DenseNet-
121, ResNet-18, ResNet-50, ResNet-101, ShuffleNet-v2,
MobileNet-v2, and MnasNet. The objective was to determine
the efficacy of these models in accurately classifying the
degree of Microtia based on ear images. The images were
assessed and categorized into four distinct levels, namely
normal ears, grade I microtia, grade II microtia, and grade
III microtia. Man et al. [119] introduced a novel approach,
SACN-Net, for assessing the extent of hair damage based
on scanning electron microscopy (SEM) images. The study's findings indicated that SACN-
Net outperformed other established CNN-based models, as
evidenced by an accuracy rate of 98.38%. Chang et al. [120] proposed ScalpEye, a portable scalp hair imaging microscope paired with a mobile application that connects to an AI training server. The microscope captured images, which were processed by a DL model that reported severity scores for four common scalp hair symptoms: dandruff, folliculitis, hair loss, and oily hair. The reported severity levels were minor, normal, middle, and high. For the DL model, the authors evaluated Faster R-CNN Inception-v2, SSD Inception-v2, and Faster R-CNN Inception-ResNet-v2-Atrous. The experimental results showed that Faster R-CNN Inception-ResNet-v2-Atrous was the best algorithm for all four symptoms, with average precision ranging from 97.41% to 99.09%, though its training time was significantly higher than the others.
The reviewed papers presented in this section show that
deep convolutional neural networks and their variants have
emerged as commonly employed algorithms in severity as-
sessment tasks. The experiment on estimating a specific con-
dition’s severity involves the implementation and comparison
of numerous ML algorithms trained on image datasets whose
samples were annotated with appropriate severity levels. The
evaluation of their performance was primarily conducted
through a comparative analysis of their graded outcome and
that of an expert. Later on, after the condition type and sever-
ity grade were indicated, the next natural research question
would be whether such predictive diagnosis methods could
be used to infer dermatologists’ treatment options. Indeed,
this process is framed as a treatment recommendation task
which will be discussed in the next section.
D. TREATMENT RECOMMENDATION
Patient-centered medicine is an approach that aims to improve treatment effectiveness and patient satisfaction by tailoring care to the specific disease and patient, considering individual variability in clinical presentation, medical history, genes, environment, and lifestyle. Especially in cosmetic dermatology, which is strongly tied to personal preferences, the treatment paradigm has shifted from disease-centered to patient-centered health care [121]. Accomplishing this goal requires the development of intelligent technologies that can overcome the associated challenges [122].
Huang et al. [123] invented Alluring, a cloud-based sys-
tem for dermatological analysis of skin and scalp, which
utilized skin images to provide treatment recommendations.
The system comprises a handheld device equipped with a
camera for capturing dermatological images. A skin image
is processed by a comprehensive analysis for various factors,
including moisture, oil, sensitivity, color, pore size, and pore
distribution, utilizing YOLO-v2. The outcome of the analysis was then used to suggest a dermatological product and allow customers to make a purchase within the application.
In addition to utilizing skin images, genetic information
was also employed. Liu et al. [124] developed a method for
recommending cosmetic products by integrating genetic data
related to consumer skin health, product data, demand factors
based on phenotype, and data on the relationship between
ingredients and their functions. The data pertaining to skin-
health products were transformed into numerical data and
subsequently classified using three ML algorithms, namely
Random Forest, Logistic Regression, and Support Vector
Machine. The empirical findings demonstrated that Support
Vector Machine exhibited better time efficiency compared
to other methods, while Random Forest (RF) marginally
outperformed other classifiers in terms of classification ef-
ficacy. The utilization of genetic data and the consideration
of trade-offs between phenotypic demands resulted in an
improvement in the recommendation performance.
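Such a classifier comparison can be prototyped in a few lines with scikit-learn, as sketched below. Synthetic data stand in for the proprietary genetic and product features, so the snippet mirrors only the experimental design rather than the study itself.

```python
# Illustrative comparison of Random Forest, Logistic Regression, and SVM on
# numerically encoded skin-health/product features; data are synthetic
# placeholders for the non-public genetic and product datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```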
Ray et al. [30] proposed a scheme for recommending cos-
metic products that utilize Convolutional Neural Networks
(CNN). The study employed image analysis techniques to
predict various categories of consumer facial images based
on skin health. This was achieved by extracting relevant
features such as shape, texture, and color from the pho-
tographs. The proposed algorithm attained an accuracy of 97.38% in recommending items on the test
data. Zhang et al. [125] utilized knowledge graphs to de-
velop a recommendation system for cosmetic sequences. The
construction of the knowledge graph for skincare products
was achieved through a combination of manual screening
and multi-label classification techniques applied to an open
dataset. A ranking algorithm was developed with the aim of
suggesting the optimal product based on the specific needs of
consumers and their individual skin types.
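The underlying idea of ranking products against a consumer profile can be illustrated with a toy content-based example. The feature scheme, product names, and scoring function below are invented for illustration and do not reproduce the knowledge-graph method of [125].

```python
# Toy content-based ranking of skincare products against a consumer-needs
# vector; all names and feature values are invented for illustration.
import numpy as np

# Columns: [hydration, oil-control, brightening, anti-wrinkle]
products = {
    "gel_moisturizer":   np.array([0.9, 0.2, 0.1, 0.1]),
    "retinol_serum":     np.array([0.3, 0.1, 0.2, 0.9]),
    "niacinamide_serum": np.array([0.4, 0.6, 0.8, 0.2]),
}
# Needs inferred from a skin assessment, e.g. dry skin with early wrinkles.
user_needs = np.array([0.8, 0.1, 0.2, 0.6])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(products, key=lambda p: cosine(products[p], user_needs),
                 reverse=True)
print(ranking)  # products ordered by fit to the user's needs
```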
ML techniques were employed to evaluate skin images,
genetic data, and other pertinent factors for the purpose of
providing personalized treatment recommendations in the
field of aesthetic dermatology. This methodology facilitated
the development of a therapeutic regimen that is both ef-
fective and tailored to specific customer requirements while
considering the unique clinical manifestations, medical back-
ground, genetic makeup, surroundings, and lifestyles. These
techniques aim to enhance the accuracy and efficiency of
the system in aligning customers with the optimal products
and treatments based on their individual requirements, skin
characteristics, and dermatological conditions.
E. TREATMENT OUTCOME PREDICTION
Following the process of diagnosing the condition, the subse-
quent step involves selecting an appropriate treatment plan.
In the context of clinical dermatology, the responsibility of
selecting treatments primarily falls upon the dermatologist.
Cosmetic cases are different as they do not pose any harm to
the patient, and there exists a plethora of treatment options
that patients may opt for. Dermatologists are presented with
a wide range of treatment options, and patients may play a
role in selecting a treatment modality based on their per-
sonal preferences and financial considerations. The treatment
outcome prediction task involves forecasting a patient’s re-
sponses after receiving a particular treatment. The prediction
of treatment outcomes holds significant importance at this
stage as it could guide dermatologists and patients toward
narrowing down appropriate treatment plans. This section
will delve into the articles examining AI techniques for
predicting treatment outcomes.
The utilization of simulated postoperative images derived
from preoperative images can serve as a valuable tool for pa-
tients in making informed decisions regarding their treatment
options. Shah et al. [126] have demonstrated the ability to
generate a precise three-dimensional facial image subsequent
to the Rejuvenation procedure. The model has been utilized
as an input for generating 3D facial scan images. Facial
landmarks were identified as injection sites for dermal fillers.
Their study introduced a model that forecasts the quantity
of dermal filler required for facial application by utilizing
a multi-layered neural network architecture comprising two
concealed layers. Their approach yielded an accuracy of
62.5%, surpassing that of the baseline methods 3D-Div by
51.5% and 3D-Vor by 55.8%. Shah et al. [127] proposed
enhancement to their simulation model for postoperative
rejuvenation image prediction using ML techniques. In this
study, a deep neural network model, Rejuv3DNet, and a
kernel regression-based (KR) model were developed and
demonstrated accuracy rates of 62.5% and 66.7%, respec-
tively. In addition, they produced the initial 3D facial dataset
that includes 3D facial images before and after receiving
treatments. Lin et al. [128] simulated the outcomes of cosmetic laser therapy, which modifies the melanin and hemoglobin components of the skin. In their study, ML algorithms were employed to retouch freckles and adjust skin tone by modeling variations in melanin and hemoglobin levels learned from the training data.
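A minimal sketch of the kind of regressor used in [126, 127], a small neural network with two hidden layers that predicts a filler quantity from landmark-derived features, is given below. The input dimensionality, layer sizes, and synthetic data are assumptions.

```python
# Two-hidden-layer neural network predicting a filler quantity from
# landmark-derived features, in the spirit of [126, 127]. Feature count,
# layer sizes, and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 30))    # e.g. coordinates of 10 landmark injection sites
y = rng.random(300) * 2.0    # filler volume in ml (placeholder target)

mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(mlp.predict(X[:3]))    # predicted volumes for three example faces
```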
While the aforementioned articles focus on simulating the
posttreatment outcomes, other studies in treatment outcome
prediction also investigated the possibility of using AI tech-
niques to quantify the treatment success chances. Akben
[129] employed a decision tree-based fuzzy informative ap-
proach to predict the success of various wart treatments. The
utilization of an automated prediction model has been pro-
posed as a computer-assisted tool for medical professionals.
The variables used in their study consisted of the patient’s
gender, age, duration of time before treatment, quantity and
classification of warts, surface area, and the induration diam-
eter of the initial test. These features were utilized to forecast
the outcome of the treatment as a dichotomous variable,
namely, positive or negative. The findings indicated that the duration between onset and treatment provided the most information gain, followed by age. In a comparative analysis with established classification techniques, including SVM, KNN, Random Forest, and Logistic Regression, the decision tree approach demonstrated the highest accuracy of 94.4%.
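The setup can be sketched as a decision tree fitted to simple clinical features and inspected through its feature importances. The placeholder data and feature names below are assumptions standing in for the original patient records.

```python
# Decision tree predicting a binary treatment response from simple clinical
# features, then ranking the features by importance. Data here are random
# placeholders; the real study used patient records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["sex", "age", "time_before_treatment",
            "wart_count", "wart_type", "area", "induration_diameter"]
X = rng.random((200, len(features)))
y = rng.integers(0, 2, 200)            # 1 = treatment succeeded

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, tree.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: importance = {imp:.3f}")
```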
Erdoğan et al. [130] developed a post-operative evaluation of the FUE hair transplantation procedure. Their algo-
rithm was implemented as part of KEBOT, a comprehensive
device designed for hair transplantation. The KEBOT system
comprised an operational infrared-based depth camera that
was utilized to produce a three-dimensional model of the
user’s head. The acquired data was processed to extract
information. Subsequently, the DL algorithm was employed
to conduct an analysis that commenced with object detection,
followed by hair thickness estimation, and culminated in
metrical analysis. The investigation focused on RetinaNet,
M2Det, YOLO-v4, and EfficientDet during the object detec-
tion stage. The hair thickness was estimated through the uti-
lization of SegNet, UNet, and ERFNet for hair segmentation.
Finally, the surgeon was presented with the post-operative
prediction in order to strategize the surgical procedure. Shi
et al. [131] created SkincareMirror, a system that predicts a user's personalized appearance after using skincare products. SkincareMirror was designed for both male and female users, regardless of their prior knowledge of skincare products. The
study conducted on the cosmetic product website revealed
that users exhibited different behavioral patterns when using
SkincareMirror. Specifically, the results indicated that users
who utilized SkincareMirror tended to click on a greater
number of products, albeit spending comparatively less time
reading through the product descriptions. The results also
indicated that the male cohorts who did not have skincare
knowledge exhibited higher levels of satisfaction with the
system in comparison to the remaining groups.
Some cosmetic treatments, especially plastic surgery, can
completely alter a patient’s appearance. K and Krishnaveni
[132] pointed out that this change may affect the face
recognition and identity identification system. Hence, pair-
ing posttreatment images with pretreatment images is also
an essential task. They compared the performance of two
common feature extraction techniques: Extended Uniform
Circular Local Binary Pattern (EUCLBP) and Scale Invari-
ant Feature Transform (SIFT). The findings indicated that
the optimal outcome was achieved through the combined
utilization of SIFT and EUCLBP, as opposed to individual
models. Bahçeci Şimşek and Şirolu [133] studied the changes
of patients who did upper eyelid blepharoplasty surgery and
compared the result with and without a Müller’s muscle-
conjunctival resection (MMCR). In the experiment, upper
eyelid blepharoplasty surgery patients were divided into two
groups, with and without MMCR. After six months, full-face images were analyzed by measuring the changes from the preoperative images and comparing them between the two groups.
In the cosmetic business, customer satisfaction is a critically important attribute. In this direction, the study conducted by Kim et al. [134] investigated the emotional
responses of customers during the use of cosmetic cream
through the analysis of EEG data. The study was conducted
by assessing the electroencephalogram (EEG) activity of par-
ticipants during the administration of four distinct categories
of topical skincare products. Subsequently, participants were
administered a questionnaire to assess their level of satisfac-
tion with the cream. The proposed features were extracted
from the EEG signal and processed in a CNN-based model
to predict satisfaction. The findings revealed that the stacked
CNN model yielded an accuracy of 75.4%, surpassing all
other selected models for the experiment.
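A minimal one-dimensional CNN of the kind used for such EEG-based satisfaction prediction is sketched below; the channel count, window length, and architecture are assumptions rather than the stacked CNN of [134].

```python
# Minimal 1D-CNN predicting a binary satisfaction label from multichannel
# EEG segments. Channel count, window length, and layers are assumptions.
import torch
import torch.nn as nn

class EEGSatisfactionNet(nn.Module):
    def __init__(self, channels=14, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = EEGSatisfactionNet()
dummy = torch.randn(8, 14, 512)        # 8 EEG segments of 512 samples each
print(model(dummy).shape)              # torch.Size([8, 2])
```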
Reviewed papers in this section demonstrated the applica-
tions of ML models for predicting treatment outcomes, suc-
cess rates, and postoperative changes. These models could be
used to assist patients and dermatologists in making informed
decisions about treatment options and to help dermatologists
select suitable courses of treatment for their patients. Addi-
tionally, research in this field may result in the development
of advanced simulation tools, such as those that can simulate
the outcomes of several cosmetic procedures simultaneously
or those that can take a wider range of patient preferences and
characteristics into account.
V. DISCUSSION
In the discussion section, a summary of the findings from the
reviewed articles is presented. The discussion is structured
into three distinct segments, namely trends, limitations, and opportunities.
The review analysis presented in the previous section
reveals an apparent trend of heightened adoption of ML tech-
niques in the domain of cosmetic dermatology. Historically,
traditional ML models served as the primary means of anal-
ysis. However, modern practices have shifted to utilizing DL
methods due to their various advantages in medical domains
[135]. Numerous studies have conducted a comparative anal-
ysis between conventional ML and DL methodologies. Ana-
lyzing facial images has historically posed a challenge due
to the quality and quantity of input data. However, recent
advancements in DL techniques offer potential solutions
to these issues. Currently, there is a significant research
emphasis on utilizing self-portrait photographs as input, as
evidenced by recent studies [112, 113]. Furthermore, the
exponential growth in the population of smartphone users has
brought automatic self-diagnosis and product recommenda-
tions to the forefront of attention. As a result, the number of AI-powered dermatology applications has continued to rise [136]. Numerous scholarly articles have extensively
utilized ML methodologies as an integral component of their
respective application or website systems [103, 117, 131].
The issue of insufficient data persisted as a constraint in
the adoption of ML techniques for diagnosis and assessment
of the severity of cosmetic dermatological conditions. Due
to ethical concerns, a majority of research studies have been
limited to small and specific datasets. Furthermore, such
concerns also influence the sharing of data for research
purposes. The absence of diverse data may lead to overfitting
of the model and a deficiency in its ability to generalize. For
example, the utilization of color and skin texture as extracted
features to indicate skin health in various models resulted in
a constructed model that exhibited satisfactory performance
solely for the learned data, which was predominantly derived
from limited skin types. Some studies have attempted to mitigate this problem by training on a wider variety of skin types, but the variation remains limited.
Another significant critique regarding the utilization of
ML, particularly DL, approaches in the field of cosmetic der-
matology pertains to their opaque nature, commonly referred
to as ’black boxes.’ Despite achieving satisfactory levels of
accuracy, the model’s incapacity to provide a clear expla-
nation for its decision-making process during the prediction
of diagnoses, evaluations, or treatments could potentially
result in unforeseen consequences during practical applica-
tion [137]. Explainable artificial intelligence (XAI) has the
potential to address this concern and presents a promising
avenue for further investigation within this domain.
VI. CONCLUSION
The utilization of artificial intelligence and machine learning
is a significant factor in numerous functions within the field
of cosmetic dermatology. This systematic literature review
was undertaken to provide a comprehensive summary of
the contemporary research utilizing machine learning in this
domain in accordance with the PRISMA protocol. The 63 pa-
pers that underwent review were categorized into five distinct
groups according to their respective tasks, namely: cosmetic
product development, skin assessment, skin condition di-
agnosis, treatment recommendation, and treatment outcome
prediction. The utilization of machine learning approaches
in the domain of cosmetic dermatology was highlighted,
with a focus on identifying trends, limitations, and future
opportunities. The primary contribution of this article is a
methodical examination of existing recent studies aimed at
the utilization of artificial intelligence (AI) technologies in
cosmetic dermatology. We expect this study to provide an
overview for researchers seeking to explore contribution gaps
in this area as well as medical and IT practitioners looking
to utilize intelligent technologies to address real-world chal-
lenges in the cosmetic industry.
ACKNOWLEDGMENT
This research project is supported by Mahidol University
(Fundamental Fund: fiscal year 2023 by National Science
Research and Innovation Fund (NSRF)).
REFERENCES
[1] S. A. Davis, S. Narahari, S. R. Feldman, W. Huang,
R. O. Pichardo-Geisinger, and A. J. McMichael, “Top
dermatologic conditions in patients of color: an analy-
sis of nationally representative data.” Journal of drugs
in dermatology: JDD, vol. 11, no. 4, pp. 466–473,
2012.
[2] I. G. Ferreira, M. B. Weber, and R. R. Bonamigo,
“History of dermatology: the study of skin diseases
over the centuries,” Anais Brasileiros de Dermatolo-
gia, vol. 96, pp. 332–345, 2021.
[3] S. Tripathi, “Artificial intelligence: A brief review,”
Analyzing future applications of AI, sensors, and
robotics in society, pp. 1–16, 2021.
[4] Y. Mintz and R. Brodie, “Introduction to artificial
intelligence in medicine,” Minimally Invasive Therapy
& Allied Technologies, vol. 28, no. 2, pp. 73–81, 2019.
[5] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,”
nature, vol. 521, no. 7553, pp. 436–444, 2015.
[6] S. Chibani and F.-X. Coudert, “Machine learning ap-
proaches for the prediction of materials properties,”
Apl Materials, vol. 8, no. 8, p. 080701, 2020.
[7] S. Khan, M. Sajjad, T. Hussain, A. Ullah, and A. S.
Imran, “A review on traditional machine learning and
deep learning models for wbcs classification in blood
smear images,” Ieee Access, vol. 9, pp. 10 657–10 673,
2020.
[8] R. Brehar, D.-A. Mitrea, F. Vancea, T. Marita,
S. Nedevschi, M. Lupsor-Platon, M. Rotaru, and R. I.
Badea, “Comparison of deep-learning and conven-
tional machine-learning methods for the automatic
recognition of the hepatocellular carcinoma areas from
ultrasound images,” Sensors, vol. 20, no. 11, p. 3085,
2020.
[9] D. H. Murphree, P. Puri, H. Shamim, S. A. Bezalel,
L. A. Drage, M. Wang, M. R. Pittelkow, R. E. Carter,
M. D. Davis, A. G. Bridges et al., “Deep learning for
dermatologists: Part i. fundamental concepts,” Journal
of the American Academy of Dermatology, vol. 87,
no. 6, pp. 1343–1351, 2022.
[10] P. Puri, N. Comfere, L. A. Drage, H. Shamim, S. A.
Bezalel, M. R. Pittelkow, M. D. Davis, M. Wang, A. R.
Mangold, M. M. Tollefson et al., “Deep learning for
dermatologists: Part ii. current applications,” Journal
of the American Academy of Dermatology, vol. 87,
no. 6, pp. 1352–1360, 2022.
[11] F. Hashimoto, H. Ohba, K. Ote, A. Teramoto, and
H. Tsukada, “Dynamic pet image denoising using
deep convolutional neural networks without prior
training datasets,” IEEE access, vol. 7, pp. 96 594–
96 603, 2019.
[12] M. Jiang, Z. Wang, and Q. Dou, “Harmofl: Harmo-
nizing local and global drifts in federated learning on
heterogeneous medical images,” in Proceedings of the
AAAI Conference on Artificial Intelligence, vol. 36,
no. 1, 2022, pp. 1087–1095.
[13] S. Borade and D. Kalbande, “Survey paper based
critical reviews for cosmetic skin diseases,” in 2021
International Conference on Artificial Intelligence and
Smart Systems (ICAIS). IEEE, 2021, pp. 580–585.
[14] R. Bhardwaj, A. R. Nambiar, and D. Dutta, “A study
of machine learning in healthcare,” in 2017 IEEE 41st
annual computer software and applications conference
(COMPSAC), vol. 2. IEEE, 2017, pp. 236–241.
[15] T.-C. Pham, C.-M. Luong, V.-D. Hoang, and
A. Doucet, “Ai outperformed every dermatologist in
dermoscopic melanoma diagnosis, using an optimized
deep-cnn architecture with custom mini-batch logic
and loss function,” Scientific Reports, vol. 11, no. 1,
p. 17485, 2021.
[16] M. Xiong, J. Pfau, A. T. Young, and M. L. Wei,
“Artificial intelligence in teledermatology,” Current
Dermatology Reports, vol. 8, pp. 85–90, 2019.
[17] A. Coustasse, R. Sarkar, B. Abodunde, B. J. Metzger,
and C. M. Slater, “Use of teledermatology to improve
dermatological access in rural areas,” Telemedicine
and e-Health, vol. 25, no. 11, pp. 1022–1032, 2019.
[18] Y. Goldust, F. Sameem, S. Mearaj, A. Gupta, A. Patil,
and M. Goldust, “Covid-19 and artificial intelligence:
Experts and dermatologists perspective,” Journal of
cosmetic dermatology, vol. 22, no. 1, pp. 11–15, 2023.
[19] D. R. Crowe, M. B. Morgan, S. Somach, and K. Trapp,
Deadly Dermatologic Diseases. Springer, 2016.
[20] A. Wells, S. Patel, J. B. Lee, and K. Motaparthi,
“Artificial intelligence in dermatopathology: Diagno-
sis, education, and research,” Journal of Cutaneous
Pathology, vol. 48, no. 8, pp. 1061–1068, 2021.
[21] S. Zhang, Y. Wang, Q. Zheng, J. Li, J. Huang, and
X. Long, “Artificial intelligence in melanoma: A sys-
tematic review,” Journal of Cosmetic Dermatology,
vol. 21, no. 11, pp. 5993–6004, 2022.
[22] A. Mosquera-Zamudio, L. Launet, Z. Tabatabaei,
R. Parra-Medina, A. Colomer, J. Oliver Moll, C. Mon-
teagudo, E. Janssen, and V. Naranjo, “Deep learning
for skin melanocytic tumors in whole-slide images:
A systematic review,” Cancers, vol. 15, no. 1, p. 42,
2022.
[23] H. K. Jeong, C. Park, R. Henao, and M. Kheterpal,
“Deep learning in dermatology: a systematic review
of current approaches, outcomes and limitations,” JID
Innovations, p. 100150, 2022.
[24] M. Alam, “Teaching cosmetic dermatology without
sacrificing the proper emphasis on medical dermatol-
ogy,” JAMA dermatology, vol. 150, no. 2, pp. 123–
124, 2014.
[25] S. Sood, M. Jafferany, and S. Vinaya Kumar, “De-
pression, psychiatric comorbidities, and psychosocial
implications associated with acne vulgaris,” Journal
of Cosmetic Dermatology, vol. 19, no. 12, pp. 3177–
3182, 2020.
[26] N. Alamdari, K. Tavakolian, M. Alhashim, and
R. Fazel-Rezai, “Detection and classification of acne
lesions in acne patients: A mobile application,” in
2016 IEEE International Conference on Electro Infor-
mation Technology (EIT). IEEE, 2016, pp. 0739–
0743.
[27] Y. Yang, L. Guo, Q. Wu, M. Zhang, R. Zeng, H. Ding,
H. Zheng, J. Xie, Y. Li, Y. Ge et al., “Construction
and evaluation of a deep learning model for assessing
acne vulgaris using clinical images,” Dermatology and
Therapy, vol. 11, no. 4, pp. 1239–1248, 2021.
[28] F. Linming, H. Wei, L. Anqi, C. Yuanyu, X. Heng,
P. Sushmita, L. Yiming, and L. Li, “Comparison of
two skin imaging analysis instruments: The visia®
from canfield vs the antera 3d® cs from miravex,”
Skin Research and Technology, vol. 24, no. 1, pp. 3–8,
2018.
[29] T. Laohakangvalvit, T. Achalakul, and M. Ohkura, “A method to obtain effective attributes
for attractive cosmetic bottles by deep learning,” In-
ternational Journal of Affective Engineering, vol. 19,
no. 1, pp. 37–48, 2020.
[30] S. Ray, M. Abinaya, A. K. Rao, S. K. Shukla, S. Gupta,
and P. Rawat, “Cosmetics suggestion system using
deep learning,” in 2022 2nd International Conference
on Technological Advancements in Computational
Sciences (ICTACS). IEEE, 2022, pp. 680–684.
[31] A. Nightingale, “A guide to systematic literature re-
views,” Surgery (Oxford), vol. 27, no. 9, pp. 381–384,
2009.
[32] M. J. Page, J. E. McKenzie, P. M. Bossuyt, I. Boutron,
T. C. Hoffmann, C. D. Mulrow, L. Shamseer, J. M.
Tetzlaff, E. A. Akl, S. E. Brennan et al., “The prisma
2020 statement: an updated guideline for reporting
systematic reviews,” International journal of surgery,
vol. 88, p. 105906, 2021.
[33] N. R. Haddaway, M. J. Page, C. C. Pritchard, and
L. A. McGuinness, “Prisma2020: An r package and
shiny app for producing prisma 2020-compliant flow
diagrams, with interactivity for optimised digital trans-
parency and open synthesis,” Campbell Systematic
Reviews, vol. 18, no. 2, p. e1230, 2022.
[34] N. K. Chauhan and K. Singh, “A review on conven-
tional machine learning vs deep learning,” in 2018
International conference on computing, power and
communication technologies (GUCON). IEEE, 2018,
pp. 347–352.
[35] P. Wang, E. Fan, and P. Wang, “Comparative analysis
of image classification algorithms based on traditional
machine learning and deep learning,” Pattern Recog-
nition Letters, vol. 141, pp. 61–67, 2021.
[36] G. Hinton, Y. LeCun, and Y. Bengio, “Deep learning,”
Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[37] H. A. Benson, M. S. Roberts, V. R. Leite-Silva,
and K. Walters, Cosmetic formulation: principles and
practice. CRC Press, 2019.
[38] S. Sunkle, D. Jain, K. Saxena, A. Patil, T. Singh,
B. Rai, and V. Kulkarni, “Integrated “generate, make,
and test” for formulated products using knowledge
graphs,” Data Intelligence, vol. 3, no. 3, pp. 340–375,
2021.
[39] X. Zhang, T. Zhou, and K. M. Ng, “Optimization-
based cosmetic formulation: Integration of mechanis-
tic model, surrogate model, and heuristics,” AIChE
Journal, vol. 67, no. 1, p. e17064, 2021.
[40] S.-J. Yeh, J.-F. Lin, and B.-S. Chen, “Multiple-
molecule drug design based on systems biology ap-
proaches and deep neural network to mitigate human
skin aging,” Molecules, vol. 26, no. 11, p. 3178, 2021.
[41] C. Johnson, E. Ahlberg, L. T. Anger, L. Beilke,
R. Benigni, J. Bercu, S. Bobst, D. Bower, A. Brigo,
S. Campbell et al., “Skin sensitization in silico proto-
col,” Regulatory Toxicology and Pharmacology, vol.
116, p. 104688, 2020.
[42] A. B. Raies and V. B. Bajic, “In silico toxicology:
computational methods for the prediction of chemical
toxicity,” Wiley Interdisciplinary Reviews: Computa-
tional Molecular Science, vol. 6, no. 2, pp. 147–172,
2016.
[43] D. Sreedhar, N. Manjula, S. Pise, and V. Ligade, “Ban
of cosmetic testing on animals: A brief overview,” Int.
J. Curr. Res. Rev, vol. 12, p. 113, 2020.
[44] N. Gilmour, P. S. Kern, N. Alépée, F. Boislève,
D. Bury, E. Clouet, M. Hirota, S. Hoffmann, J. Kühnl,
J. F. Lalko et al., “Development of a next genera-
tion risk assessment framework for the evaluation of
skin sensitisation of cosmetic ingredients,” Regulatory
Toxicology and Pharmacology, vol. 116, p. 104721,
2020.
[45] J. W. van der Veen, E. Rorije, R. Emter, A. Natsch,
H. van Loveren, and J. Ezendam, “Evaluating the
performance of integrated approaches for hazard iden-
tification of skin sensitizing chemicals,” Regulatory
Toxicology and Pharmacology, vol. 69, no. 3, pp. 371–
379, 2014.
[46] J. S. Jaworska, A. Natsch, C. Ryan, J. Strickland,
T. Ashikaga, and M. Miyazawa, “Bayesian integrated
testing strategy (its) for skin sensitization potency
assessment: a decision support system for quantita-
tive weight of evidence and adaptive testing strategy,”
Archives of toxicology, vol. 89, pp. 2355–2383, 2015.
[47] T. Luechtefeld, A. Maertens, J. M. McKim, T. Har-
tung, A. Kleensang, and V. Sá-Rocha, “Probabilistic
hazard assessment for skin sensitization potency by
dose–response modeling using feature elimination in-
stead of quantitative structure–activity relationships,”
Journal of Applied Toxicology, vol. 35, no. 11, pp.
1361–1371, 2015.
[48] D. Asturiol, S. Casati, and A. Worth, “Consensus of
classification trees for skin sensitisation hazard predic-
tion,” Toxicology in Vitro, vol. 36, pp. 197–209, 2016.
[49] A. P. Toropova and A. A. Toropov, “Hybrid optimal
descriptors as a tool to predict skin sensitization in
accordance to oecd principles,” Toxicology Letters,
vol. 275, pp. 57–66, 2017.
[50] Q. Zang, M. Paris, D. M. Lehmann, S. Bell, N. Klein-
streuer, D. Allen, J. Matheson, A. Jacobs, W. Casey,
and J. Strickland, “Prediction of skin sensitization
potency using machine learning approaches,” Journal
of Applied Toxicology, vol. 37, no. 7, pp. 792–805,
2017.
[51] J. Strickland, Q. Zang, N. Kleinstreuer, M. Paris,
D. M. Lehmann, N. Choksi, J. Matheson, A. Jacobs,
A. Lowit, D. Allen et al., “Integrated decision strate-
gies for skin sensitization hazard,” Journal of Applied
Toxicology, vol. 36, no. 9, pp. 1150–1162, 2016.
[52] J. Strickland, Q. Zang, M. Paris, D. M. Lehmann,
D. Allen, N. Choksi, J. Matheson, A. Jacobs,
W. Casey, and N. Kleinstreuer, “Multivariate models
for prediction of human skin sensitization hazard,”
Journal of Applied Toxicology, vol. 37, no. 3, pp. 347–
360, 2017.
[53] H. Li, J. Bai, G. Zhong, H. Lin, C. He, R. Dai,
H. Du, and L. Huang, “Improved defined approaches
for predicting skin sensitization hazard and potency in
humans,” ALTEX-Alternatives to animal experimen-
tation, vol. 36, no. 3, pp. 363–372, 2019.
[54] V. M. Alves, E. Muratov, D. Fourches, J. Strickland,
N. Kleinstreuer, C. H. Andrade, and A. Tropsha,
“Predicting chemically-induced skin reactions. part ii:
Qsar models of skin permeability and the relationships
between skin permeability and skin sensitization,”
Toxicology and applied pharmacology, vol. 284, no. 2,
pp. 273–280, 2015.
[55] S. O. Akturk, G. Tugcu, and H. Sipahi, “Development
of a qsar model to predict comedogenic potential of
some cosmetic ingredients,” Computational Toxicol-
ogy, vol. 21, p. 100207, 2022.
[56] N. Sharma, S. Patiyal, A. Dhall, N. L. Devi, and G. P.
Raghava, “Chalpred: A web server for prediction of
allergenicity of chemical compounds,” Computers in
Biology and Medicine, vol. 136, p. 104746, 2021.
[57] A. Wilm, M. Garcia de Lomana, C. Stork, N. Mathai,
S. Hirte, U. Norinder, J. Kühnl, and J. Kirchmair,
“Predicting the skin sensitization potential of small
molecules with machine learning models trained on
biologically meaningful descriptors,” Pharmaceuti-
cals, vol. 14, no. 8, p. 790, 2021.
[58] B. Jeon, M. H. Lim, T. H. Choi, B.-C. Kang, and
S. Kim, “A development of a graph-based ensemble
machine learning model for skin sensitization hazard
and potency assessment,” Journal of Applied Toxicol-
ogy, vol. 42, no. 11, pp. 1832–1842, 2022.
[59] A. Forreryd, U. Norinder, T. Lindberg, and
M. Lindstedt, “Predicting skin sensitizers with
confidence—using conformal prediction to determine
applicability domain of gard,” Toxicology in Vitro,
vol. 48, pp. 179–187, 2018.
[60] A. K. Singh, “Chapter 7 - mechanisms of
nanoparticle toxicity,” in Engineered Nanoparticles,
A. K. Singh, Ed. Boston: Academic Press,
2016, pp. 295–341. [Online]. Available:
https://www.sciencedirect.com/science/article/pii/B9780128014066000078
[61] F. Hu, S. F. Santagostino, D. M. Danilenko, M. Tseng,
J. Brumm, P. Zehnder, and K. C. Wu, “Assessment
of skin toxicity in an in vitro reconstituted human
epidermis model using deep learning,” The American
Journal of Pathology, vol. 192, no. 4, pp. 687–700,
2022.
[62] I. Furxhi, F. Murphy, M. Mullins, and C. A. Poland,
“Machine learning prediction of nanoparticle in vitro
toxicity: A comparative study of classifiers and
ensemble-classifiers using the copeland index,” Tox-
icology letters, vol. 312, pp. 157–166, 2019.
[63] S. Y. Jun and H. S. Shin, “Analysis and prediction of
surface condition of artificial skin based on cnn and
convlstm,” Biotechnology and Bioprocess Engineer-
ing, vol. 26, pp. 369–374, 2021.
[64] E. Chirikhina, A. Chirikhin, S. Dewsbury-Ennis,
F. Bianconi, and P. Xiao, “Skin characterizations by
using contact capacitive imaging and high-resolution
ultrasound imaging with machine learning algo-
rithms,” Applied Sciences, vol. 11, no. 18, p. 8714,
2021.
[65] R. Zegour, A. Belaid, J. Ognard, and D. B. Salem,
“Convolutional neural networks-based method for
skin hydration measurements in high resolution mri,”
Biomedical Signal Processing and Control, vol. 81, p.
104491, 2023.
[66] C. Sreeja, S. S. Sumaya, B. Madhumitha, M. Jayaraj,
S. Francis, and S. Muthukumar, “Odland bodies: A
review,” Journal of Advanced Medical and Dental
Sciences Research, vol. 8, no. 11, pp. 12–15, 2020.
[67] K. Koseki, H. Kawasaki, T. Atsugi, M. Nakanishi,
M. Mizuno, E. Naru, T. Ebihara, M. Amagai, and
E. Kawakami, “Assessment of skin barrier function
using skin images with topological data analysis,” NPJ
systems biology and applications, vol. 6, no. 1, p. 40,
2020.
[68] S. Borade, D. Kalbande, K. Pereira, R. Patel, and
S. Kulkarni, “Deep scattering convolutional network
for cosmetic skin classification,” International Journal
of Engineering Trends and Technology, vol. 70, no. 7,
p. 10–23, 2022.
[69] A. Kothari, D. Shah, T. Soni, and S. Dhage, “Cos-
metic skin type classification using cnn with product
recommendation,” in 2021 12th International Confer-
ence on Computing Communication and Networking
Technologies (ICCCNT). IEEE, 2021, pp. 1–6.
[70] A. Firooz, A. Rajabi-Estarabadi, H. Zartab,
N. Pazhohi, F. Fanian, and L. Janani, “The influence
of gender and age on the thickness and echo-density
of skin,” Skin Research and Technology, vol. 23,
no. 1, pp. 13–20, 2017.
[71] C. L. Huang and A. C. Halpern, “Management of the
patient with melanoma,” Cancer of the Skin, pp. 265–
275, 2005.
[72] S. Vyas, J. Meyerle, and P. Burlina, “Non-invasive
estimation of skin thickness from hyperspectral imag-
ing and validation using echography,” Computers in
biology and medicine, vol. 57, pp. 173–181, 2015.
[73] M. Ko, D. Kim, M. Kim, and K. Kim, “Illumination-
insensitive skin depth estimation from a light-field
camera based on cgans toward haptic palpation,” Elec-
tronics, vol. 7, no. 11, p. 336, 2018.
[74] J. Sánchez-Monedero, A. Sáez, M. Pérez-Ortiz, P. A.
Gutiérrez, and C. Hervás-Martínez, “Classification of
melanoma presence and thickness based on computa-
tional image analysis,” in Hybrid Artificial Intelligent
Systems: 11th International Conference, HAIS 2016,
Seville, Spain, April 18-20, 2016, Proceedings 11.
Springer, 2016, pp. 427–438.
[75] D. Kucharski, P. Kleczek, J. Jaworek-Korjakowska,
G. Dyduch, and M. Gorgon, “Semi-supervised nests
of melanocytes segmentation method using convolu-
tional autoencoders,” Sensors, vol. 20, no. 6, p. 1546,
2020.
[76] I. Abunadi and E. M. Senan, “Deep learning and
machine learning techniques of diagnosis dermoscopy
images for early detection of skin diseases,” Electron-
ics, vol. 10, no. 24, p. 3158, 2021.
[77] F. W. Alsaade, T. H. Aldhyani, and M. H. Al-Adhaileh,
“Developing a recognition system for diagnosing
melanoma skin lesions using artificial intelligence al-
gorithms,” Computational and mathematical methods
in medicine, vol. 2021, pp. 1–20, 2021.
[78] S. Boumaraf, X. Liu, Y. Wan, Z. Zheng, C. Ferkous,
X. Ma, Z. Li, and D. Bardou, “Conventional machine
learning versus deep learning for magnification depen-
dent histopathological breast cancer image classifica-
tion: A comparative study with visual explanation,”
Diagnostics, vol. 11, no. 3, p. 528, 2021.
[79] Y. Huang, J. He, S. Zhang, Y. Tang, B. Wang, D. Jian,
H. Xie, J. Li, F. Chen, and Z. Zhao, “A novel multi-
layer perceptron model for assessing the diagnos-
tic value of non-invasive imaging instruments for
rosacea,” PeerJ, vol. 10, p. e13917, 2022.
[80] A. K. Sameera, V. Samuktha, T. Akash, M. Sabeshnav,
and S. Veni, “Real time detection of the various sign
of ageing using deep learning,” in 2022 International
Conference on Wireless Communications Signal Pro-
cessing and Networking (WiSPNET). IEEE, 2022,
pp. 38–43.
[81] L. Liu, C. Liang, Y. Xue, T. Chen, Y. Chen, Y. Lan,
J. Wen, X. Shao, and J. Chen, “An intelligent diag-
nostic model for melasma based on deep learning and
multimode image input,” Dermatology and Therapy,
vol. 13, no. 2, pp. 569–579, 2023.
[82] A. Goldsberry, C. W. Hanke, and K. E. Hanke, “Visia
system: a possible tool in the cosmetic practice.” Jour-
nal of drugs in dermatology: JDD, vol. 13, no. 11, pp.
1312–1314, 2014.
[83] S. Aditya, S. Sidhu, and M. Kanchana, “Prediction of
alopecia areata using machine learning techniques,” in
2022 IEEE International Conference on Data Science
and Information System (ICDSIS). IEEE, 2022, pp.
1–6.
[84] A. Alagić, S. Alihodžić, N. Alispahić, E. Bečić, A. Smajović, F. Bečić, L. S. Bećirović, L. G. Pokvić, and A. Badnjević, “Application of artificial intelligence in the analysis of the facial skin health condition,” IFAC-PapersOnLine, vol. 55, no. 4, pp. 31–37, 2022.
[85] K. Dubey, V. Srivastava, and D. S. Mehta, “Auto-
mated in vivo identification of fungal infection on
human scalp using optical coherence tomography and
machine learning,” Laser Physics, vol. 28, no. 4, p.
045602, 2018.
[86] P. Jansen, D. O. Baguer, N. Duschner, J. Le’Clerc Ar-
rastia, M. Schmidt, B. Wiepjes, D. Schadendorf,
E. Hadaschik, P. Maass, J. Schaller et al., “Evaluation
of a deep learning approach to differentiate bowen’s
disease and seborrheic keratosis,” Cancers, vol. 14,
no. 14, p. 3518, 2022.
[87] Y. Wang, M. Sun, Y. Duan et al., “Metagenomic
sequencing analysis for acne using machine learning
methods adapted to single or multiple data,” Compu-
tational and Mathematical Methods in Medicine, vol.
2021, 2021.
[88] F. S. Abas, B. Kaffenberger, J. Bikowski, and M. N.
Gurcan, “Acne image analysis: lesion localization and
classification,” in Medical imaging 2016: computer-
aided diagnosis, vol. 9785. SPIE, 2016, pp. 64–72.
[89] Y. Yang, Y. Ge, L. Guo, Q. Wu, L. Peng, E. Zhang,
J. Xie, Y. Li, and T. Lin, “Development and validation
of two artificial intelligence models for diagnosing
benign, pigmented facial skin lesions,” Skin Research
and Technology, vol. 27, no. 1, pp. 74–79, 2021.
[90] A. K. Huong, K. Tay, and X. T. Ngu, “Customized
alexnet models for automatic classification of skin
diseases,” Journal of Engineering Science and Tech-
nology, vol. 16, no. 4, pp. 3312–3324, 2021.
[91] J. A. López-Leyva, E. Guerra-Rosas, and J. Álvarez-
Borrego, “Multi-class diagnosis of skin lesions using
the fourier spectral information of images on additive
color model by artificial neural network,” IEEE Ac-
cess, vol. 9, pp. 35 207–35 216, 2021.
[92] A. Jain, A. C. S. Rao, P. K. Jain, and A. Abraham,
“Multi-type skin diseases classification using op-dnn
based feature extraction approach,” Multimedia Tools
and Applications, pp. 1–26, 2022.
[93] H. Ito, Y. Nakamura, K. Takanari, M. Oishi, K. Mat-
suo, M. Kanbe, T. Uchibori, K. Ebisawa, and
Y. Kamei, “Development of a novel scar screening
system with machine learning,” Plastic and Recon-
structive Surgery, vol. 150, no. 2, pp. 465e–472e,
2022.
[94] S. Borade, D. Kalbande, H. Jakaria, and L. Patil, “An
automated approach to detect & diagnosis the type of
cosmetic skin & its disease using machine learning,” in
2022 IEEE 3rd Global Conference for Advancement
in Technology (GCAT). IEEE, 2022, pp. 1–8.
[95] M. Kim and M. H. Song, “High performing facial
skin problem diagnosis with enhanced mask r-cnn and
super resolution gan,” Applied Sciences, vol. 13, no. 2,
p. 989, 2023.
[96] J.-I. Jeong, D.-S. Park, J.-E. Koo, W.-S. Song, D.-
J. Pae, and H.-J. Choi, “Artificial intelligence (ai)
based system for the diagnosis and classification of
scalp health: Ai-scalpgrader,” Instrumentation Science
& Technology, pp. 1–11, 2022.
[97] N. Hayashi, H. Akamatsu, M. Kawashima, and A. S.
Group, “Establishment of grading criteria for acne
severity,” The Journal of dermatology, vol. 35, no. 5,
pp. 255–260, 2008.
[98] M. H. Yap, N. Batool, C.-C. Ng, M. Rogers, and
K. Walker, “A survey on facial wrinkles detection and
inpainting: datasets, methods, and challenges,” IEEE
Transactions on Emerging Topics in Computational
Intelligence, vol. 5, no. 4, pp. 505–519, 2021.
[99] J. Rew, H. Kim, and E. Hwang, “Hybrid segmentation
scheme for skin features extraction using dermoscopy
images,” Computers, Materials and Continua, vol. 69,
no. 1, pp. 801–817, 2021.
[100] M. I. M. Ismail and A. N. Sung, “Acne lesion and
wrinkle detection using faster r-cnn with resnet-50,” in
AIP Conference Proceedings, vol. 2676, no. 1. AIP
Publishing LLC, 2022, p. 020007.
[101] C. Shih, C.-H. Lin, Y.-C. Weng, J. Y. Jian, and Y.-
C. Lin, “Implementation of weakly supervised vitiligo
treatment evaluation system,” in 2022 IEEE 4th Eura-
sia Conference on Biomedical Engineering, Health-
care and Sustainability (ECBIOS). IEEE, 2022, pp.
44–47.
[102] A. Gallucci, D. Znamenskiy, N. Pezzotti, and
M. Petkovic, “Hair counting with deep learning,” in
2020 International Conference on Biomedical Innova-
tions and Applications (BIA). IEEE, 2020, pp. 5–9.
[103] D. T. Phan, Q. B. Ta, T. C. Huynh, T. H. Vo, C. H.
Nguyen, S. Park, J. Choi, and J. Oh, “A smart led
therapy device with an automatic facial acne vulgaris
diagnosis based on deep learning and internet of things
application,” Computers in Biology and Medicine, vol.
136, p. 104610, 2021.
[104] H. Wen, W. Yu, Y. Wu, J. Zhao, X. Liu, Z. Kuang, and
R. Fan, “Acne detection and severity evaluation with
interpretable convolutional neural network models,”
Technology and Health Care, vol. 30, no. S1, pp. 143–
153, 2022.
[105] L. Maknuna, H. Kim, Y. Lee, Y. Choi, H. Kim, M. Yi,
and H. W. Kang, “Automated structural analysis and
quantitative characterization of scar tissue using ma-
chine learning,” Diagnostics, vol. 12, no. 2, p. 534,
2022.
[106] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi,
“You only look once: Unified, real-time object de-
tection,” in Proceedings of the IEEE conference on
computer vision and pattern recognition, 2016, pp.
779–788.
[107] Y.-H. Liao, P.-C. Chang, C.-C. Wang, and H.-H.
Li, “An optimization-based technology applied for
face skin symptom detection,” in Healthcare, vol. 10,
no. 12. Multidisciplinary Digital Publishing Institute,
2022, p. 2396.
[108] H. Ding, E. Zhang, F. Fang, X. Liu, H. Zheng,
H. Yang, Y. Ge, Y. Yang, and T. Lin, “Automatic
identification of benign pigmented skin lesions from
clinical images using deep convolutional neural net-
work,” BMC Biotechnology, vol. 22, no. 1, p. 28, 2022.
[109] A. Li, R. Fang, and Q. Sun, “Artificial intelligence
for grading in acne vulgaris: current situation and
prospect,” Journal of Cosmetic Dermatology, vol. 21,
no. 2, pp. 865–866, 2022.
[110] J. K. Tan, “Current measures for the evaluation of acne
severity,” Expert Review of Dermatology, vol. 3, no. 5,
pp. 595–603, 2008.
[111] K. Zarchi and G. B. Jemec, “Severity assessment and
outcome measures in acne vulgaris,” Current Derma-
tology Reports, vol. 1, pp. 131–136, 2012.
[112] R. Jiang, I. Kezele, A. Levinshtein, F. Flament,
J. Zhang, E. Elmoznino, J. Ma, H. Ma, J. Coquide,
V. Arcin et al., “A new procedure, free from human
assessment that automatically grades some facial skin
structural signs. comparison with assessments by ex-
perts, using referential atlases of skin ageing,” Inter-
national Journal of Cosmetic Science, vol. 41, no. 1,
pp. 67–78, 2019.
[113] F. Flament, Y. W. Lee, D. H. Lee, T. Passeron,
Y. Zhang, R. Jiang, A. Prunel, S. Dwivedi, C. Kroely,
Y. J. Park et al., “The continuous development of a
complete and objective automatic grading system of
facial signs from selfie pictures: Asian validation study
and application to women of three ethnic origins, dif-
ferently aged,” Skin Research and Technology, vol. 27,
no. 2, pp. 183–190, 2021.
[114] F. Flament, L. Jacquet, C. Ye, D. Amar, D. Kerob,
R. Jiang, Y. Zhang, C. Kroely, C. Delaunay, and
T. Passeron, “Artificial intelligence analysis of over
half a million european and chinese women reveals
striking differences in the facial skin ageing process,”
Journal of the European Academy of Dermatology and
Venereology, vol. 36, no. 7, pp. 1136–1142, 2022.
[115] F. Flament, R. Jiang, J. Houghton, Y. Zhang,
C. Kroely, N. G. Jablonski, A. Jean, J. Clarke, J. Steeg,
C. Sehgal et al., “Accuracy and clinical relevance
of an automated, algorithm-based analysis of facial
signs from selfie images of women in the united
states of various ages, ancestries and phototypes: A
cross-sectional observational study,” Journal of the
European Academy of Dermatology and Venereology,
vol. 37, no. 1, pp. 176–183, 2023.
[116] A. Seck, H. Dee, W. Smith, and B. Tiddeman, “3d sur-
face texture analysis of high-resolution normal fields
for facial skin condition assessment,” Skin Research
and Technology, vol. 26, no. 2, pp. 169–186, 2020.
[117] J. Wang, Y. Luo, Z. Wang, A. H. Hounye, C. Cao,
M. Hou, and J. Zhang, “A cell phone app for fa-
cial acne severity assessment,” Applied Intelligence,
vol. 53, no. 7, pp. 7614–7633, 2023.
[118] D. Wang, X. Chen, Y. Wu, H. Tang, and P. Deng, “Arti-
ficial intelligence for assessing the severity of microtia
via deep convolutional neural networks,” Frontiers in
Surgery, p. 1385, 2022.
[119] Q. Man, L. Zhang, and Y. Cho, “Efficient hair damage
detection using sem images based on convolutional
neural network,” Applied Sciences, vol. 11, no. 16, p.
7333, 2021.
[120] W.-J. Chang, L.-B. Chen, M.-C. Chen, Y.-C. Chiu,
and J.-Y. Lin, “Scalpeye: A deep learning-based scalp
hair inspection and diagnosis system for scalp health,”
IEEE Access, vol. 8, pp. 134 826–134 837, 2020.
[121] S. Tian, W. Yang, J. M. Le Grange, P. Wang,
W. Huang, and Z. Ye, “Smart healthcare: making
medical care more intelligent,” Global Health Journal,
vol. 3, no. 3, pp. 62–65, 2019.
[122] C. S. Pattichis, C. Pitris, J. Liang, and Y. Zhang,
“Guest editorial on the special issue on integrating
informatics and technology for precision medicine,”
IEEE Journal of Biomedical and Health Informatics,
vol. 23, no. 1, pp. 12–13, 2019.
[123] W.-S. Huang, B.-K. Hong, W.-H. Cheng, S.-W. Sun,
and K.-L. Hua, “A cloud-based intelligent skin and
scalp analysis system,” in 2018 IEEE Visual Commu-
nications and Image Processing (VCIP). IEEE, 2018,
pp. 1–5.
[124] X. Liu, C.-H. Chen, M. Karvela, and C. Toumazou,
“A dna-based intelligent expert system for person-
alised skin-health recommendations,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 11, pp.
3276–3284, 2020.
[125] Y. Zhang, J. Zhou, and W. Lin, “Knowledge
graph-based sequence recommendation,” in 2021
8th International Conference on Computational Sci-
ence/Intelligence and Applied Informatics (CSII).
IEEE, 2021, pp. 12–17.
[126] S. A. A. Shah, M. Bennamoun, and M. Molton, “A
fully automatic framework for prediction of 3d facial
rejuvenation,” in 2018 International Conference on
Image and Vision Computing New Zealand (IVCNZ).
IEEE, 2018, pp. 1–6.
[127] S. A. A. Shah, M. Bennamoun, and M. K. Molton,
“Machine learning approaches for prediction of facial
rejuvenation using real and synthetic data,” IEEE Ac-
cess, vol. 7, pp. 23 779–23 787, 2019.
[128] T.-Y. Lin, Y.-T. Tsai, T.-S. Huang, W.-C. Lin, and J.-H.
Chuang, “Exemplar-based freckle retouching and skin
tone adjustment,” Computers & Graphics, vol. 78, pp.
54–63, 2019.
[129] S. B. Akben, “Predicting the success of wart treatment
methods using decision tree based fuzzy informative
images,” Biocybernetics and Biomedical Engineering,
vol. 38, no. 4, pp. 819–827, 2018.
[130] K. Erdoğan, O. Acun, A. Küçükmanisa, R. Duvar, A. Bayramoğlu, and O. Urhan, “Kebot: an artificial intelligence based comprehensive analysis system for fue based hair transplantation,” IEEE Access, vol. 8, pp. 200 461–200 476, 2020.
[131] C. Shi, Z. Jiang, X. Ma, and Q. Luo, “A personal-
ized visual aid for selections of appearance building
products with long-term effects,” in Proceedings of the
2022 CHI Conference on Human Factors in Comput-
ing Systems, 2022, pp. 1–18.
[132] S. S. K and D. S. Krishnaveni, “An analytical approach
for reconstruction of cosmetic surgery images using
euclbp and sift,” Sep 2022. [Online]. Available:
https://doi.org/10.14445/23488379/IJEEE-V9I8P107
[133] ˙
I. Bahçeci ¸Sim¸sek and C. ¸Sirolu, “Analysis of surgical
outcome after upper eyelid surgery by computer vision
algorithm using face and facial landmark detection,”
Graefe’s Archive for Clinical and Experimental Oph-
thalmology, vol. 259, no. 10, pp. 3119–3125, 2021.
[134] J. Kim, D.-U. Hwang, E. J. Son, S. H. Oh, W. Kim,
Y. Kim, and G. Kwon, “Emotion recognition while
applying cosmetic cream using deep learning from eeg
data; cross-subject analysis,” PLoS ONE, vol. 17, no. 11,
p. e0274203, 2022.
[135] X. Chen, X. Wang, K. Zhang, K.-M. Fung, T. C. Thai,
K. Moore, R. S. Mannel, H. Liu, B. Zheng, and Y. Qiu,
“Recent advances and clinical applications of deep
learning in medical image analysis,” Medical Image
Analysis, p. 102444, 2022.
[136] F. Kaliyadan and K. T. Ashique, “Use of mobile appli-
cations in dermatology,” Indian Journal of Dermatol-
ogy, vol. 65, no. 5, p. 371, 2020.
[137] L. J. Caffery, M. Janda, R. Miller, L. M. Abbott,
C. Arnold, T. Caccetta, P. Guitera, S. Shumack,
P. Fernández-Peñas, V. Mar et al., “Informing a po-
sition statement on the use of artificial intelligence
in dermatology in australia,” Australasian Journal of
Dermatology, vol. 64, no. 1, pp. e11–e20, 2023.
PAT VATIWUTIPONG received a Master’s De-
gree in Applied Mathematics from Mahidol Uni-
versity, Thailand, in 2017. He was then awarded an Erasmus Mundus full scholarship to study in a joint program in Europe, through which he received two further Master’s degrees in 2019: a Laurea Magistrale in Mathematical Modelling from the University of L’Aquila, Italy, and a Master Mathématiques from the Université Côte d’Azur, France. From
2019 to 2020, he worked as an assistant lecturer
at the Department of Mathematics, Faculty of Science, Mahidol University.
His research interests include mathematical modeling, time series analysis,
and computational mathematics applied in natural science.
SIRAWICH VACHMANUS received the M.E.
and Ph.D. degrees from Hokkaido University,
Japan, in 2019 and 2022, respectively. He is cur-
rently a lecturer at the Faculty of Information and
Communication Technology, Mahidol University,
Thailand. His research interests include computer
vision, deep learning, machine learning, sensor
fusion, and artificial intelligence for robots.
THANAPON NORASET is a faculty member at
the Faculty of Information and Communication
Technology, Mahidol University, Thailand. He re-
ceived his BSc degree from the same faculty in
2007. He received his Ph.D. degree in Computer
Science from Northwestern University, USA, in
2017. His research interests lie in the fields of natural language processing and machine learning.
SUPPAWONG TUAROB received his Ph.D. in computer science and engineering and his M.S. in industrial engineering, both from the Pennsylvania State University, and his B.S.E. and M.S.E., both in computer science and engineering, from the University of Michigan–Ann Arbor. Currently, he is
an Associate Professor of Computer Science at
the Faculty of Information and Communication
Technology, Mahidol University, Thailand. His re-
search involves data mining in large-scale schol-
arly, social media, and healthcare domains, as well as applications of
intelligent technologies for social good.