Human Face Shape Classification with Machine Learning
Mehta, Ashwinee
mehtaa20
Mahmoud, Taha Mohamed
mahmoudt20
Abstract
Face shape classification is useful for recommending hairstyles, eyeglasses, etc. It also plays an important role in finding similar faces for cosmetic or dental reconstruction treatments and procedures. In this paper we propose a new method for face shape classification. We used photos freely available online, automatically annotated them all with facial landmarks using machine learning, and calculated key distances, ratios, and angles to uniquely classify faces into five geometric shapes: heart, oblong, oval, round, and square. The results show that the proposed method, based on calculations from facial landmarks, classifies face shapes effectively.
1 Introduction
Face shape classification is an essential tool for ex-
ploiting facial image data and has a wide range of
applications that include grouping a collection of
face images and indexing for face retrieval. Face
shape identification has a great impact on secu-
rity and convenience, as it is currently being used
in multiple applications to make the world safer,
smarter, and more convenient. Some applications include smarter advertising, such as recommending hairstyles or sunglasses that suit a customer's face shape; diagnosing diseases that manifest as changes in face shape; and directing criminal investigations toward suspects witnessed with a specific face shape while excluding others.
The original goal of this project was to develop a framework to impute the deformed parts of a given deformed face image and to find a similar non-deformed face for dental/facial reconstruction procedures. This goal
can be divided into three main tasks, the first is
to construct a dataset that contains a set of im-
ages categorized into heart, oblong, oval, round,
and square face shapes. Next, these images will
be used for initializing a clustering algorithm that
will cluster a given deformed face image into one
of the five main clusters based on the available
face landmarks while ignoring landmarks in the
deformed face area. Finally, we will use some im-
putation techniques to estimate the deformed area
for a given deformed face image using similar not-
deformed images from the same cluster assigned
to this image. This paper focuses on the first task
of classifying faces into heart, oblong, oval, round,
and square shapes.
There are different methods that have been pro-
posed and developed for classifying facial image
data into different shapes. Some treated this problem as a supervised classification problem, while others treated it as an unsupervised clustering problem. Integrated Multi-Model Face Shape and Eye Attribute Identification for Hair Style and Eyelashes Recommendation [10], 3D Face data and SVM [7], and Face shape classification using Inception v3 [11] take the supervised route, while A Face Clustering Method Based on Facial Shape [8], Automatic clustering of faces in meetings [13], and Dual threshold-based unsupervised face image clustering [3] treat it as unsupervised clustering.
The main contribution of the proposed method is that it predicts the face shape using a minimal number of landmarks at minimal computational cost, achieved by using only one classifier. The proposed method uses a novel criterion to identify the face shape based on distances, ratios, and angles, rather than the approach common in related work of manually selecting and labelling landmarks.
2 Related Work
A multi-model combining face shape, eye feature, and gender identification models has been used to recommend hairstyles and eyelashes [10]. A supervised classification model based on a support vector machine (SVM), together with histogram of oriented gradients (HOG) features, is used to detect and crop the face area. An Inception V3 CNN then learns key features from the cropped image, along with 68 manually defined landmarks, which a face classifier uses to identify the face shape. The whole dataset is labeled manually by domain experts. A very interesting method is used to identify eye features, based on ratios extracted from geometric eye measurements such as position and shape; this method could also be applied to face shape clustering. The main disadvantage of the face shape identification model introduced by [10] is its reliance on a supervised classification model, which requires a labeled dataset, produced manually by domain experts in this case. This is inefficient and requires a large amount of computational power to obtain a well-trained model before it can classify new facial images. On the other hand, the main advantage of this model is that its accuracy can be measured, and a confusion matrix can be used to improve accuracy for the face shapes on which the model is confused.
A support vector machine (SVM) model can be used to classify a projected frontal face plane extracted from 3D face data [7]. The model addresses the challenge of classifying face shapes from 3D data; earlier face shape clustering and classification models were all built for frontal face images. The algorithm first extracts a frontal face plane from the 3D data, then fits a frame around the extracted plane and computes an error by measuring the distance between the face contour and the frame over 360 degrees. The SVM model takes these error distances as input and predicts the face shape class. As with any supervised model, the main challenge is that labeled data is needed to train the SVM before it can classify face shapes correctly. As the authors point out, the accuracy of this model is not good enough. Nevertheless, the presented model works with 3D faces, which is a very challenging problem.
Zhang et al. [8] developed a framework to classify human faces into 7 shape classes. The model uses the Active Shape Model (ASM) [9] to extract the 68 most important feature points using Principal Component Analysis (PCA). These feature points include information on the face contour, eyes, nose, and mouth. The face contour is the main input for predicting face shape, while the remaining features are used for further sub-classification within each shape class. Then the ISODATA (Iterative Self-Organizing Data Analysis Technique) procedure of Tou and Gonzalez [12], a modified version of the k-means clustering algorithm, uses the Hausdorff distance [4] to measure the similarity of two facial shape point sets. The major drawback of this approach is that the clustering algorithm groups similar faces based on features extracted by the ASM model without assigning a face shape to each cluster. The initialization step in the proposed ISODATA model is based on randomly selected centroids, which requires many iterations to converge. Finally, not enough testing results are presented to show how this model behaves in different scenarios.
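The Hausdorff distance that this ISODATA variant uses to compare two facial shape point sets can be sketched in a few lines. This is a minimal NumPy version for illustration; the referenced framework's exact formulation may differ.

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance: the worst-case distance from a
    point in set `a` to its nearest neighbour in set `b`."""
    # Pairwise Euclidean distances between the two (n, 2) point sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two landmark point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Two faces whose landmark sets are close under this metric are geometrically similar, which is what the clustering step needs.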
Faces in long meetings can be automatically
clustered. This paper [13] has conveniently solved
the problem of clustering different faces in the
meetings so that one can know which and how
many people are attending the meeting. Two main
novelties have been proposed that are robust to
different facial expressions, positions/postures and
face illumination. The first is an adaptive subspace
tracker and the second is a robust clustering algo-
rithm. The clustering algorithm is robust, handles
outliers and rotation (up to 30 degrees), gives an
accurate estimate of clusters, enhances the perfor-
mance by imposing timing constraints and is more
accurate than the Normalized Cuts algorithm for
clustering. The weakness of this paper is that the approach is not real-time, as the clustering algorithm is iterative and needs processing time. Also, it is not specified whether the algorithm still works if objects like glasses, hats, or masks are added or removed while the person is in the meeting.
Thresholds can be used in unsupervised face
image clustering and the same has been proposed
in this paper [3]. Deciding the initial and correct number of clusters can be a tedious process, and if the number of clusters defined exceeds the actual number, it degrades the clustering algorithm's performance. This paper focuses on clustering face images without having to pre-define the number of clusters. The authors propose a dual-threshold-based method that does not require the number of clusters to be defined in advance. The method first calculates the Euclidean distances between all images in the dataset and then the Euclidean distances between clusters for merging them. The merging decision is based on thresholds, and they propose a method to decide the proper time to stop clustering. This method performs better than k-means and HAC. The major drawback of this paper is that it does not specify how to calculate the thresholds or what the impact of wrong thresholds is. The paper also does not address handling outliers during clustering.
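As a rough illustration of how threshold-driven merging can replace a fixed cluster count, the sketch below merges clusters greedily until no pair of centroids is closer than a single merge threshold. This is a deliberate simplification: the cited dual-threshold method uses two thresholds and its own stopping rule, and the `t_merge` parameter here is illustrative.

```python
import numpy as np

def threshold_cluster(points, t_merge):
    """Schematic threshold-driven agglomerative clustering: repeatedly
    merge the two closest clusters (by centroid distance) until no pair
    is closer than t_merge. The cluster count is an outcome, not an
    input, which is the key property of threshold-based clustering."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > 1:
        # Centroid of each current cluster.
        cents = np.array([points[c].mean(axis=0) for c in clusters])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(d.argmin(), d.shape)
        if d[i, j] >= t_merge:  # stopping rule: nothing close enough
            break
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```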
Face shape classification has many applications
like medical diagnosis, detecting criminals, face
recognition, etc. This paper [1] focuses on classifying face shapes into circle, ellipse, triangle, square, etc. To perform face shape classification, the authors propose three novel techniques: region similarity, correlation, and fractal dimension. They tested these techniques on three different datasets and found that the region-similarity-based method gives better results than the other two. It is also observed that most faces are of the ellipse shape. One drawback of this paper is that it uses only frontal profile images for classification. Also, these methods involve many calculations before classifying the face shape. The paper does not consider the case where the face is deformed; in that case the shape of the face is no longer its original one, and there is a need to reconstruct the deformed face before classifying its actual shape.
The proposed model is more efficient than the other models discussed, as it does not require extensive calculations or high compute power. Face distances and ratios are computed only once per facial image and pre-defined shape, and landmarks are likewise extracted only once per facial image. The distances and ratios for pre-defined shapes are stored along with the library of shapes that the proposed model supports for identification.
3 Proposed Method
There are three main tasks that need to be addressed in order to find the best matching non-deformed face shape for a given deformed face before applying imputation techniques. The three tasks are as follows:
1. Face shape classification (one-time preparation): The main objective of this task is to produce a high-quality dataset that contains good examples of each face shape category. This task needs to be done only once, and its output can be used by any future work addressing a similar problem. These images will be used as input to the second task.
2. Cluster similar shaped images: This task focuses on using a clustering algorithm to assign a deformed face to the closest face shape cluster based on the available facial landmarks of the upper two-thirds of the face. The clustering algorithm is initialized with the images generated in the first task to create five main clusters, where each cluster contains the 25 most accurate images of the same face shape.
3. Landmarks imputation: The last phase uses imputation techniques to estimate the missing landmarks of the deformed face from non-deformed images in the same cluster, which have a similar face shape based on the landmarks of the upper two-thirds of the face.
This paper focuses only on the first task, face shape classification; the remaining two will be addressed in future work. The proposed approach classifies face shape using distances, ratios, and angles that describe the geometric shape of the face. Face shape classification was done by
first finding a good dataset. Dlib’s 68-point fa-
cial landmark detector was used to annotate fa-
cial landmarks on facial images from the dataset.
Principal Component Analysis (PCA) was used to identify the most important features affecting the classification among the calculated distances, ratios, and angles. Different classification models were then trained and fine-tuned with different hyperparameters, using 5-fold cross-validation for each run and only the most important features identified by the PCA step. The proposed workflow is explained in figure 5. The major deliverables of this phase include:
• Finding the most important features i.e., dis-
tances, ratios and angles for predicting face
shape.
• Finding the best algorithm with the most im-
portant features for classifying front face im-
ages into one of five main clusters i.e., oval,
round, square, oblong and heart.
• Finding the 25 most accurate images of each face shape to be used for clustering.
3.1 Dataset
The dataset used in this paper is the freely available Face Shape Dataset [5]. It comprises a total of 5000 images of global female celebrities, categorized into heart, oblong, oval, round, and square face shapes. Each category consists of 1000 images: 800 for training and 200 for testing. The dataset contains images taken from different angles and positions, with varying resolutions. The dataset does not describe the method used to classify the images into categories. Some images in the dataset were not straight frontal profiles, so the landmarks were not accurately placed on them, as shown in figure 2. Such images with inaccurately placed landmarks were eliminated, and a subset of 124 images was used. Figure 1 shows a summary of the number of images per class.
Figure 1: A bar chart describing the number of images
of each face shape category used for training the model.
3.2 Workflow
The proposed workflow has the following steps:
1. Annotation: The first step was to annotate the images with facial landmarks. Dlib's 68-point facial landmark detector [6], the most popular facial landmark detector, was evaluated first. It finds 68 facial landmark points covering the chin and jawline, eyebrows, nose, eyes, and lips, but it provides no landmarks for the forehead. Therefore, an extended version of this detector [2], which adds 13 landmarks delineating the forehead, was used. Not all of the generated landmarks are needed for the required measurements: of the 81 landmarks, only those numbered 2, 14, 69, 75, 79, 72, 8, 12, 4, 6, 10, 7, and 9 were used.
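Once the predictor has produced the 81 points (e.g. as a list of (x, y) tuples read off a dlib shape object), keeping only the required subset is straightforward. A minimal sketch follows; note that the numbering mirrors the paper's landmark numbers, and whether the predictor's output is 0- or 1-indexed relative to them is an assumption here.

```python
# Landmark numbers used by the method, out of the 81 produced by the
# extended predictor (numbering as given in the paper).
USED_LANDMARKS = [2, 14, 69, 75, 79, 72, 8, 12, 4, 6, 10, 7, 9]

def select_landmarks(points):
    """Keep only the landmarks needed for the distance/ratio/angle
    features. `points` is the full 81-element list of (x, y) tuples."""
    if len(points) != 81:
        raise ValueError("expected 81 landmarks, got %d" % len(points))
    return {i: points[i] for i in USED_LANDMARKS}
```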
2. Assess the annotation: The second step was to visually inspect the placement of the landmarks on all images in the dataset. During this manual inspection it was noticed that the predictor placed the landmarks correctly in images with eyeglasses, hair on the face, bald heads, hats, and varied background colors and patterns. However, for images without a straight, frontal facial view, the 81-point shape predictor misplaced some of the landmarks.
3. Select images: To obtain correct and unbiased measurements of the different facial distances and ratios, the third step selected the 124 images from the dataset on which the required landmarks had been placed correctly by the automatic predictor.
4. Calculate distances, ratios and angles:
These selected images were used in the fourth
step to calculate the seven facial distances, three
angles, and ten ratios. The initial distance measurements were taken in pixels between each pair of landmarks. Since the images in the dataset have different resolutions, all seven distance values were normalized using equation 1:

d'_i = d_i / (sum over j = 1..7 of d_j)    (1)

where d'_i is the normalized distance of d_i, and the d_j are the seven distances.
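Equation 1 amounts to dividing each distance by the sum of all seven, which can be implemented directly. A minimal sketch:

```python
import numpy as np

def normalize_distances(d):
    """Normalize the seven pixel distances by their sum (equation 1),
    so faces from images of different resolutions become comparable."""
    d = np.asarray(d, dtype=float)
    return d / d.sum()
```

The normalized values always sum to one, so only the relative proportions of the face survive, not its absolute pixel size.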
The different distances used for calculation are
defined as follows:
• Landmarks numbered 2 and 14 were used for getting the distance D1 between the left and right ear.

Figure 2: Incorrect placement of landmarks on some images from the selected dataset due to shift in face positions.
• Landmarks numbered 75 and 79 were used
for getting the distance D2 between the left
and right forehead.
• Landmarks numbered 69 and 72 for the left-
and right-forehead and 8 for the chin. These
landmarks were used for getting the distance
D3 from the hairline to the chin.
• Landmarks numbered 8 and 12 were used for
getting the distance D4 of the jawline.
• Landmarks numbered 4 and 12 were used for
getting the distance D5.
• Landmarks numbered 6 and 10 were used for
getting the distance D6.
• Landmarks numbered 7 and 9 were used for
getting the distance D7.
The different ratios used for calculation are de-
fined as follows:
• Ratio 1 (R1) = D2 / D1.
• Ratio 2 (R2) = D1 / D3.
• Ratio 3 (R3) = D2 / D3.
• Ratio 4 (R4) = D1 / D5.
• Ratio 5 (R5) = D6 / D5.
• Ratio 6 (R6) = D4 / D6.
• Ratio 7 (R7) = D6 / D1.
• Ratio 8 (R8) = D5 / D2.
• Ratio 9 (R9) = D4 / D5.
• Ratio 10 (R10) = D7 / D6.
The different angles used for calculation are de-
fined as follows:
• Angle 1 between the line from hairline to
chin and line from chin to landmark num-
bered 10.
• Angle 2 between the line from hairline to
chin and line from chin to landmark num-
bered 12.
• Angle 3 between the line from landmark
numbered 2 to landmark numbered 14 and
line from landmark numbered 14 to landmark
numbered 12.
Figure 3 shows the selected landmarks from all the 81 landmarks that were used for calculating the 7 distances and 3 angles.
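The measurement step can be sketched end to end from the selected landmark coordinates. Two assumptions are made explicit below: the "hairline" point for D3 is taken as the midpoint of landmarks 69 and 72 (the paper lists both as defining D3 without stating how they combine), and angles are measured at the shared vertex of the two lines.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(p, vertex, q):
    """Angle in degrees at `vertex` between rays vertex->p and vertex->q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def face_features(lm):
    """Distances D1..D7, ratios R1..R10 and angles A1..A3 from the
    selected landmarks; `lm` maps landmark number -> (x, y).
    The hairline point is assumed to be the 69/72 midpoint."""
    hair = ((lm[69][0] + lm[72][0]) / 2, (lm[69][1] + lm[72][1]) / 2)
    D = {1: dist(lm[2], lm[14]), 2: dist(lm[75], lm[79]),
         3: dist(hair, lm[8]),   4: dist(lm[8], lm[12]),
         5: dist(lm[4], lm[12]), 6: dist(lm[6], lm[10]),
         7: dist(lm[7], lm[9])}
    R = {1: D[2] / D[1], 2: D[1] / D[3], 3: D[2] / D[3], 4: D[1] / D[5],
         5: D[6] / D[5], 6: D[4] / D[6], 7: D[6] / D[1], 8: D[5] / D[2],
         9: D[4] / D[5], 10: D[7] / D[6]}
    A = {1: angle(hair, lm[8], lm[10]),   # hairline-chin vs chin-10
         2: angle(hair, lm[8], lm[12]),   # hairline-chin vs chin-12
         3: angle(lm[2], lm[14], lm[12])} # 2-14 vs 14-12
    return D, R, A
```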
5. Feature Selection: To identify the most im-
portant features based on the amount of informa-
tion contained within each feature, different clas-
sifiers were used to predict the face shape. Prin-
cipal Component Analysis (PCA) was applied on
the top three performing classifiers. The average feature importance across the top three classifiers was used to decide on the most important features: out of all the features, the most important were R2, N5, R3, A1, N2, N3, N6, A2, R1, R8, R7, N1, R10, and R4. Figure 4 shows the feature comparison and ranking results for feature selection.
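The averaging-and-ranking step can be sketched independently of any particular library: feed it one importance vector per fitted model (for tree ensembles this could be, e.g., scikit-learn's feature_importances_) and it returns the features sorted by mean importance. The arrays in the usage below are illustrative, not the paper's measured values.

```python
import numpy as np

def average_importance(importances, feature_names):
    """Average per-feature importance across several fitted models and
    rank features by the mean, mirroring the paper's selection step.
    `importances` is a list of 1-D arrays, one per model."""
    mean_imp = np.mean(importances, axis=0)
    order = np.argsort(mean_imp)[::-1]  # descending by mean importance
    return [(feature_names[i], float(mean_imp[i])) for i in order]
```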
Figure 3: Dlib-81 library was used to automatically place facial landmarks on images for calculating distances,
ratios and angles.
Figure 4: Principal Component Analysis was used for selecting the most important features by comparing the features of the top three performing models. The x-axis shows the average importance percentages, and the y-axis shows the feature names.
6. Model Selection: To select a model, many
classifiers were tested and compared after test-
ing for different hyperparameters. The three best
classifiers that were identified are: Random For-
est Classifier (Gini), Gradient Boosted Trees Clas-
sifier, and eXtreme Gradient Boosted Classifier.
The Gradient Boosted Trees Classifier ranked at the top with an accuracy of 70%. The framework is designed to keep iterating, introducing additional distances, angles, and ratios until it reaches 90% accuracy, but due to time limits, 70% accuracy was considered sufficient.
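The classifier comparison can be sketched with scikit-learn. This is a minimal version: the paper also tuned hyperparameters and evaluated an eXtreme Gradient Boosted classifier, which is omitted here because it lives in a separate library.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

def pick_best_model(X, y, models=None):
    """Compare candidate classifiers with 5-fold cross-validation and
    return (best_name, best_mean_accuracy)."""
    if models is None:
        models = {
            "random_forest": RandomForestClassifier(
                criterion="gini", n_estimators=100, random_state=0),
            "gradient_boosting": GradientBoostingClassifier(random_state=0),
        }
    scores = {name: cross_val_score(m, X, y, cv=5).mean()
              for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

X here would hold the selected feature columns (normalized distances, ratios, angles) and y the face shape labels.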
The final output of this workflow step was the set of most important features for classifying face shape, the best classifier model, and the set of correctly predicted images. The face shape classification algorithm is given in Algorithm 1. All of this information will be an input to the next phase (deformed face clustering).

Algorithm 1 Face Shape Classification Model
  landmarks ← Dlib(image)
  features ← {D1..D4}
  Acc ← 0%
  while Acc ≤ 90% do
    features ← features + NewFeatures
    features ← PCA(features)
    for M in {list of classifiers} do
      run model M
      Acc' ← model accuracy
      if Acc ≤ Acc' then
        Acc ← Acc'
        Model ← M
      end if
    end for
  end while
  return Features, Model, Acc
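Algorithm 1's outer loop can also be written as a small driver function, with the feature engineering and classifier comparison abstracted behind an evaluate callback. This is a sketch; the names are illustrative.

```python
def classify_until(target_acc, feature_batches, evaluate):
    """Sketch of Algorithm 1's outer loop: keep adding batches of new
    features (distances/ratios/angles) and re-running the classifier
    comparison until the target accuracy is reached or the batches run
    out. `evaluate(features)` returns (model_name, accuracy)."""
    features, best_model, best_acc = [], None, 0.0
    for batch in feature_batches:
        features += batch
        model, acc = evaluate(features)
        if acc > best_acc:
            best_model, best_acc = model, acc
        if best_acc >= target_acc:
            break
    return features, best_model, best_acc
```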
This part can be enhanced by trying to find addi-
tional supporting face measurements and enhanc-
ing the classifier model accuracy. The final output
of this phase can be used by future papers that need
to address problems related to face shape classifi-
cation by using features (normalized distances, ra-
tios and angles) introduced in this paper.
4 Experimental Results and Discussion
The results of the proposed method show that it is
a good model although it is not the best yet. The
accuracy percentage of related work discussed in
this paper ranges from 70% to around 85%. On the other hand, the proposed algorithm uses the fewest facial landmarks to predict the face shape.

Figure 5: The workflow of the proposed face shape classification system.

The proposed method uses only 13
facial landmarks to calculate a total of 14 features
(5 normalized distances, 7 ratios, and 2 angles).
On the other hand, the "3D Face data and SVM" method uses 91 landmarks (accuracy 73.68%), and "Integrated Multi-Model Face Shape and Eye Attribute Identification for Hair Style and Eyelashes Recommendation" uses 68 landmarks (accuracy 85.6%). The model proposed in this paper also requires the least computation and comparison, since it only needs a good classifier applied to the introduced features. The "Integrated Multi-Model" method uses 4 different models (HOG, SVM, a classifier, and an Inception V3 CNN) to reach 85.6% accuracy, while "3D Face data and SVM" uses only an SVM to predict the output. Inception V3 uses 4000 iterations (epochs) with a learning rate of 0.01 to achieve 84.4%, which requires massive compute power; the "Integrated Multi-Model" method uses Inception V3 as one of its four supporting models and thus inherits that cost. The method proposed in this paper uses only one classifier, the Gradient Boosted Trees Classifier, with tuned parameters and 5-fold cross-validation to achieve 70% accuracy.
The face measurements calculated as part of the introduced method are reusable: the defined distances, ratios, and angles are formulas that other researchers can apply to identify the geometric shape of a face. These measurements can also be generalized to other shape detection problems.
Table 1 shows the confusion matrix for the top-ranked classifier, the Gradient Boosted Trees Classifier. The accuracy of the model is 70%, and the size of the testing dataset is 20 images. The confusion matrix shows that the model performs very well at predicting round and square faces, while it needs to be enhanced to separate oval from oblong faces. Heart-shaped faces are the most difficult face type to predict, even for humans, as their geometric characteristics overlap with other face shapes, especially oval and oblong.
                  Classification
          Heart  Oblong  Oval  Round  Square
Heart      15%     5%     5%     0      0
Oblong      0     10%     5%     0      5%
Oval        0     10%    15%     0      0
Round       0      0      0     15%     0
Square      0      0      0      0     15%

Table 1: Confusion matrix for the Gradient Boosted Trees Classifier, where the numbers inside the matrix are percentages of predicted images out of the total number of images in the testing dataset.
Table 2 shows the accuracy comparison between the method introduced in this paper and the existing face shape classification methods discussed in the related work section.
The experimental results of the proposed model
show that the model can predict round and square
face shapes with decent accuracy. This is an in-
dication that normalized distances, ratios and an-
gles are good features to identify these two face
shapes. The ratio R2, which is the ratio between the face width (the distance from the left to the right ear) and the face length, played a very important role in distinguishing round and square faces from other face shapes, as this ratio should be close to one for round and square faces, and greater than one for oval, oblong, and heart face shapes. Ratios R1 and R10 are very useful for separating round from square faces: R1 should be less than one in round faces and close to one in square faces, while R10, the ratio between the chin line and the mouth line, should be large in square faces. The remaining features worked well for classifying the oval, oblong, and heart face shapes, but they still need to be enhanced with additional features, i.e. distances, ratios, and angles, that focus on characteristics unique to each face shape. This is considered a future work task.
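These heuristics can be collected into an illustrative rule of thumb. The thresholds below are assumptions made for the sketch, not values measured or fitted in this work.

```python
def rough_shape_guess(R1, R2, R10, wide_chin=1.0):
    """Illustrative decision rules from the discussion: R2 (width/length)
    near 1 suggests round or square; R1 then separates them, since round
    faces have a narrower forehead relative to ear width, and a large R10
    (chin line vs mouth line) points to square. All cut-offs here are
    assumed for illustration."""
    if 0.9 <= R2 <= 1.1:           # face about as wide as it is long
        if R1 < 0.9:               # forehead clearly narrower than ears
            return "round"
        if R10 >= wide_chin:       # wide chin line relative to mouth line
            return "square"
        return "round"
    return "oval/oblong/heart"     # needs the remaining features
```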
The next subject in this discussion is related
to the dataset used in this paper. There are not
many labeled face shape image datasets that can
be used for testing and experiments and to cross-
validate the results introduced in this paper. La-
beled datasets used in related work were manually
labeled, and are not publicly available for use. It is more logical to use the same dataset when comparing the performance of different models. The author of the dataset [5] used in this paper did not share any information about the mechanism used to label its images. It has been noticed that some images in this dataset are labeled inaccurately and are not high-resolution, perfect frontal profiles. This was the main reason for creating a small, clean, and accurate dataset for training the model introduced in this paper.
5 Conclusions and Future Work
This paper has used a different method to classify frontal face images into five face shapes: heart, oblong, oval, round, and square. The proposed method used only 13 facial landmarks to calculate 14 features, comprising 5 normalized distances, 7
ratios, and 2 angles, with minimal computation and comparison, by applying one classification model on the introduced features.

Method                                                Accuracy
Inception V3 [11]                                     84.4%
Region Similarity [1]                                 80%
3D face data and SVM [7]                              73.68%
Active Appearance Model (AAM), segmentation, and SVM  72%
Hybrid approach VGG and SVM                           70.3%
Proposed method                                       70%

Table 2: Accuracy comparison between the proposed method and other methods discussed in related work.

The model can be enhanced in the future to classify images with
side-angle faces or deformed faces. For images
with side-angles, we can consider the fact that the
human face is symmetric and try to detect which
side of the face is more visible to calculate the
required ratios and angles to perform an accurate
classification. For images with deformed faces, we
can use imputation techniques for missing facial
data to estimate the right shape for the deformed
face. Another challenge that can be considered for
future work is to develop a model that can do the
right prediction for human face shape with glasses,
masks, or accessories which may act as a distrac-
tor for the classification model. It will be a great
addition if the classification model can detect such
objects and eliminate their effect.
References
[1] Bansode, N. and Sinha, P. (2016). Face shape classifi-
cation based on region similarity, correlation and fractal
dimensions. International Journal of Computer Science
Issues (IJCSI), 13(1):24.
[2] codeniko (2019). 81 facial landmarks shape predictor.
[3] Deng, Q., Luo, Y., and Ge, J. (2010). Dual threshold
based unsupervised face image clustering. In 2010 The
2nd International Conference on Industrial Mechatronics
and Automation, volume 1, pages 436–439. IEEE.
[4] Huttenlocher, D., Klanderman, G., and Rucklidge, W.
(1993). Comparing images using the hausdorff distance.
IEEE Transactions on Pattern Analysis and Machine In-
telligence, 15(9):850–863.
[5] Kaggle (2020). Face shape dataset.
[6] King, D. E. (2009). Dlib-ml: A machine learning toolkit.
Journal of Machine Learning Research, 10:1755–1758.
[7] Sarakon, P., Charoenpong, T., et al. (2014). Face shape classification from 3D human data by using SVM. IEEE.
[8] Zhang, S. C., Fang, B., et al. (2011). A face clustering method based on facial shape information. IEEE.
[9] Cootes, T. F., Taylor, C. J., Cooper, D. H., and Graham, J. (1995). Active shape models - their training and application. Computer Vision and Image Understanding, 61.
[10] Alzahrani, T., Al-Nuaimy, W., and Al-Bander, B. (2021). Integrated multi-model face shape and eye attributes identification for hair style and eyelashes recommendation. Computation.
[11] Tio, A. E. (2019). Face shape classification using incep-
tion v3. ArXiv, abs/1911.07916.
[12] Tou, J. and Gonzales, R. (1974). Pattern recognition
principles. Addison Wesley.
[13] Vallespi, C., De la Torre, F., Veloso, M., and Kanade,
T. (2006). Automatic clustering of faces in meetings.
In 2006 International Conference on Image Processing,
pages 1841–1844. IEEE.